From checker at panix.com Thu Dec 1 23:46:37 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 1 Dec 2005 18:46:37 -0500 (EST)
Subject: [Paleopsych] NYT: Hooked on the Web: Help Is on the Way
Message-ID:

Hooked on the Web: Help Is on the Way
http://www.nytimes.com/2005/12/01/fashion/thursdaystyles/01addict.html

[Checklist appended. I really don't meet that many criteria for addiction, but I have been spending way too much time on the Net and find it difficult to stop. Since Sunday, that is, before this article (Thursday), I have been sending only articles that I have read and added comments on. Several people have told me that my comments are better than the articles themselves. I've also worked up some memes. This is why you've gotten very little from me lately.

[I have three more memes planned for the near future: 1. What it would take for me to give up my three most cherished hypotheses (non-design, importance of gene-culture co-evolution in man right up to the present, inability to nail down our most important concepts). The last is proving very difficult for me to write up. Please send me your own answer to my question. I suggested it to the World Question Center at http://edge.org. 2. Why I am not a Christian (follow-up to Bertrand Russell's essay). This will be easy, since I only need to organize it. 3. A piece on conspiracies (including the by far most talked-about one of the last century, which is nevertheless never called one). The major problem is that I'd like to say that conspiracy thinking is our default Stone Age sociology, since back then the chain of consequences was short enough that what resulted came about through the actions of just a few people. However, I've also read that looking for an active agent is a Western particularity, and again that conspiracies today are viewed as ones without conspirators. I need to reconcile these viewpoints. And I'll talk about how one might judge the plausibility of various conspiracy candidates.

[For those of you on my lists, I *can* still send articles that I don't read, or read but don't comment on. Almost anything that I might send and comment on does go onto my disk space on the UNIX mainframe in Manhattan that houses Panix.com. But I'd rather have you all start working for *me* and help me with my two main projects, namely, "deep culture change" and "persistence of difference."]

------------

By SARAH KERSHAW

REDMOND, Wash.

THE waiting room for Hilarie Cash's practice has the look and feel of many a therapist's office, with soothing classical music, paintings of gentle swans and colorful flowers, and, on the bookshelves, stacks of brochures on how to get help. But along with her patients, Dr. Cash, who runs Internet/Computer Addiction Services here in the city that is home to Microsoft, is a pioneer in a growing niche in mental health care and addiction recovery.

The patients, including Mike, 34, are what Dr. Cash and other mental health professionals call onlineaholics. They even have a diagnosis: Internet addiction disorder. These specialists estimate that 6 percent to 10 percent of the approximately 189 million Internet users in this country have a dependency that can be as destructive as alcoholism and drug addiction, and they are rushing to treat it. Yet some in the field remain skeptical that heavy use of the Internet qualifies as a legitimate addiction, and one academic expert called it a fad illness.
Skeptics argue that even obsessive Internet use does not exact the same toll on health or family life as conventionally recognized addictions. But, mental health professionals who support the diagnosis of Internet addiction say, a majority of obsessive users are online to further addictions to gambling or pornography, or have become much more dependent on those vices because of their prevalence on the Internet.

But other users have a broader dependency and spend hours online each day, surfing the Web, trading stocks, instant messaging or blogging, and a fast-rising number are becoming addicted to Internet video games.

Dr. Cash and other professionals say that people who abuse the Internet are typically struggling with other problems, like depression and anxiety. But, they say, the Internet's omnipresent offer of escape from reality, affordability, accessibility and opportunity for anonymity can also lure otherwise healthy people into an addiction.

Dr. Cash's patient Mike, who was granted anonymity to protect his privacy, was at high risk for an Internet addiction, having battled alcohol and drug abuse and depression. On a list of 15 symptoms of Internet addiction used for diagnosis by Internet/Computer Addiction Services, Mike, who is unemployed and living with his mother, checked off 13, including intense cravings for the computer, lying about how much time he spends online, withdrawing from hobbies and social interactions, back pain and weight gain.

A growing number of therapists and inpatient rehabilitation centers are treating Web addicts with the same approaches, including 12-step programs, used to treat chemical addictions. Because the condition is not recognized in psychiatry as a disorder, insurance companies do not reimburse for treatment. So patients either pay out of pocket, or therapists and treatment centers bill for other afflictions, including the nonspecific impulse control disorder.

There is at least one inpatient program, at Proctor Hospital in Peoria, Ill., which admits patients to recover from obsessive computer use. Experts there said they see the same signs of withdrawal in those patients as in alcoholics or drug addicts, including profuse sweating, severe anxiety and paranoid symptoms.

And the prevalence of other technologies - like BlackBerry wireless e-mail devices, sometimes called CrackBerries because they are considered so addictive; the Treo cellphone-organizer; and text messaging - has created a more generalized technology addiction, said Rick Zehr, the vice president of addiction and behavioral services at Proctor Hospital.

The hospital's treatment program places all its clients together for group therapy and other recovery work, whether the addiction is to cocaine or the computer, Mr. Zehr said. "I can't imagine it's going to go away," he said of technology and Internet addiction. "I can only imagine it's going to continue to become more and more prevalent."

There are family therapy programs for Internet addicts, and interventionists who specialize in confronting computer addicts. Among the programs offered by the Center for Online Addiction in Bradford, Pa., founded in 1994 by Dr. Kimberly S. Young, a leading researcher in Internet addiction, are cyberwidow support groups for the spouses of those having online affairs, treatment for addiction to eBay, and intense behavioral counseling - in person, by telephone and online - to help clients get Web sober.

Another leading expert in the field is Dr. Maressa Hecht Orzack, the director of the Computer Addiction Study Center at McLean Hospital in Belmont, Mass., and an assistant professor at Harvard Medical School. She opened a clinic for Internet addicts at the hospital in 1996, when, she said, "everybody thought I was crazy." Dr. Orzack said she got the idea after she discovered she had become addicted to computer solitaire, procrastinating and losing sleep and time with her family.

When she started the clinic, she saw two patients a week at most. Now she sees dozens and receives five or six calls daily from those seeking treatment elsewhere in the country. More and more of those calls, she said, are coming from people concerned about family members addicted to Internet video games like EverQuest, Doom 3 and World of Warcraft.

Still, there is little hard science available on Internet addiction. "I think using the Internet in certain ways can be quite absorbing, but I don't know that it's any different from an addiction to playing the violin and bowling," said Sara Kiesler, professor of computer science and human-computer interaction at Carnegie Mellon University. "There is absolutely no evidence that spending time online, exchanging e-mail with family and friends, is the least bit harmful. We know that people who are depressed or anxious are likely to go online for escape and that doing so helps them."

It was Professor Kiesler who called Internet addiction a fad illness. In her view, she said, television addiction is worse. She added that she was completing a study of heavy Internet users, which showed the majority had sharply reduced their time on the computer over the course of a year, indicating that even problematic use was self-corrective. She said calling it an addiction "demeans really serious illnesses, which are things like addiction to gambling, where you steal your family's money to pay for your gambling debts, drug addictions, cigarette addictions." She added, "These are physiological addictions."

But Dr. Cash, who began treating Internet addicts 10 years ago, said that Internet addiction was a potentially serious illness. She said she had treated suicidal patients who had lost jobs and whose marriages had been destroyed because of their addictions. She said she was seeing more patients like Mike, who acknowledges struggling with an addiction to online pornography but who also said he was obsessed with logging on to the Internet for other reasons. He said that he became obsessed with using the Internet during the 2000 presidential election and that now he feels anxious if he does not check several Web sites, mostly news and sports sites, daily.

"I'm still wrestling with the idea that it's a problem because I like it so much," Mike said. Three hours straight on the Internet, he said, is a minor dose. The Internet seemed to satisfy "whatever urge crosses my head."

Several counselors and other experts said time spent on the computer was not important in diagnosing an addiction to the Internet. The question, they say, is whether Internet use is causing serious problems, including the loss of a job, marital difficulties, depression, isolation and anxiety, and still the user cannot stop. "The line is drawn with Internet addiction," said Mr. Zehr of Proctor Hospital, "when I'm no longer controlling my Internet use. It's controlling me."

Dr. Cash and other therapists say they are seeing a growing number of teenagers and young adults as patients who grew up spending hours on the computer, playing games and sending instant messages.
These patients appear to have significant developmental problems, including attention deficit disorder and a lack of social skills. A report released during the summer by the Pew Internet and American Life Project found that teenagers did spend an increasing amount of time online: 51 percent of teenage Internet users are online daily, up from 42 percent in 2000. But the report did not find a withering of social skills. Most teenagers "maintain robust networks of friends," it noted.

Some therapists and Internet addiction treatment centers offer online counseling, including at least one 12-step program for video game addicts, which is controversial. Critics say that although it may be a way to catch the attention of someone who needs face-to-face treatment, it is akin to treating an alcoholic in a brewery, mostly because Internet addicts need to break the cycle of living in cyberspace. A crucial difference from treating alcoholics and drug addicts, however, is that total abstinence is usually recommended for recovery from substance abuse, whereas moderate and manageable use is the goal for behavioral addictions.

Sierra Tucson in Arizona, a psychiatric hospital and behavioral health center which treats substance and behavioral addictions, has begun admitting a rising number of Internet addicts, said Gina Ewing, its intake manager. Ms. Ewing said that when such a client left treatment, the center's counselors helped plan ways to reduce time on the computer or asked those who did not need to use the Web for work to step away from the computer entirely.

Ms. Ewing said the Tucson center encouraged its Internet-addicted clients, when they left treatment, to attend open meetings of Alcoholics Anonymous or Narcotics Anonymous, which are not restricted to alcoholics and drug addicts, and simply to listen. Or perhaps, if they find others struggling with the same problem, and if those at the meeting are amenable, they might be able to participate. "It's breaking new ground," Ms. Ewing said. "But an addiction is an addiction."

Danger Signs for Too Much of a Good Thing
http://www.nytimes.com/2005/12/01/fashion/thursdaystyles/01aside.html

FIFTEEN signs of an addiction to using the Internet and computers, according to Internet/Computer Addiction Services in Redmond, Wash., follow:

1. Inability to predict the amount of time spent on the computer.
2. Failed attempts to control personal use for an extended period of time.
3. Having a sense of euphoria while on the computer.
4. Craving more computer time.
5. Neglecting family and friends.
6. Feeling restless, irritable and discontent when not on the computer.
7. Lying to employers and family about computer activity.
8. Problems with school or job performance as a result of time spent on the computer.
9. Feelings of guilt, shame, anxiety or depression as a result of time spent on the computer.
10. Changes in sleep patterns.
11. Health problems like carpal tunnel syndrome, eye strain, weight changes, backaches and chronic sleep deprivation.
12. Denying, rationalizing and minimizing adverse consequences stemming from computer use.
13. Withdrawal from real-life hobbies and social interactions.
14. Obsessing about sexual acting out through the use of the Internet.
15. Creation of enhanced personae to find cyberlove or cybersex.

From checker at panix.com Fri Dec 2 02:54:38 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 1 Dec 2005 21:54:38 -0500 (EST)
Subject: [Paleopsych] NYT: Do Babies Dream?
Message-ID:

Do Babies Dream?
http://www.nytimes.com/2005/11/22/science/22qna.html

Q & A
By C. CLAIBORNE RAY

Q. Do babies dream?

A. "Yes, as far as we can tell," said Dr. Charles P. Pollak, director of the Center for Sleep Medicine at NewYork-Presbyterian/Weill Cornell hospital in New York.

Most dreaming occurs during a type of sleep called REM sleep, for rapid eye movement, which Dr. Pollak explains is "an evolutionarily old type of sleep that occurs at all life stages, including infancy, and even before infancy, in fetal life." There is no question that newborn infants have REM sleep, he said, and the rapid eye movements can be observed as they sleep. The two eyes move together, mostly side to side, but sometimes up and down. It is a well-based inference that babies are dreaming in REM sleep, he said.

As for the content of any dreams, Dr. Pollak said: "That is like asking whether your pet dog or cat is dreaming, because they can't communicate, and you can't ask. We presume that infants dream infantile things, but we don't really know what it is that they dream."

"There is some evidence in adults that the direction of eye movement corresponds in a crude way to the content of the dream," Dr. Pollak continued. "If they are dreaming about walking in a field," he said, "the movement is most likely horizontal. If they dream of looking up at a building or climbing stairs, vertical eye movement is more likely to predominate. We can't go further than that."

Readers are invited to submit questions by mail to Question, Science Times, The New York Times, 229 West 43rd Street, New York, N.Y. 10036-3959, or by e-mail to question at nytimes.com.

From checker at panix.com Fri Dec 2 02:54:51 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 1 Dec 2005 21:54:51 -0500 (EST)
Subject: [Paleopsych] CHE: C.P. Snow: Bridging the Two-Cultures Divide
Message-ID:

C.P. Snow: Bridging the Two-Cultures Divide
The Chronicle of Higher Education, 5.11.25
http://chronicle.com/weekly/v52/i14/14b01001.htm

By DAVID P. BARASH

The year 2005 is the centenary of the birth -- and the 25th anniversary of the death -- of C.P. Snow, British physicist, novelist, and longtime denizen of the "corridors of power" (a phrase he coined). It is also 45 years since the U.S. publication of his best-known work, a highly influential polemic that generated another phrase with a life of its own, and that warrants revisiting today: The Two Cultures.

Actually, the full title was The Two Cultures and the Scientific Revolution, presented by Snow as the prestigious Rede Lecture at the University of Cambridge in 1959 before being published as a brief book shortly thereafter. Since then his basic point has seeped into public consciousness as a metaphor for a kind of dialogue of the deaf. Snow's was perhaps the first -- and almost certainly the most influential -- public lamentation over the extent to which the sciences and the humanities have drifted apart.

Snow concerned himself with "literary intellectuals" on the one hand and physicists on the other, although each can be seen as representing their "cultures" more generally: "Between the two," he wrote, there is "a gulf ... of hostility and dislike, but most of all lack of understanding. They have a curious distorted image of each other. Their attitudes are so different that, even on the level of emotion, they can't find much common ground."
"A good many times," Snow pointed out, in an oft-cited passage, "I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is about the scientific equivalent of: 'Have you read a work of Shakespeare's?'" F.R. Leavis -- reigning don of British literary humanists at the time -- reacted with particular anger and (according to many) unseemly venom, denouncing Snow as a "public relations man" for science. Leavis mocked "the preposterous and menacing absurdity of C.P. Snow's consecrated public standing," scorned his "embarrassing vulgarity of style," his "panoptic pseudo-cogencies," his "complete ignorance" of literature, history, or civilization generally, and of the dehumanizing side of "progress" as science offers it. "It is ridiculous," thundered Leavis, "to credit him with any capacity for serious thinking about the problems on which he offers to advise the world. ... Not only is he not a genius, he is intellectually as undistinguished as it is possible to be." In fact, Charles Percy Snow is not widely (or even narrowly) read as a novelist these days, despite -- or, as critics like Leavis might suggest, because of -- his 11-volume opus, collectively titled Strangers and Brothers, a roman-fleuve written over a period of three decades, depicting the public life of Britain refracted especially through the sensibilities of Snow's semiautobiographic academic/politician, Lewis Eliot. If Waiting for Godot is a two-act play in which nothing happens, twice, in Strangers and Brothers nothing happens, 11 times. The Two Cultures, however, is a different creature altogether: brief, lively, controversial, insightful, albeit perhaps a tad misbegotten. Thus, today's readers will be surprised by Snow's conflation of "literary intellectuals" with backward-looking conservatives, notably right-wing Fascist sympathizers such as Yeats, Wyndham Lewis, and Ezra Pound, and his cheerful, optimistic portrayal of scientists as synonymous with progress and social responsibility. After all, for every D.H. Lawrence and T.S. Eliot there were a dozen luminaries of the literary left, just as for every Leo Szilard, an Edward Teller. Snow himself was an establishment liberal, suitably worried about nuclear war, overpopulation, and the economic disparities between rich and poor countries. He lamented the influence of those who, he feared, were likely to turn their backs on human progress; in turn, Snow may have been na?vely optimistic and even downright simplistic about the potential of science to solve the world's problems. The Two Cultures is generous in criticizing both cultures for their intellectual isolationism, and Snow -- being both novelist and physicist -- was himself criticized for immodestly holding himself forth (albeit implicitly) as the perfect embodiment of what an educated person should be. Indeed, someone once commented about Snow that he was "so well-rounded as to be practically spherical." But Snow's gentle curses do not fall evenhandedly on both houses, which doubtless raised the ire of Leavis and his ilk. 
The "culture of science," Snow announced, "contains a great deal of argument, usually much more rigorous, and almost always at a higher conceptual level, than the literary persons' arguments." Scientists "have the future in their bones" whereas literary intellectuals are "natural Luddites" who "wish the future did not exist." Snow's proposed solution? Broaden the educational system. More significant for our time, however, are not Snow's recommendations, the tendentious reception of his thesis, how he couched it, or even, perhaps, whether he got it right, so much as whether, as widely construed, it currently applies. And whether it matters. Science may be even more prominent in 2005 than it was half a century ago. But just as people can reside at the foot of a mountain without ever climbing it, the fact that science looms conspicuously over modern life does not mean that it has been widely mastered, just as the existence of profound humanistic insights does not guarantee their universal appreciation. Progress in the humanities typically does not threaten science, whereas the more science advances, the more the humanities seem at risk. Yet, paradoxically, scientific achievement only makes humanistic wisdom more important, as technology not only threatens the planet, but even -- in a world of cloning, stem-cell possibilities, genetic engineering, robotics, cyber-human hybrids, xenotransplants -- raises questions about what it is to be human. At the same time, with political ideologues and "faith based" zealots literally seeking to redefine reality to meet their preconceptions, we need the objective, empirical power of science more than ever. Whereas in Snow's day, science was nearly synonymous with physics, the early 21st century has seen a resurgence of biology; rocket science has been eclipsed by genomic science. But the more things change, the more they remain the same: "The more that the results of science are frankly accepted, the more that poetry and eloquence come to be received and studied as what in truth they really are -- the criticism of life by gifted men, alive and active with extraordinary power." Thus spoke Matthew Arnold, in an earlier (1882) Rede Lecture titled "Literature and Science," itself a response to "Darwin's bulldog," T.H. Huxley, who had conspicuously -- and wrongly -- prophesied that science would some day supplant literature. Rather than defending their discipline, many among the literati have mourned its imminent demise. Thus, in his book The Literary Mind: Its Place in an Age of Science, Max Eastman concluded that science was on the verge of answering "every problem that arises," and that literature, therefore, "has no place in such a world." And in 1970 the playwright Eugene Ionesco wondered "if art hasn't reached a dead-end, if indeed in its present form, it hasn't already reached its end. ... For some time now, science has been making enormous progress, whereas the empirical revelations of writers have been making very little. ... Can literature still be considered a means to knowledge?" Balancing Eastman and Ionesco -- humanists pessimistic about the humanities -- Noam Chomsky is a scientist radically distrustful of science: "It is quite possible -- overwhelmingly probable, one might guess -- that we will always learn more about human life and human personality from novels than from scientific psychology." Should we see the two cultures, instead, the way Stephen Jay Gould used to describe science and religion: as "nonoverlapping magisteria"? 
But in fact, they do overlap, most obviously when practitioners of either seek to enlarge their domain into the other. And when this happens, there have inevitably been cries of outrage, reminiscent of the Snow-Leavis squabble. Thus Edward O. Wilson's effort at "consilience" evoked strenuous opposition, mostly from humanists. Reciprocally, more than a few scientists -- Alan Sokal most prominently -- have been outraged by postmodernist efforts to "transgress the boundaries" by "privileging" a kind of polysyllabic verbal hijinks over scientific theory building, empirical validation, and careful thought.

It is bad enough when certain key words are hijacked, as with the literary community's use of "theory" to mean "literary theory." (Rumor has it that there exist some other theories, including gravitational, quantum, number, and evolutionary.) Imagine if scientists were to appropriate "significance" to mean only "statistical significance."

A gulf clearly exists. But is that a problem? Scientists would doubtless be better people if they were culturally literate, and ditto for humanists if they were scientifically informed. Which is worse, the antiscientific nincompoopery of a Tom DeLay, who announced in Congress that the killings at Columbine High School took place "because our school systems teach our children that they are nothing but glorified apes who have evolutionized [sic] out of some primordial mud," or the antihumanist arrogance of a scientific Strangelove, ignorant of, say, the deeper meaning of personhood as explored by Aquinas, Milton, or Whitman? When the cultures are effectively bridged, the results, if not always admirable, are at least likely to be thought provoking: witness the plays of Michael Frayn, or Leon Kass's incorporation of humanistic sensibility into the deliberations of the President's Council on Bioethics.

One can reformulate the "two cultures" problem as a lament about overspecialization, partly captured by the quip that higher education -- especially at the graduate level -- involves learning more and more about less and less until one knows everything about nothing. On the other hand, there is something to be said for specialization insofar as it bespeaks admirable expertise. In medicine, it used to be that "specialists" were rare; not so today, when even general practitioners specialize in "family medicine." And we are almost certainly better off for it. I'd rather have a colonoscopy from a gastroenterologist than from a general practitioner, and would trust a psychiatrist more than a family doctor to prescribe the most suitable antidepressant. At the same time, something is lost when physicians are more comfortable reading MRI's or analyzing arcane lab results than talking with patients.

We might also ask whether scientists are doing a better job of communicating with the public, crossing the Snow bridge and thereby constituting a Third Culture, as John Brockman has claimed. The late Carl Sagan was a master at this art, as are Richard Dawkins, Jared Diamond, Brian Greene, and many others. But there is nothing new in scientists reaching out to hoi polloi; Arthur Eddington and Bertrand Russell weren't slouches, nor was T.H. Huxley, and yet they couldn't prevent Snow's "gap." And it is not obvious that Stephen Hawking's A Brief History of Time bridged the cultures so much as confirmed their mutual incomprehensibility.

Within academe, there is eager lip service to bridge building between humanities and science, but has there been any progress?
We have numerous interdisciplinary degree programs, undergraduate as well as graduate, but are the sciences and humanities any more integrated? The options of "general studies" degrees for undergraduates or "special individual Ph.D. programs," although admirably intended, often end up isolating would-be bridge crossers from traditional departments where their presence might otherwise encourage genuine traffic across disciplinary boundaries. And despite the proliferation of centers and institutes for interdisciplinary study, I suggest that, if anything, academic cultures are less mutually interpenetrating now than in Snow's day, perhaps because the institutionalization of bridge builders serves, ironically, to marginalize them and keep them out of the main academic thoroughfares. Society scarcely benefits from those who achieve renown in Mongolian metaphysics by speaking only Mongolian to the metaphysicians, and only metaphysics to the Mongolians.

It seems that higher education -- like politics -- is more polarized than ever. Anthropology departments, increasingly, are subdivided into cultural or biological, the two often barely on speaking terms. Many biology departments have split into "skin in" (cellular, molecular, biochemical) and "skin out" (ecology, evolution, organismal), increasingly becoming distinct administrative entities to match their intellectual incompatibility. At my institution, the University of Washington, psychology cherishes its place in the natural sciences, with no one pursuing a humanistic, existential, or even Freudian agenda. There are other universities at which, by contrast, "scientific psychology" is condemned as a kind of sin.

Everyone claims to love boundary-busting scholarship, but virtually no one would advise a graduate student or even a faculty member lacking tenure to hitch his or her career to it. There are exceptions -- individuals who are so brave, determined, gifted, foolish or indifferent to professional consequences that they have persevered on one bridge or another. Thanks to them, we have the nascent field of eco-criticism, which links ecology and literature, as well as evolutionary psychology, bioethics, and a growing band of philosophers, neurobiologists, and physicists trying to make sense of consciousness. Many other linkages remain unconsummated, lacking only suitable scholars or maybe -- and here is a heretical notion -- any legitimate basis for them. Geo-poetics, anyone? Or astro-dramaturgy?

Most of us would settle for something less abstruse, broader, more natural, yet probably more difficult: increased old-fashioned intellectual traffic between humanists and scientists, as Snow called for. When he was knighted, C.P. Snow chose for his crest (it's a Brit thing) the motto Aut Inveniam Aut Faciam -- "I will either find a way or make one." As we acknowledge his hundredth birthday, maybe someone will find a way to link his two cultures, or at least make a few high-traffic bridges.

David P. Barash is a professor of psychology at the University of Washington. He is co-author of Madame Bovary's Ovaries: A Darwinian Look at Literature (Delacorte, 2005), which endeavors to bridge two subcultures: evolutionary biology and literary criticism.
From checker at panix.com Fri Dec 2 02:54:58 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 1 Dec 2005 21:54:58 -0500 (EST)
Subject: [Paleopsych] BBC: Horizon: Homeopathy: The Test
Message-ID:

Horizon: Homeopathy: The Test
http://www.bbc.co.uk/cgi-bin/education/betsie/parser.pl

[Transcript appended.]

Will James Randi be out of pocket after this week's Horizon?

First shown: BBC Two, Tuesday 26 November, 9pm

Homeopathy: The Test

Homeopathy: The Test - programme summary

Homeopathy was pioneered over 200 years ago. Practitioners and patients are convinced it has the power to heal. Today, some of the most famous and influential people in the world, including pop stars, politicians, footballers and even Prince Charles, all use homeopathic remedies. Yet according to traditional science, they are wasting their money.

"Unusual claims require unusually good proof" - James Randi

The Challenge

Sceptic James Randi is so convinced that homeopathy will not work, that he has offered $1m to anyone who can provide convincing evidence of its effects. For the first time in the programme's history, Horizon conducts its own scientific experiment, to try and win his money. If they succeed, they will not only be $1m richer - they will also force scientists to rethink some of their fundamental beliefs.

Homeopathy and conventional science

The basic principle of homeopathy is that like cures like: that an ailment can be cured by small quantities of substances which produce the same symptoms. For example, it is believed that onions, which produce streaming, itchy eyes, can be used to relieve the symptoms of hay fever. However, many of the ingredients of homeopathic cures are poisonous if taken in large enough quantities. So homeopaths dilute the substances they are using in water or alcohol. This is where scientists become sceptical - because homeopathic solutions are diluted so many times they are unlikely to contain any of the original ingredients at all. Yet many of the people who take homeopathic medicines are convinced that they work. Has science missed something, or could there be a more conventional explanation?

The Placebo Effect

The placebo effect is a well-documented medical phenomenon. Often, a patient taking pills will feel better, regardless of what the pills contain, simply because they believe the pills will work. Doctors studying the placebo effect have noticed that large pills work better than small pills, and that coloured pills work better than white ones. Could the beneficial effects of homeopathy be entirely due to the placebo effect? If so, then homeopathy ought not to work on babies or animals, who have no knowledge that they are taking a medicine. Yet many people are convinced that it does.

Can science prove that homeopathy works?

In 1988, Jacques Benveniste was studying how allergies affected the body. He focussed on a type of blood cell known as a basophil, which activates when it comes into contact with a substance you're allergic to. As part of his research, Benveniste experimented with very dilute solutions. To his surprise, his research showed that even when the allergic substance was diluted down to homeopathic quantities, it could still trigger a reaction in the basophils. Was this the scientific proof that homeopathic medicines could have a measurable effect on the body?

The memory of water

In an attempt to explain his results, Benveniste suggested a startling new theory. He proposed that water had the power to 'remember' substances that had been dissolved in it.
This startling new idea would force scientists to rethink many fundamental ideas about how liquids behave. Unsurprisingly, the scientific community greeted this idea with scepticism. The then editor of Nature, Sir John Maddox, agreed to publish Benveniste's paper - but on one condition. Benveniste must open his laboratory to a team of independent referees, who would evaluate his techniques.

"Scientists are human beings. Like anyone else, they can fool themselves" - James Randi

Enter James Randi

When Maddox named his team, he took everyone by surprise. Included on the team was a man who was not a professional scientist: magician and paranormal investigator James Randi. Randi and the team watched Benveniste's team repeat the experiment. They went to extraordinary lengths to ensure that none of the scientists involved knew which samples were the homeopathic solutions and which ones were the controls - even taping the sample codes to the ceiling for the duration of the experiment. This time, Benveniste's results were inconclusive, and the scientific community remained unconvinced by Benveniste's memory of water theory.

Homeopathy undergoes more tests

Since the Benveniste case, more scientists have claimed to see measurable effects of homeopathic medicines. In one of the most convincing tests to date, Dr. David Reilly conducted clinical trials on patients suffering from hay fever. Using hundreds of patients, Reilly was able to show a noticeable improvement in patients taking a homeopathic remedy over those in the control group. Tests on different allergies produced similar results. Yet the scientific community called these results into question because they could not explain how the homeopathic medicines could have worked.

Then Professor Madeleine Ennis attended a conference at which a French researcher claimed to be able to show that water had a memory. Ennis was unimpressed - so the researcher challenged her to try the experiment for herself. When she did so, she was astonished to find that her results agreed.

Horizon takes up the challenge

Although many researchers have now offered proof that the effects of homeopathy can be measured, none have yet applied for James Randi's million dollar prize. For the first time in the programme's history, Horizon decided to conduct their own scientific experiment. The programme gathered a team of scientists from among the most respected institutes in the country. The Vice-President of the Royal Society, Professor John Enderby, oversaw the experiment, and James Randi flew in from the United States to watch.

As with Benveniste's original experiment, Randi insisted that strict precautions be taken to ensure that none of the experimenters knew whether they were dealing with homeopathic solutions or with pure water. Two independent scientists performed tests to see whether their samples produced a biological effect. Only when the experiment was over was it revealed which samples were real. To Randi's relief, the experiment was a total failure. The scientists were no better at deciding which samples were homeopathic than pure chance would have been.

Read more questions and answers about homeopathy:
http://www.bbc.co.uk/science/horizon/2002/homeopathyqa.shtml
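[A note on "no better than pure chance would have been": with each sample equally likely to be homeopathic or control, the natural check is a binomial tail probability. A minimal Python sketch follows; the tube count and hit count are invented for illustration, since the summary does not give the actual numbers.

    # How surprising would a given number of correct calls be if the
    # scientists were simply guessing? (Illustrative numbers only.)
    from math import comb

    def p_at_least(n, k, p=0.5):
        """P(X >= k) for X ~ Binomial(n, p): the chance of guessing that well."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n_tubes, n_correct = 40, 23  # hypothetical counts
    print(f"P(>= {n_correct}/{n_tubes} correct by guessing) = "
          f"{p_at_least(n_tubes, n_correct):.3f}")

A tail probability near 0.5 is what guessing looks like; only a value far below the usual significance thresholds would have threatened Randi's million dollars.]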
BBC - Science & Nature - Horizon - Homeopathy: The Test
http://www.bbc.co.uk/science/horizon/2002/homeopathytrans.shtml

BBC Two, Tuesday 26 November, 9pm

Homeopathy: The Test - transcript

NARRATOR (NEIL PEARSON): This week Horizon is doing something completely different. For the first time we are conducting our own experiment. We are testing a form of medicine which could transform the world. Should the results be positive this man will have to give us $1m.

JAMES RANDI (Paranormal Investigator): Do the test, prove that it works and win a million dollars.

NARRATOR: But if the results are negative then millions of people, including some of the most famous and influential in the world, may have been wasting their money. The events that would lead to Horizon's million dollar challenge began with Professor Madeleine Ennis, a scientist who may have found the impossible.

PROF. MADELEINE ENNIS (Queen's University, Belfast): I was incredibly surprised and really had great feelings of disbelief.

NARRATOR: Her work concerns a type of medicine which defies the laws of science.

WALTER STEWART (Research Chemist): If Madeleine Ennis turns out to be right it means that science has missed a huge chunk of something.

NARRATOR: She has reawakened one of the most bitter controversies of recent years.

PROF. BOB PARK (University of Maryland): Madeleine Ennis's experiments cannot be right. I mean it's, they're, they're, preposterous.

MADELEINE ENNIS: I have no explanation for what happened. However, this is science. If we knew the answers to the questions we wouldn't bother doing the experiments.

NARRATOR: It's all about something you can find on every high street in Britain: homeopathy. Homeopathy isn't some wacky, fringe belief. It's over 200 years old and is used by millions of people, including Presidents and pop stars. It's even credited with helping David Beckham get over his foot injury and the Royals have been keen users since the days of Queen Victoria, but it's also a scientific puzzle. What makes it so mysterious is its two guiding principles, formulated in the 18th century. The first principle is that to find a cure you look for a substance that actually causes the symptoms you're suffering from. It's the principle that like cures like.

DR PETER FISHER (Homeopath to The Queen): For instance in colds and hay fever something we often use is allium cepa which is onion and of course we all know the effects of chopping an onion, you know the sore streaming eyes, streaming nose, sneezing and so we would use allium cepa, onion, for a cold with similar sorts of features.

NARRATOR: This theory that like cures like led to thousands of different substances being used, some of them truly bizarre.

DR LIONEL MILGROM (Homeopath): In principle you can make a homeopathic remedy out of absolutely anything that's plant.

PETER FISHER: Deadly nightshade.

LIONEL MILGROM: Animal.

PETER FISHER: Snake venom.

LIONEL MILGROM: Mineral.

PETER FISHER: Calcium carbonate, which is of course chalk.

LIONEL MILGROM: Disease product.

PETER FISHER: Tuberculous gland of a cow.

LIONEL MILGROM: Radiation.

NARRATOR: But then homeopaths found that many of these substances were poisonous, so they started to dilute them.
This led to the extraordinary second principle of homeopathy: the more you dilute a remedy the more effective it becomes, provided it's done in a special way. The method homeopaths use to this day is called serial dilution. A drop of the original substance, whether it's snake venom or sulphuric acid, is added to 99 drops of water or alcohol. Then the mixture is violently shaken. Here it's done by machine, but traditionally homeopaths would hit the tube against a hard surface. Either way, homeopaths believe this is a vital stage. It somehow transfers the healing powers from the original substance into the water itself. The result is a mixture diluted 100 times.

LIONEL MILGROM: That will give you what's called a 1C solution, that's one part in 100. You then take that 1C solution and dissolve it in another 99 parts and now you end up with a 2C solution.

NARRATOR: At 2C the medicine is one part in 10,000, but the homeopaths keep diluting and this is where the conflict with science begins. At 6C the medicine is diluted a million million times. This is equivalent to one drop in 20 swimming pools. Another six dilutions gives you 12C. This is equivalent to one drop in the Atlantic Ocean, but even this is not enough for most homeopathic medicines. The typical dilution is 30C, a truly astronomical level of dilution.

BOB PARK: One drop in all of the oceans on Earth would be much more concentrated than that. I would have to go off the planet to make that kind of dilution.

NARRATOR: But homeopaths believe that a drop of this ultra dilute solution placed onto sugar pills can cure you. That's why homeopathy is so controversial, because science says that makes no sense whatsoever.

BOB PARK: There is a limit to how much we can dilute any substance. We can only dilute it down to the point that we have one molecule left. The next dilution we probably won't even have that one molecule.

WALTER STEWART: It's possible to go back and count how many molecules are present in a homeopathic dose and the astonishing answer is absolutely none. There's less than a chance in a million, less than a chance in a billion that there's a single molecule.

NARRATOR: A molecule is the smallest piece of a substance you can have, so for something to have any effect at all conventional science says you need one molecule of it at the very least.

WALTER STEWART: Science has through many, many different experiments shown that when a drug works it's always through the way the molecule interacts with the body and, so the discovery that there's no molecules means absolutely there's no effect.

NARRATOR: That's why science and homeopathy have been at war for over 100 years. The homeopaths say that their remedies have healing powers. Science says there's nothing but water. Then one scientist claimed the homeopaths were right after all. Jacques Benveniste was one of France's science superstars. He had a string of discoveries to his name and some believed he was on his way to earning a Nobel Prize.

DR JACQUES BENVENISTE (National Institute for Medical Research): I was considered as, well in French we have a word which says Nobel is nobelisable, which means we can have a Nobel Prize, because I started from scratch the whole field of research. I was the head of a very large team, had a lot of money and so I was a very successful person.

NARRATOR: Benveniste was an expert in the field of allergy, in particular he was studying a type of blood cell involved in allergic reactions - the basophil.
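[The dilution arithmetic above is easy to check. A minimal Python sketch, assuming one mole of starting substance (about 6.022e23 molecules) and a 1-in-100 dilution per "C" step; the starting quantity is an assumption for illustration, though the conclusion barely depends on it:

    # Expected molecules of the original substance left after repeated
    # 1:100 ("C") dilutions, starting from an assumed one mole.
    AVOGADRO = 6.022e23  # molecules per mole

    def molecules_left(c_steps, starting_moles=1.0):
        """Expected molecule count after c_steps serial 1:100 dilutions."""
        return starting_moles * AVOGADRO * 100.0 ** -c_steps

    for c in (2, 6, 12, 30):
        print(f"{c}C: concentration factor 10^-{2 * c}, "
              f"expected molecules ~ {molecules_left(c):.2g}")

By 12C the expected count already falls below one molecule, and at 30C it is around 10^-37, which is the sense of Stewart's "less than a chance in a billion that there's a single molecule."]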
When basophils come into contact with something you're sensitive to they become activated, causing the telltale symptoms. Benveniste had developed a test that could tell if a person was allergic to something or not. He added a kind of dye that only turns inactive basophils blue, so by counting the blue cells he could work out whether there had been a reaction, but then something utterly unexpected started to happen.

JACQUES BENVENISTE: A technician told me one day I don't understand because I have diluted a substance that is activating basophils to a point where it shouldn't work and it still works.

NARRATOR: The researcher had taken the chemical and added water, just like homeopaths do. The result should have been a solution so dilute it had absolutely no effect and yet, bizarrely, there was a reaction. The basophils had been activated. Benveniste knew this shouldn't have been possible.

JACQUES BENVENISTE: I remember saying to this, to her, this is water so it cannot work.

NARRATOR: Benveniste's team was baffled. They needed to find out what was going on, so they carried out hundreds of experiments and soon realised that they'd made an extraordinary discovery. It seemed that when a chemical was diluted to homeopathic levels the result was a special kind of water. It didn't behave like ordinary water, it acted like it still contained the original substance. It was as if the water was remembering the chemical it had once contained, so Benveniste called the phenomenon the 'memory of water'. At last here was scientific evidence that homeopathy could work. Benveniste knew this was a radical suggestion, but there was a way to get his results taken seriously. He had to get them published in a scientific journal.

JACQUES BENVENISTE: A result doesn't exist until it is admitted by the scientific community. It's like, like being a good opera singer but singing in your bathroom. That's fine, but it's not Scala, Milan or the Met, Met or the Opera at Paris, what-have-you.

NARRATOR: So he sent his work to the most prestigious journal in the world, a journal which for over 100 years has reported the greatest of scientific discoveries: Nature.

SIR JOHN MADDOX (Nature Editor 1980-1995): Nature is the place that everyone working in science recognises to be a way of getting publicity of the best kind.

NARRATOR: Benveniste's research ended up with one of the most powerful figures in science, the then Editor of Nature, Sir John Maddox. Maddox knew that the memory of water made no scientific sense, but he couldn't just ignore work from such a respected scientist, so he agonised about what to do. Eventually he reached a decision.

SIR JOHN MADDOX: I said OK, we'll publish your paper if you let us come and inspect your lab, and he agreed, to my astonishment.

NARRATOR: So in June 1988 Benveniste's research appeared in the pages of Nature. It caused a scientific sensation. Benveniste became a celebrity. His memory of water made news across the world. He seemed to have found the evidence that made homeopathy scientifically credible, but the story wasn't quite over. Benveniste had agreed to let in a team from Nature. It was a decision he would live to regret. Maddox set about assembling his team of investigators and his choices revealed his true suspicions. First, he chose Walter Stewart, a scientist and fraud-buster, but his next choice would really cause a stir: James Randi.

JACQUES BENVENISTE: I looked in my books and I said who are, who is Randi, and couldn't find any scientist called Randi.
NARRATOR: That was because the amazing Randi isn't a scientist, he's a magician, but he's no ordinary conjuror. He's also an arch sceptic, a fierce opponent of all things supernatural.

JACQUES BENVENISTE: I called John Maddox and I said what, what is this? I mean I thought you were coming with, with scientists to discuss science.

NARRATOR: But Randi felt he was just the man for the job. On one occasion he had fooled even experienced scientists with his spoon bending tricks.

JAMES RANDI: Scientists don't always think rationally and in a direct fashion. They're human beings like anyone else. They can fool themselves.

NARRATOR: So Randi became the second investigator.

JAMES RANDI: Astonishing.

NARRATOR: On 4th July 1988 the investigative team arrived in Paris ready for the final showdown.

SIR JOHN MADDOX: The first thing we did was to sit round the table in Benveniste's lab. Benveniste himself struck us all as looking very much like a film star.

JAMES RANDI: I found him to be a charming, very continental gentleman. He's a great personality. He was very much in control.

JACQUES BENVENISTE: We were quite relaxed because there was no reason why things should not go right.

NARRATOR: The first step was for Benveniste and his team to perform their experiment under Randi's watchful gaze. They had to prepare two sets of tubes containing homeopathic water and ordinary water. If the homeopathic water was having a real effect different from ordinary water then homeopathy would be vindicated.

(ACTUALITY EXPERIMENT CHAT)

As they plotted the results it was clear the experiment had worked.

JAMES RANDI: There were huge peaks coming up out of it and that was very active results, I mean very, very positive results.

WALTER STEWART: The astonishing thing about these results is that they repeated the claim, they demonstrated the claim that a homeopathic dilution, a dilution where there were no molecules, could actually have some sort of an effect.

NARRATOR: But Maddox had seen that the experimenters knew which tubes contained the homeopathic water and which contained the ordinary water, so, perhaps unconsciously, this might have influenced the results, so he asked them to repeat the experiment. This time the tubes would be relabelled with a secret code so that no-one knew which tube was which.

JAMES RANDI: We went into a sealed room and we actually taped newspapers over the windows to the room that were accessible to the hall.

WALTER STEWART: We recorded in handwriting which tube was which and we put this into an envelope and sealed it so that nobody could open it or change it.

NARRATOR: At this point the investigation took a turn for the surreal as they went to extraordinary lengths to keep the code secret.

JAMES RANDI: Walter and I got up on the stepladder and stuck it to the ceiling of the lab.

WALTER STEWART: There it was taped above us as all of this work went on.

JACQUES BENVENISTE: Sticking an envelope to the ceiling was utterly ridiculous. There is no way you can associate that with science.

NARRATOR: With the codes out of reach the final experiment could begin. By now Benveniste had lost control of events.

JACQUES BENVENISTE: It was a madhouse. Randi was doing magician tricks.

JAMES RANDI: Yes I was doing perhaps a little bit of sleight-of-hand with an object or something like that, just to lighten the atmosphere.

NARRATOR: Soon the analysis was complete. It was time to break the code to see if the experiment had worked. Benveniste and his team were brimming with optimism.
JAMES RANDI: Oh my goodness it was party-time, cracked crabs legs and magnums, literally, of champagne packed in ice. WALTER STEWART: We were going to be treated to a wonderful dinner. The press, many members of the press were there. JAMES RANDI: John and Walter and I were looking at one another as if to say wow, if this doesn't work it's going to be a downer. WALTER STEWART: Finally came the actual work of decoding the result. JAMES RANDI: There was much excitement at the table. Everyone was gathered around. NARRATOR: Benveniste felt sure that the results would support homeopathy and that he would be vindicated. JAMES RANDI: That didn't happen. It was just a total failure. SIR JOHN MADDOX: We said well nothing here is there? WALTER STEWART: And immediately the mood in the laboratory switched, people burst into tears. JAMES RANDI: It was general gloom. NARRATOR: The team wrote a report accusing Benveniste of doing bad science and branding the claims for the memory of water a delusion. Benveniste's scientific reputation was ruined. JACQUES BENVENISTE: Everybody believed that I am totally wrong. It's simply dismissed. Your phone call doesn't ring anymore. Just like actresses, or actress that have no, are no more in fashion the phone suddenly is silent. NARRATOR: For now the memory of water was forgotten. Science declared homeopathy impossible once more, but strangely that didn't cause homeopathy to disappear. Instead it grew. Since the Benveniste affair sales of homeopathic medicines have rocketed. Homeopathy has become a trendy lifestyle choice, one of the caring, all natural medicines, more popular in the 21st-century than ever before. Despite the scepticism of science millions of people use it and believe it has helped them, like Marie Smith. Fifteen years ago Marie was diagnosed with a life-threatening blood disorder. MARIE SMITH: I was more concerned for me children. I used to look at them thinking I may, may not be here one day for yous. That was the worst part of it. NARRATOR: She'd tried everything that conventional medicine could offer, including drugs and surgery. Nothing seemed to work. Then she tried homeopathy. She took a remedy made from common salt. MARIE SMITH: It's like somebody putting me in a coffin and taking me back out again. That's just the way I felt and the quality of my life changed completely. NARRATOR: Since then Marie has been healthy and she has no doubt it's homeopathy that's helped her. MARIE SMITH: I know it saved my life and it's made my life a lot different, yeah and I'm just glad I'm enjoying these grandchildren which I never thought I would do. NARRATOR: There are thousands of cases like Marie's and they do present science with a problem. If homeopathy is scientific nonsense then why are so many people apparently being cured by it? The answer may lie in the strange and powerful placebo effect. The placebo effect is one of the most peculiar phenomena in all science. Doctors have long known that some patients can be cured with pills that contain no active ingredient at all, just plain sugar, what they call the placebo, and they've noticed an even great puzzle: that larger placebo pills work better than small ones, coloured pills work better than white pills. The key is simply believing that the pill will help you. This releases the powers in our minds that reduce stress and that alone can improve your health. BOB PARK: Stress hormones make you feel terribly uncomfortable. 
The minute you relieve the anxiety, relieve the stress hormones people do feel better, but that's a true physiological effect. NARRATOR: Scientists believe the mere act of taking a homeopathic remedy can make people feel better and homeopathy has other ways of reducing stress. LIONEL MILGROM: And is there any particular time of day that you will, you'll, you'll have that feeling? PATIENT: No. NARRATOR: A crucial part of homeopathic care is the consultation. LIONEL MILGROM: The stress that you have at work, is that, are those around issues that make you feel quite emotional? PATIENT: No. LIONEL MILGROM: The main thing about a homeopathic interview is that we do spend a lot of time talking and listening to the patient. We would ask questions of how they eat, how they sleep, how much worry and tension there is in their lives, hopefully give them some advice about how to actually ease problems of stress. PATIENT I just feel I want to have something more natural. LIONEL MILGROM: Yeah... NARRATOR: So most scientists believe that when homeopathy works it must be because of the placebo effect. BOB PARK: As far as I know it's the only thing that is really guaranteed to be a perfect placebo. There is no medicine in the medicine at all. NARRATOR: It seems like a perfect explanation, except that homeopathy appears to work when a placebo shouldn't - when the patient doesn't even know they're taking a medicine. All over the country animals are being treated with homeopathic medicines. Pregnant cows are given dilute cuttlefish ink, sheep receive homeopathic silver to treat eye infections, piglets get sulphur to fatten them up. A growing number of vets believe it's the medicine of the future, like Mark Elliot who's used homeopathy his whole career, on all sorts of animals. MARK ELLIOT (Homeopathic Vet): Primarily it's dogs and horses, but we also treat cats, small rodents, rabbits, guinea pigs, even reptiles, but I have treated an elephant with arthritis and I've heard of colleagues recently who treated giraffes. It works on any species exactly the same as in the human field. NARRATOR: Mark made it his mission to prove that homeopathy works. He decided to study horses with cushing's, a disease caused by cancer. He treated them all with the same homeopathic remedy. The results were impressive. MARK ELLIOT: We achieved an overall 80% success rate which is great because that is comparable with, with modern medical drugs. NARRATOR: To Mark this was clear proof that homeopathy can't be the placebo effect. MARK ELLIOT: You can't explain to this animal why the treatment it's being given is going to ben, to benefit it, or how it's potentially going to benefit it and as a result, when you see a positive result in a horse or a dog that to me is the ultimate proof that homeopathy is not placebo, homeopathy works. NARRATOR: But Mark's small trial doesn't convince the sceptics. They need far more evidence before they'll believe that homeopathic medicines are anything more than plain water. JAMES RANDI: I've heard it said that unusual claims require unusually good proof. That's true. For example, if I tell you that at my home in Florida in the United States I have a goat in my garden. You could easily check that out. Yeah, looks like a goat, smells like a goat, so the case is essentially proven, but if I say I have a unicorn, that's a different matter. That's an unusual claim. NARRATOR: To scientists the claim that homeopathic water can cure you is as unlikely as finding a unicorn. JAMES RANDI: Yes, there is a unicorn. 
That is called homeopathy. NARRATOR: Homeopathy needed the very highest standards of proof. In science the best evidence there can be is a rigorous trial comparing a medicine against a placebo and in recent years such trials have been done with homeopathy. David Reilly is a conventionally trained doctor who became intrigued by the claims of the homeopaths. He wanted to put homeopathy to the test and decided to look at hay fever. Both homeopathy and conventional medicine use pollen as a treatment for hay fever. What's different about homeopathy is the dilution. DR DAVID REILLY (Glasgow Homeopathic Hospital): The single controversial element is that preparing this pollen by the homeopathic method takes it to a point that there's not a single molecule of the starting material present. I confidently assumed that these diluted medicines were placebos. NARRATOR: David Reilly recruited 35 patients with hay fever. Half of them were given a homeopathic medicine made from pollen, half were given placebo, just sugar pills. No one knew which was which. For four weeks they filled in a diary measuring how bad their symptoms were. The question was: would there be a difference? DAVID REILLY: To our collective shock a result came out that was very clear: those on the active medication had a substantially greater reduction in symptoms than those receiving the placebo medicine. According to that data the medicine worked. NARRATOR: But to be absolutely rigorous Reilly decided to repeat the study and he got the same result. Then he went further and tested a different type of allergy. Again the result was positive, but despite all these studies, most scientists refuse to believe his research. DAVID REILLY: It became obvious that in certain minds 100 studies, 200 studies would not change the mental framework and so I'm sceptical that if 200 haven't changed it I don't think 400 would change it. NARRATOR: The reason Reilly's research was dismissed was that his conclusion had no scientific explanation. Sceptics pointed to the glaring problem: there was still no evidence as to how something that was pure water could actually work. BOB PARK: If you design a medication to take advantage of what we know about physiology we're not surprised when it works. When, when you come up with no explanation at all for how it could work and then claim it works we're not likely to take it seriously. NARRATOR: To convince science, homeopathy had to find a mechanism, something that could explain how homeopathic water could cure you. That meant proving that water really does have a memory. Then a scientist appeared to find that proof. Madeleine Ennis has never had much time for homeopathy. As a professor of pharmacology she knows it's scientifically impossible. MADELEINE ENNIS: I'm a completely conventional scientist. I have had no experience of using non-conventional medications and have no intention really of starting to use them. NARRATOR: But at a conference Ennis heard a French scientist present some puzzling results, results that seemed to show that water has a memory. MADELEINE ENNIS: Many of us were incredibly sceptical about the findings. We told him that something must have gone wrong in the experiments and that we didn't believe what he had presented. NARRATOR: He replied with a challenge. MADELEINE ENNIS: I was asked whether, if I really believed my viewpoint, would I test the hypothesis that the data were wrong? 
NARRATOR: Ennis knew that the memory of water breaks the laws of science, but she believed that a scientist should always be willing to investigate new ideas, so the sceptical Ennis ended up testing the central claim of homeopathy. She performed an experiment almost identical to Benveniste's using the same kind of blood cell. Then she added a chemical, histamine, which had been diluted down to homeopathic levels. The crucial question: would it have any effect on the cells? To find out she had to count the cells one by one to see whether they had been affected by the homeopathic water. The results were mystifying. The homeopathic water couldn't have had a single molecule of histamine, yet it still had an effect on the cells. MADELEINE ENNIS: They certainly weren't the results that I wanted to see and they definitely weren't the results that I would have liked to have seen. NARRATOR: Ennis wondered whether counting by hand had introduced an error, so she repeated the experiment using an automated system to count the cells, and astonishingly, the result was still positive. MADELEINE ENNIS: I was incredibly surprised and really had great feelings of disbelief, but I know how the experiments were performed and I couldn't see an error in what we had done. NARRATOR: These results seemed to prove that water does have a memory after all. It's exactly what the homeopaths have been hoping for. PETER FISHER: If these results become generally accepted it will revolutionise the view of homeopathy. Homeopathy will suddenly become this idea that was perhaps born before its time. LIONEL MILGROM: It's particularly exciting because it does seem to suggest that Benveniste was correct. NARRATOR: At last here is evidence from a highly respected researcher that homeopathic water has a real biological effect. The claims of homeopathy might be true after all. However, the arch sceptic Randi is unimpressed. JAMES RANDI: There are so many ways that errors or purposeful interference can take place. NARRATOR: As part of his campaign to test bizarre claims Randi has decided to put his money where his mouth is. On his website is a public promise: to anyone who can prove the scientifically impossible, Randi will pay $1m. JAMES RANDI: This is not a cheap theatrical stunt. It's theatrical, yes, but it's a million dollars' worth. NARRATOR: Proving the memory of water would certainly qualify for the million dollars. To win the prize someone would simply have to repeat Ennis's experiments under controlled conditions, yet no-one has applied. JAMES RANDI: Where are the homeopathic labs, the biological labs around the world, who say that this is the real thing who would want to make a million dollars and aren't doing it? NARRATOR: So Horizon decided to take up Randi's challenge. We gathered experts from some of Britain's leading scientific institutions to help us repeat Ennis's experiments. Under the most rigorous of conditions they'll see whether they can find any evidence for the memory of water. We brought James Randi over from the United States to witness the experiment and we came to the world's most august scientific institution, the Royal Society. The Vice-President of the Society, Professor John Enderby, agreed to oversee the experiment for us. PROF. JOHN ENDERBY: ...but they'll, of course as far as the experimenters are concerned they'll have totally different numbers... NARRATOR: And with a million dollars at stake James Randi wants to make sure there's no room for error. 
JAMES RANDI: ...keeping the original samples, so I'm very happy with that provision. I'm willing to accept a positive result for homeopathy or for astrology or for anything else. I may not like it, but I have to be willing to accept it. NARRATOR: The first stage is to prepare the homeopathic dilutions. We came to the laboratories of University College London where Professor Peter Mobbs agreed to produce them for us. He's going to make a homeopathic solution of histamine by repeatedly diluting one drop of solution into 99 drops of water. PETER MOBBS: OK, now I'm transferring the histamine into 9.9ml of distilled water and then we'll discard the tip. NARRATOR: For comparison we also need control tubes, tubes that have never had histamine in them. For these Peter starts with plain water. PETER MOBBS: In it goes. NARRATOR: This stage dilutes the solutions down to one in 100 - that's 1C. We now have 10 tubes. Half are just water diluted with more water, the control tubes, half are histamine diluted in water. These are all shaken, the crucial homeopathic step. Now he dilutes each of the tubes again, to 2C. Then to 3C, all the way to 5C. PETER MOBBS: The histamine's now been diluted ten thousand million times. Still a few molecules left in there, but not very many. NARRATOR: Then we asked Professor of Electrical Engineering, Hugh Griffiths, to randomly relabel each of our 10 tubes. Now only he has the code for which tubes contain the homeopathic dilutions and which tubes contain water. HUGH GRIFFITHS: OK, so there's the record of which is which. I'm going to encase it in aluminium foil and then seal it in this envelope here. NARRATOR: Next the time-consuming task of taking these solutions down to true homeopathic levels. UCL scientist Rachel Pearson takes each of the tubes and dilutes them down further - to 6C. That's one drop in 20 swimming pools. To 12C - a drop in the Atlantic. Then to 15C - one drop in all the world's oceans. The tubes have now been diluted one million million million million million times. Some are taken even further down, to 18C. Every tube, whether it contains histamine or water, goes through exactly the same procedure. To guard against any possibility of fraud, Professor Enderby himself recodes every single tube. The result is 40 tubes, none of which should contain any molecules of histamine at all. Conventional science says they are all identical, but if Madeleine Ennis is right her methods should tell which ones contain the real homeopathic dilutions. Now we repeat Ennis's procedure. We take a drop of water from each of the tubes and add a sample of living human cells. Then it's time for Wayne Turnbull at Guy's Hospital to analyse the cells to see whether the homeopathic water has had any effect. He'll be using the most sophisticated system available: a flow cytometer. WAYNE TURNBULL: Loading it up, bringing it up to pressure. Essentially the technology allows us to take individual cells and push them past a focused laser beam. A single stream of cells will be pushed along through the nozzle head and come straight down through the machine. The laser lights will be focussed at each individual cell as it goes past. Reflected laser light is then being picked up by these electronic detectors here. NARRATOR: By measuring the light reflected off each cell the computer can tell whether they've reacted or not. WAYNE TURNBULL: This is actually a very fast machine. I can run up to 100 million cells an hour. JAMES RANDI: Whoa. 
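[A quick check of the dilution arithmetic described above, for the curious. Each "C" step is a 1-in-100 dilution, so a potency of c C means a dilution factor of 100^c. The Python sketch below estimates how many histamine molecules survive at each stage; the 5.4 millimolar starting strength is the figure Rachel Pearson mentions later in the programme, while the 10ml tube volume is my assumption, not something the transcript states.

    # Serial-dilution sanity check for the Horizon experiment.
    # Each centesimal step ("C") dilutes the solution 1 in 100,
    # so potency c corresponds to a dilution factor of 100**c.

    AVOGADRO = 6.022e23  # molecules per mole

    def molecules_left(c, molar=5.4e-3, litres=0.010):
        """Expected histamine molecules in a tube at potency c,
        starting from a 5.4 millimolar solution (volume assumed)."""
        starting = molar * litres * AVOGADRO
        return starting / (100 ** c)

    for c in (1, 5, 6, 12, 15, 18):
        print(f"{c}C: diluted 1 in 10^{2 * c}, "
              f"about {molecules_left(c):.2g} molecules remain")

At 5C roughly three thousand million molecules are still present, which squares with Mobbs's "still a few molecules left in there"; by 12C the expected count drops below one molecule, and at 15C and 18C the tubes are, statistically speaking, pure water.]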
NARRATOR: But to be absolutely rigorous we asked a second scientist, Marian Macey at the Royal London Hospital, to perform the analysis in parallel. Our two labs get to work. Using a flow cytometer they measure how many of the cells are being activated by the different test solutions. Some tubes do seem to be having more of an effect than others. The question is: are they the homeopathic ones? At last the analysis is complete. We gather all the participants here at the Royal Society to find out the results. First, everyone confirms that the experiment has been conducted in a rigorous fashion. MARIAN MACEY: I applied my own numbering system to the... RACHEL PEARSON: ...5, 5.4 millimolar solution... WAYNE TURNBULL: ...we eventually did arrive at a protocol that we were happy with. NARRATOR: Then there's the small matter of the million dollars. JOHN ENDERBY: James, is the cheque in your pocket ready now? JAMES RANDI: We don't actually carry a cheque around. It's in the form of negotiable bonds which will be immediately sep, separated from our account and given to whoever should win the prize. NARRATOR: We asked the firm to fax us confirmation that the million dollar prize is there. JOHN ENDERBY: OK, now look, I'm going to open this envelope. NARRATOR: Now at last it's time to break the code. On hand to analyse the results is statistician Martin Bland. JOHN ENDERBY: 59. NARRATOR: We've divided the tubes into those that did and didn't seem to have an effect in our experiment. JOHN ENDERBY: 62. NARRATOR: Each tube is either a D, for the homeopathic dilutions, or a C, for the plain water controls. JOHN ENDERBY: 52 and 75 were Cs. NARRATOR: Rachel Pearson identifies the tubes with a C or D. If the memory of water is real each column should either have mostly Cs or mostly Ds. This would show that the homeopathic dilutions are having a real effect, different from ordinary water. There's a hint that the letters are starting to line up. JOHN ENDERBY: Column 1 we've got 5 Cs and a D. Column 3 we've got 4 Cs and a D, so let's press on. 148 and 9, 28 and... NARRATOR: But as more codes are read out the true result becomes clear: the Cs and Ds are completely mixed up. The results are just what you'd expect by chance. A statistical analysis confirms it. The homeopathic water hasn't had any effect. PROF. MARTIN BLAND (St. George's Hospital Medical School): There's absolutely no evidence at all to say that there is any difference between the solution that started off as pure water and the solution that started off with the histamine. JOHN ENDERBY: What this has convinced me is that water does not have a memory. NARRATOR: So Horizon hasn't won the million dollars. It's another triumph for James Randi. His reputation and his money are safe, but even he admits this may not be the final word. JAMES RANDI: Further investigation needs to be done. This may sound a little strange coming from me, but if there is any possibility that there's a reality here I want to know about it, all of humanity wants to know about it. NARRATOR: Homeopathy is back where it started without any credible scientific explanation. That won't stop millions of people putting their faith in it, but science is confident. Homeopathy is impossible. 
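[For readers who want to see what "a statistical analysis confirms it" amounts to, here is a minimal sketch of one such check: a permutation test on the decoded labels. This is my illustration, not Martin Bland's actual method, and the decode data below are invented; the real codes belonged to Professors Griffiths and Enderby.

    import random

    def purity(columns):
        """Fraction of tubes agreeing with their column's majority
        label. 1.0 means the Cs and Ds segregate perfectly; values
        near 0.5-0.6 are what mixed-up labels look like."""
        hits = total = 0
        for col in columns:
            hits += max(col.count("C"), col.count("D"))
            total += len(col)
        return hits / total

    def permutation_p(columns, trials=10000, seed=0):
        """Chance that a random shuffle of the C/D labels segregates
        at least as cleanly as the observed decode."""
        rng = random.Random(seed)
        observed = purity(columns)
        flat = [lab for col in columns for lab in col]
        sizes = [len(col) for col in columns]
        at_least = 0
        for _ in range(trials):
            rng.shuffle(flat)
            regrouped, i = [], 0
            for n in sizes:
                regrouped.append(flat[i:i + n])
                i += n
            if purity(regrouped) >= observed:
                at_least += 1
        return at_least / trials

    # Invented decode, with Cs and Ds mixed up as in the programme.
    decode = [list("CCDCD"), list("DCCDD"), list("CDDCC"), list("DDCCD")]
    print(purity(decode), permutation_p(decode))

A large p-value here says the observed pattern of Cs and Ds is just what shuffled labels would produce, which is exactly the conclusion Bland reached.]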
From checker at panix.com Fri Dec 2 02:55:06 2005 From: checker at panix.com (Premise Checker) Date: Thu, 1 Dec 2005 21:55:06 -0500 (EST) Subject: [Paleopsych] Prospect: Chomsky as the world's top public intellectual Message-ID: The Chronicle of Higher Education: Magazine & journal reader http://chronicle.com/daily/2005/11/2005111801j.htm 5.11.18 [articles appended] A glance at the November issue of Prospect: Chomsky as the world's top public intellectual Noam Chomsky, the controversial author and professor of linguistics at the Massachusetts Institute of Technology, has been voted the world's leading public intellectual from a list of 100 prominent thinkers compiled by the British magazine Prospect. Mr. Chomsky first won acclaim for his transformational-grammar theory, which holds that the ability to form language is an innate human trait. But he is better-known for his outspokenness on political issues. He was a major voice against the Vietnam War and continues to argue against American policies that he finds immoral. He falls into a line of "oppositional intellectuals," writes David Herman, a contributing editor for the magazine, in an explanation of the poll. Mr. Chomsky's selection, he adds, proves that "we still yearn for such figures." More than 20,000 people participated in the magazine's poll. The vote for Mr. Chomsky came as no surprise to Robin Blackburn, a visiting professor of historical studies at the New School, in New York. Mr. Chomsky, he writes, is a "brilliant thinker" who has stepped outside his own field of study in order to lambaste corrupt government policies. Oliver Kamm, a columnist for The Times of London, does not share in the adoration. For starters, he writes, Mr. Chomsky combines elaborate rhetoric with thin evidence to support "dubious arguments." Mr. Kamm particularly criticizes Mr. Chomsky's opposition to American military interventions and arguments that equate American foreign policy with the actions of Nazi Germany. "If this is your judgment of the U.S.," writes Mr. Kamm, "then it will be difficult to credit that its intervention might ever serve humanitarian ends." That's not necessarily so, says Mr. Blackburn, who notes that neither apartheid in South Africa nor Stalinism in Russia was eradicated by "bombardment and invasion." Mr. Chomsky simply opposes putting American soldiers in harm's way, he writes, where they can "do harm and acquire a taste for it." Mr. Blackburn's and Mr. Kamm's essays are contained in the article "For and Against Chomsky," which is available at http://www.prospect-magazine.co.uk/article_details.php?id=7110&AuthKey=fea7d83f56a70abc8c07b819492523e1&issue=512 Mr. Herman's analysis, "Global public intellectuals poll," is available at http://www.prospect-magazine.co.uk/article_details.php?id=7078&AuthKey=fea7d83f56a70abc8c07b819492523e1&issue=512 A tally of the votes for all 100 candidates is available at http://www.prospect-magazine.co.uk/intellectuals/results --Jason M. Breslow ----------- http://www.prospect-magazine.co.uk/article_details.php?id=7078&AuthKey=fea7d83f56a70abc8c07b819492523e1&issue=512 [No. 116 / Nov 2005] The Prospect/Foreign Policy list of 100 global public intellectuals suggested that the age of the great oppositional thinker was over, but Noam Chomsky's emphatic victory shows many remain nostalgic for it David Herman _________________________________________________________________ The two most striking things about this poll are the number of people who took part and the age of the winners. 
Over 20,000 people voted for their top five names from our longlist of 100, and they tended to reinforce the trends of the original list. More than half of the top 30 are based in North America. Europe, by contrast, is surprisingly under-represented--a cluster of well-known names in the top 20 (Eco, Havel, Habermas) but then it is a long way down to Kristeva (48) and Negri (50). The most striking absence is France--one name in the top 40, fewer than Iran or Peru. There is not one woman in the top ten, and only three in the top 20. The big names of the left did well (Chomsky, Habermas, Hobsbawm) but there weren't many of them. Scientists, literary critics, philosophers and psychologists all fared badly. And voters did not use the "bonus ball" to champion new faces. The top two names, Milton Friedman and Stephen Hawking, do not represent new strands of thought. (In fact, Friedman was specifically named in last month's "criteria for inclusion"--along with other ancient greats like Solzhenitsyn--as an example of someone who had been deliberately left off the longlist on the grounds that they were no longer actively contributing to their discipline.) The poll was in one sense a victim of its own success. Word spread around the internet very quickly, and at least three of our top 20 (Chomsky, Hitchens and Soroush), or their acolytes, decided to draw attention to their presence on the list by using their personal websites to link to Prospect's voting page. In Hitchens's and Soroush's case, the votes then started to flood in. Although it is hard to tell exactly where voters came from, it is likely that a clear majority were from Britain and America, with a fair sprinkling from other parts of Europe and the English-speaking world. There was also a huge burst from Iran, although very little voting from the far east, which may explain why four of the bottom five on the list were thinkers from Japan and China. What is most interesting about the votes, though, is the age of the top names. Chomsky won by a mile, with over 4,800 votes. Then Eco, with just under 2,500, Dawkins and Havel. Only two in the top nine--Hitchens and Rushdie--were born after the second world war. And of the top 20, only Klein and Lomborg are under 50. This may reflect the age of the voters, choosing familiar names. However, surely it also tells us something about the radically shifting nature of the public intellectual in the west. Who are the younger equivalents to Habermas, Chomsky and Havel? Great names are formed by great events. But there has been no shortage of terrible events in the last ten years and some names on the list (Ignatieff, Fukuyama, Hitchens) are so prominent precisely because of what they have said about them. Only one of these, though, is European, and he lives in Washington DC. You can read more elsewhere in this issue about Chomsky. Even if you disagree with his attacks on US foreign policy, there are two reasons why few would be surprised to see him at the top of the poll. First, his intellectual range. Like a number of other figures in the top ten, he is prominent in a number of areas. Havel was a playwright and statesman; Eco a literary critic and bestselling author; Diamond was a professor of physiology and now has a chair in geography at UCLA, and writes on huge issues ranging over a great time span. 
Second, and more important, Chomsky belongs to a tradition which goes back to Zola, Russell and Sartre: a major thinker or writer who speaks out on the great public issues of his time, opposing his government on questions of conscience rather than the fine print of policy. I said last month in my commentary on the original Prospect/Foreign Policy list of 100 names that it seemed to represent the death of that grand tradition of oppositional intellectuals. The overwhelming victory for Noam Chomsky suggests that we still yearn for such figures--we just don't seem to be able to find any under the age of 70.

http://www.prospect-magazine.co.uk/intellectuals/results

The Prospect/FP Global public intellectuals poll--results

Over 20,000 people voted for their top names from our original longlist of 100. The final results are below.

Position  Name  Total votes
1 Noam Chomsky 4827
2 Umberto Eco 2464
3 Richard Dawkins 2188
4 Václav Havel 1990
5 Christopher Hitchens 1844
6 Paul Krugman 1746
7 Jürgen Habermas 1639
8 Amartya Sen 1590
9 Jared Diamond 1499
10 Salman Rushdie 1468
11 Naomi Klein 1378
12 Shirin Ebadi 1309
13 Hernando De Soto 1202
14 Björn Lomborg 1141
15 Abdolkarim Soroush 1114
16 Thomas Friedman 1049
17 Pope Benedict XVI 1046
18 Eric Hobsbawm 1037
19 Paul Wolfowitz 1028
20 Camille Paglia 1013
21 Francis Fukuyama 883
22 Jean Baudrillard 858
23 Slavoj Zizek 840
24 Daniel Dennett 832
25 Freeman Dyson 823
26 Steven Pinker 812
27 Jeffrey Sachs 810
28 Samuel Huntington 805
29 Mario Vargas Llosa 771
30 Ali al-Sistani 768
31 EO Wilson 742
32 Richard Posner 740
33 Peter Singer 703
34 Bernard Lewis 660
35 Fareed Zakaria 634
36 Gary Becker 630
37 Michael Ignatieff 610
38 Chinua Achebe 585
39 Anthony Giddens 582
40 Lawrence Lessig 565
41 Richard Rorty 562
42 Jagdish Bhagwati 561
43 Fernando Cardoso 556
44= JM Coetzee 548
44= Niall Ferguson 548
46 Ayaan Hirsi Ali 546
47 Steven Weinberg 507
48 Julia Kristeva 487
49 Germaine Greer 471
50 Antonio Negri 452
51 Rem Koolhaas 429
52 Timothy Garton Ash 428
53 Martha Nussbaum 422
54 Orhan Pamuk 393
55 Clifford Geertz 388
56 Yusuf al-Qaradawi 382
57 Henry Louis Gates Jr. 379
58 Tariq Ramadan 372
59 Amos Oz 358
60 Larry Summers 351
61 Hans Küng 344
62 Robert Kagan 339
63 Paul Kennedy 334
64 Daniel Kahnemann 312
65 Sari Nusseibeh 297
66 Wole Soyinka 296
67 Kemal Dervis 295
68 Michael Walzer 279
69 Gao Xingjian 277
70 Howard Gardner 273
71 James Lovelock 268
72 Robert Hughes 259
73 Ali Mazrui 251
74 Craig Venter 244
75 Martin Rees 242
76 James Q Wilson 229
77 Robert Putnam 221
78 Peter Sloterdijk 217
79 Sergei Karaganov 194
80 Sunita Narain 186
81 Alain Finkielkraut 185
82 Fan Gang 180
83 Florence Wambugu 159
84 Gilles Kepel 156
85 Enrique Krauze 144
86 Ha Jin 129
87 Neil Gershenfeld 120
88 Paul Ekman 118
89 Jaron Lanier 117
90 Gordon Conway 90
91 Pavol Demes 88
92 Elaine Scarry 87
93 Robert Cooper 86
94 Harold Varmus 85
95 Pramoedya Ananta Toer 84
96 Zheng Bijian 76
97 Kenichi Ohmae 68
98= Wang Jisi 59
98= Kishore Mahbubani 59
100 Shintaro Ishihara 57

We asked voters to select a "bonus ball" nomination--a name they believe we should have included on our original longlist. Hundreds of people were chosen--from Bob Dylan to Kofi Annan. 
Here are the top 20 names.

Position  Name  Total votes
1 Milton Friedman 98
2 Stephen Hawking 81
3 Arundhati Roy 78
4 Howard Zinn 72
5 Bill Clinton 67
6 Joseph Stiglitz 57
7 Johan Norberg 48
8= Dalai Lama 45
8= Thomas Sowell 45
10= Cornell West 39
10= Nelson Mandela 39
12 Gore Vidal 37
13 Mohammad Khatami 35
14 John Ralston Saul 33
15= George Monbiot 26
15= Judith Butler 26
17 Victor Davis Hanson 25
18 Gabriel García Márquez 24
19= Bono 23
19= Harold Bloom 23

http://www.prospect-magazine.co.uk/article_details.php?id=7110&AuthKey=fea7d83f56a70abc8c07b819492523e1&issue=512 [No. 116 / Nov 2005] For and against Chomsky Is the world's top public intellectual a brilliant expositor of linguistics and the US's duplicitous foreign policy? Or a reflexive anti-American, cavalier with his sources? Robin Blackburn Oliver Kamm _________________________________________________________________ Robin Blackburn teaches at the New School for Social Research, New York. Oliver Kamm is a "Times" columnist For Chomsky Robin Blackburn celebrates a courageous truth-teller to power The huge vote for Noam Chomsky as the world's leading "public intellectual" should be no surprise at all. Who could match him for sheer intellectual achievement and political courage? Very few transform an entire field of enquiry, as Chomsky has done in linguistics. Chomsky's scientific work is still controversial, but his immense achievement is not in question, as may be easily confirmed by consulting the recent Cambridge Companion to Chomsky. He didn't only transform linguistics in the 1950s and 1960s; he has remained in the forefront of controversy and research. The huge admiration for Chomsky evident in Prospect's poll is obviously not only, or even mainly, a response to intellectual achievement. Rather it goes to a brilliant thinker who is willing to step outside his study and devote himself to exposing the high crimes and misdemeanours of the most powerful country in the world and its complicity with venal and brutal rulers across four continents over half a century or more. Some believe--as Paul Robinson, writing in the New York Times Book Review, once put it--that there is a "Chomsky problem." On the one hand, he is the author of profound, though forbiddingly technical, contributions to linguistics. On the other, his political pronouncements are often "maddeningly simple-minded." In fact, it is not difficult to spot connections between the intellectual strategies Chomsky has adopted in science and in politics. Chomsky's approach to syntax stressed the economy of explanation that could be achieved if similarities in the structure of human languages were seen as stemming from biologically rooted, innate capacities of the human mind, above all the recursive ability to generate an infinite number of statements from a finite set of words and symbols. Many modern critics of the radical academy are apt to bemoan its disregard for scientific method and evidence. This is not a reproach that can be aimed at Chomsky, who has pursued a naturalistic and reductionist standpoint in what he calls, in the title of his 1995 volume, The Minimalist Programme. Chomsky's political analyses also strive to keep it simple, but not at the expense of the evidence, which he can abundantly cite if challenged. But it is "maddening" none the less, just as the minimalist programme may be to some of his scientific colleagues. 
The apparent straightforwardness of Chomsky's political judgements--his "predictable" or even "kneejerk" opposition to western, especially US, military intervention--could seem simplistic. Yet they are based on a mountain of evidence and an economical account of how power and information are shared, distributed and denied. Characteristically, Chomsky begins with a claim of stark simplicity which he elaborates into an intricate account of the different roles of government, military, media and business in the running of the world. Chomsky's apparently simple political stance is rooted in an anarchism and collectivism which generates its own sense of individuality and complexity. He was drawn to the study of language and syntax by a mentor, Zellig Harris, who also combined libertarianism with linguistics. Chomsky's key idea of an innate, shared linguistic capacity for co-operation and innovation is a positive, rather than purely normative, rebuttal of the Straussian argument that natural human inequality vitiates democracy. Andersen's tale of the little boy who, to the fury of the courtiers, pointed out that the emperor was naked, has a Chomskian flavour, not simply because it told of speaking truth to power but also because the simple childish eye proved keener than the sophisticated adult eye. I was present when Chomsky addressed Karl Popper's LSE seminar in the spring of 1969 and paid tribute to children's intellectual powers (Chomsky secured my admittance to the seminar at a time when my employment at the LSE was suspended). As I recall, Chomsky explained how the vowel shift that had occurred in late medieval English was part of a transformation that resulted from a generational dynamic. The parent generation spoke using small innovations of their own, arrived at in a spontaneous and ad hoc fashion. Growing youngsters, because of their innate syntactical capacity, ordered the language they heard their parents using by means of a more inclusive grammatical structure, which itself made possible more systematic change. In politics, the child's eye might see right through the humanitarian and democratic claptrap to the dismal results of western military interventions--shattered states, gangsterism, narco-traffic, elite competition for the occupiers' favour, vicious communal and religious hatred. Chomsky openly admits he prefers "pacifist platitudes" to belligerent mendacity. This makes some wrongly charge that he is "passive in the face of evil." But neither apartheid in South Africa, nor Stalinism in Russia, nor military rule in much of Latin America were defeated or dismantled by bombardment and invasion. Chomsky had no difficulty supporting the ultimately successful campaign against apartheid, or for the Indonesian withdrawal from East Timor. He simply opposes putting US soldiers in harm's way--also meaning where they will do harm and acquire a taste for it. Chomsky's victory in a parlour game should not be overpitched. But, like Marx's win earlier this year in the BBC Radio 4 competition for "greatest philosopher," it shows that thinking people are still attracted by the critical impulse, above all when it is directed with consistency at the trend towards a global pensée unique. The Prospect/FP list was sparing in its inclusion of critics of US foreign policy, which may have increased Chomsky's lead a little. But no change in the list would have made a difference to the outcome. The editors had misjudged the mood and discernment of their own readers. 
_________________________________________________________________ Against Chomsky Oliver Kamm deplores his crude and dishonest arguments In his book Public Intellectuals: A Study of Decline, Richard Posner noted that "a successful academic may be able to use his success to reach the general public on matters about which he is an idiot." Judging by caustic remarks elsewhere in the book, he was thinking of Noam Chomsky. He was not wrong. Chomsky remains the most influential figure in theoretical linguistics, known to the public for his ideas that language is a cognitive system and the realisation of an innate faculty. While those ideas enjoy a wide currency, many linguists reject them. His theories have come under criticism from those, such as the cognitive scientist Steven Pinker, who were once close to him. Paul Postal, one of Chomsky's earliest colleagues, stresses the tendency for the grandiloquence of Chomsky's claims to increase as he addresses non-specialist audiences. Frederick Newmeyer, a supporter of Chomsky's ideas until the mid-1990s, notes: "One is left with the feeling that Chomsky's ever-increasingly triumphalistic rhetoric is inversely proportional to the actual empirical results that he can point to." Prospect readers who voted for Chomsky will know his prominence in linguistics, but are more likely to have read his numerous popular critiques of western foreign policy. The connection, if any, between Chomsky's linguistics and his politics is a matter of debate, but one obvious link is that in both fields he deploys dubious arguments leavened with extravagant rhetoric--which is what makes the notion of Chomsky as pre-eminent public intellectual untimely as well as unwarranted. Chomsky's first book on politics, American Power and the New Mandarins (1969), grew from protest against the Vietnam war. But Chomsky went beyond the standard left critique of US imperialism to the belief that "what is needed [in the US] is a kind of denazification." This diagnosis is central to Chomsky's political output. While he does not depict the US as an overtly repressive society--instead, it is a place where "money and power are able to filter out the news fit to print and marginalise dissent"--he does liken America's conduct to that of Nazi Germany. In his newly published Imperial Ambitions, he maintains that "the pretences for the invasion [of Iraq] are no more convincing than Hitler's." If this is your judgement of the US then it will be difficult to credit that its interventionism might ever serve humanitarian ends. Even so, Chomsky's political judgements have only become more startling over the past decade. In The Prosperous Few and the Restless Many (1994), Chomsky considered whether the west should bomb Serb encampments to stop the dismemberment of Bosnia, and by an absurdly tortuous route concluded "it's not so simple." By the time of the Kosovo war, this prophet of the amoral quietism of the Major government had progressed to depicting Milosevic's regime as a wronged party: "Nato had no intention of living up to the scraps of paper it had signed, and moved at once to violate them." After 9/11, Chomsky deployed fanciful arithmetic to draw an equivalence between the destruction of the twin towers and the Clinton administration's bombing of Sudan--in which a pharmaceutical factory, wrongly identified as a bomb factory, was destroyed and a nightwatchman killed. 
When the US-led coalition bombed Afghanistan, Chomsky depicted mass starvation as a conscious choice of US policy, declaring that "plans are being made and programmes implemented on the assumption that they may lead to the death of several million people in the next couple of weeks... very casually, with no particular thought about it." His judgement was offered without evidence. In A New Generation Draws the Line: Kosovo, East Timor and the Standards of the West (2000), Chomsky wryly challenged advocates of Nato intervention in Kosovo to urge also the bombing of Jakarta, Washington and London in protest at Indonesia's subjugation of East Timor. If necessary, citizens should be encouraged to do the bombing themselves, "perhaps joining the Bin Laden network." Shortly after 9/11, the political theorist Jeffrey Isaac wrote of this thought experiment that, while it was intended metaphorically, "One wonders if Chomsky ever considered the possibility that someone lacking in his own logical rigour might read his book and carelessly draw the conclusion that the bombing of Washington is required." This episode gives an indication of the destructiveness of Chomsky's advocacy even on issues where he has been right. Chomsky was an early critic of Indonesia's brutal annexation of East Timor in 1975 in the face of the indolence, at best, of the Ford administration. The problem is not these criticisms, but Chomsky's later use of them to rationalise his opposition to western efforts to halt genocide elsewhere. (Chomsky buttresses his argument, incidentally, with a peculiarly dishonest handling of source material. He manipulates a self-mocking reference in the memoirs of the then US ambassador to the UN, Daniel Patrick Moynihan, by running separate passages together as if they are sequential and attributing to Moynihan comments he did not make, to yield the conclusion that Moynihan took pride in Nazi-like policies. The victims of cold war realpolitik are real enough without such rhetorical expedients.) If Chomsky's political writings expressed merely an idée fixe, they would be a footnote in his career as a public intellectual. But Chomsky has a dedicated following among those of university education, and especially of university age, for judgements that have the veneer of scholarship and reason yet verge on the pathological. He once described the task of the media as "to select the facts, or to invent them, in such a way as to render the required conclusions not too transparently absurd--at least for properly disciplined minds." There could scarcely be a nicer encapsulation of his own practice. The author is grateful for the advice of Bob Borsley and Paul Postal. From checker at panix.com Fri Dec 2 02:55:19 2005 From: checker at panix.com (Premise Checker) Date: Thu, 1 Dec 2005 21:55:19 -0500 (EST) Subject: [Paleopsych] CHE: Placebos Could Play a Role in Treating Some Conditions, Scientists Say Message-ID: Placebos Could Play a Role in Treating Some Conditions, Scientists Say News bulletin from the Chronicle of Higher Education, 5.11.21 http://chronicle.com/daily/2005/11/2005112103n.htm The placebo effect -- it's all in your head. When you swallow sugar pills instead of powerful medicine and your symptoms disappear, it's all thanks to the power of your mind. How does the brain perform that parlor trick? In the past, scientists suspected that any apparent health benefits from placebos had little more basis in biology than did sleight of hand. 
In studies of new drugs, patients might tell their doctors they feel better because they think that is what their doctors want to hear. Or perhaps they would have recovered without any treatment, real or sham. But researchers now know that the placebo effect is real and grounded in the physiology of the brain. Using techniques to peer inside the skull, they have begun to find regions of the brain that respond to placebos, and they have even watched a single nerve cell react to a sham medicine. Those studies show that placebos affect the brain in much the same way that actual treatments do, researchers reported here last week at the annual meeting of the Society for Neuroscience. In other words, the power to treat several troublesome disorders may be wrapped up in the three-pound spongy lump of tissue protected by the skull. The research points to the power of positive thinking -- even at the unconscious level. When the brain expects relief, it can manufacture some on its own. "The things you can change with a positive outlook are profound," said Tor D. Wager, an assistant professor of psychology at Columbia University. "They are deeper physiologically than we have previously appreciated." None of the researchers who study the mechanism of the placebo effect suggest that doctors should prescribe dummy pills instead of real medicine. But they say that the study of the placebo effect could change how scientists perform clinical trials of new treatments and could even alter how we understand and treat pain, Parkinson's disease, and depression. By studying placebos, said Christian S. Stohler, dean of the school of dentistry at the University of Maryland at Baltimore, "you crack into disease mechanisms that might be very important for improving the lives of many pain patients." Fooling the Patient Researchers gained their first glimpse at the causes of the placebo effect in the late 1970s, when scientists discovered that under certain conditions they could cancel the effect. In a study of pain relievers, a drug called naloxone prevented patients on placebo pills from experiencing the usual benefit. Since naloxone blocks the action of painkillers called opioids, researchers figured that placebos must stimulate the brain to produce its own opioids. In the 1990s, another set of experiments provided more evidence that the placebo effect was a real physiological phenomenon. Fabrizio Benedetti, a professor of neuroscience at the University of Turin, and others studied the effect without using a placebo. Dr. Benedetti judged that a placebo's effect comes from the patient's psychosocial context: talking to a doctor, observing the treatment, and expecting improved health. So he took away that context by giving study participants real drugs, but on the sly. Patients were told that they would receive an active drug, a placebo, or nothing through intravenous needles, and consented to get any of the different treatments without knowing when any treatment would be supplied. The scientists compared the results when a doctor overtly gave the patient the drug and when a computer supplied the drug without the patient's knowledge. Bedside manner, it turned out, made a difference: Patients required far more painkiller if they unknowingly received the medicine from a computer. When the doctor gives a drug in full view, Dr. Benedetti said at the neuroscience conference, "there is an additive effect of the drug and of the placebo, the psychosocial component." 
He suggested that his experimental setup could be extended to become part of the testing procedure for new drugs. Clinical trials could then compare covert and overt administration, rather than comparing the active drug to a placebo. That way, none of the volunteers would go through the trouble of participating without receiving the real experimental treatment, and researchers could still demonstrate that the drug was effective by showing that it reduced symptoms when given covertly. Peering at the Brain With the recent advent of modern brain-scanning techniques, scientists gained the ability to look directly at the regions of the brain involved in the placebo effect. In 2002 researchers in Finland and Sweden published in Science the first brain images of the effect, using a technique called positron emission tomography, better known as PET. The researchers pressed a hot surface onto the hands of nine male volunteers, and then a doctor gave them injections of either a painkiller or a placebo. When the researchers performed PET scans on the men, both the drug and the dummy induced high blood flow -- indicating brain activity -- in an area of the brain called the rostral anterior cingulate cortex. That area plays a key role in the painkilling effects of opioid drugs. Then in 2004, also in Science, Mr. Wager reported using functional magnetic resonance imaging, or fMRI, to show that a placebo that relieved pain also decreased activity in the brain's pain-sensing areas. Different people felt varying amounts of pain relief from the placebo. The amount of pain reduction a volunteer experienced went hand in hand with the amount of change in activity in the brain. "Part of the effect of a drug," Mr. Wager said at the conference, "is it changes the way you think about drugs." Jon-Kar Zubieta, an associate professor of psychiatry and radiology at the University of Michigan at Ann Arbor, and several colleagues, including Dr. Stohler, of the University of Maryland, peered deeper into the brain's workings by finding out where the brain produces opioids in response to placebo treatment. They used PET scans along with a stain that marks opioid activity in the brain. When the researchers gave male volunteers a painful injection of saline solution into their jaw muscles, the scans showed an increase of opioids in the brain. Most of the regions where the brain produced painkillers coincided with the ones that Mr. Wager identified as important. "Expectation releases substances, molecules, in your brain that ultimately change your experience," said Dr. Stohler. "Our brain is on drugs. It's on our own drugs." Relief for Parkinson's The placebo effect helps not only people in pain but also patients with diseases. In fact, scientists got their most detailed look at the placebo effect by studying how single neurons responded to sham drugs given to Parkinson's patients. Parkinson's disease is a motor disorder caused by the loss of brain cells that produce dopamine. Some patients experience temporary relief of symptoms from a placebo, and a previous study showed that the relief occurred because the brain produced dopamine in response. Patients who have Parkinson's disease sometimes receive surgery to implant electrodes deep within the brain. The electrodes can stimulate a neuron or record its activity. Dr. Benedetti, of the University of Turin, and his colleagues enrolled 11 patients who underwent surgery for that type of treatment. 
They gave the patients a placebo injection, telling them it was a powerful drug that should improve their motor control. The researchers then compared the activity of a single neuron before and after injection of the placebo. In the six patients who responded to the placebo -- who demonstrated less arm rigidity and said they felt better -- the rate of firing of the neuron went down. (Nerve cells "fire," or generate electrical impulses, in order to send signals to neighboring neurons.) The neurons' firing rate did not change for people who experienced no placebo effect. Another disorder that shows clinical improvement with placebos is depression. Depressed patients' moods often lift when they take a placebo, although the effect does not last, and they normally need to seek real treatment, according to Helen S. Mayberg, a professor of neurology and of psychiatry and behavioral sciences at Emory University. Dr. Mayberg became immersed in placebo research a few years ago, when she did a PET study of the brain's response to an antidepressant and to a placebo. In her study of 15 depressed men, four who had taken Prozac and four who had received a placebo experienced a remission of their symptoms. At the end of six weeks, after allowing the drug sufficient time to take effect, Dr. Mayberg took PET scans. For patients whose symptoms improved, the regions where the brain activity increased after a patient took a placebo formed a subset of the regions that increased after a patient took the true drug. "Drug is placebo plus," she said at the conference. In patients whose symptoms did not improve, whether they were on Prozac or on the placebo, the brain activity did not increase in those regions. She published the results of that study in 2002, but at the conference she reported a new analysis of her data. In the study, she had also collected brain scans one week after patients had begun receiving their treatments, even though the drug had not yet taken its full effect. Still, people whose symptoms later improved, whether they took the placebo or Prozac, again had increased brain activity in similar areas. One week into treatment, she said, the men's state of mind could be interpreted as a "heightened state of expectation" since they were anticipating clinical improvements. Nonresponders did not show those patterns, so such expectation could be key to whether a depressed patient will recover. Raising Expectations Dr. Mayberg would like to find ways to help those who do not respond to antidepressant drugs, and she surmises that expectation could make the difference. Such patients, she said, perhaps should imagine themselves getting well. "What is expectation?" she asked. "How do you cultivate it?" Those are questions that all of the scientists involved in this research would like to answer. Patients with chronic pain, said Dr. Zubieta, of Michigan, perhaps have lost the ability to produce the brain's natural painkillers. "If you are able to recruit mechanisms that help you cope with stress or pain, that's a good thing," he said, "The question is, How do things like this, or meditation, or biofeedback, work? We don't know." Dr. Stohler, of Maryland, agrees. "Getting a person to boost their own machinery to improve health -- that's something that medicine needs to know," he said. It may be especially urgent for patients with dementia, according to Dr. Benedetti. At the conference, he reported preliminary results that patients with Alzheimer's disease may not experience placebo effects at all. 
He found that Alzheimer's patients felt no difference between overt and hidden administration of painkillers. To Dr. Benedetti, that suggests that the psychological components of treatments -- the expectation of health improvements, and the circuits that such expectations create in the brain -- are absent. Perhaps, he said at the conference, doctors need to take that loss into account when prescribing any drug for Alzheimer's patients. Those patients may need higher doses of many drugs, such as painkillers, if their brain has stopped aiding the drug's action. The mind, it seems, may play a critical role in treating diseases. And its services come free of charge, with no co-payments or deductibles. _________________________________________________________________ Background articles from The Chronicle:
* Take 2 Herbal Remedies and Call Me in the Morning (11/18/2005): http://chronicle.com/weekly/v52/i13/13a01001.htm
* Pray and Bear It (2/11/2005): http://chronicle.com/weekly/v51/i23/23a00703.htm
Magazine & Journal Reader:
* A Glance at 'Current Directions in Psychological Science': Why Placebos Work (10/19/2005): http://chronicle.com/daily/2005/10/2005101901j.htm
E-mail me if you have problems getting the referenced articles. From checker at panix.com Fri Dec 2 02:55:26 2005 From: checker at panix.com (Premise Checker) Date: Thu, 1 Dec 2005 21:55:26 -0500 (EST) Subject: [Paleopsych] CHE: So Happy Together Message-ID: So Happy Together The Chronicle of Higher Education, 5.11.25 http://chronicle.com/weekly/v52/i14/14c00301.htm CATALYST If you're a scientist who is not used to collaborating with nonscientists, you'd better get used to it By KAREN M. MARKIN Perhaps you love your scientific work because it allows you to spend lots of time outdoors, taking water samples in all kinds of weather. Or perhaps your scientific work allows you to hole up with your computer and run calculation after calculation as you seek solutions to problems. Either way, it's you and your intellectual pursuits, shielded from the day-to-day irritations of dealing with people. So what could you possibly gain from collaborating with others as you pursue your scientific goals, exposing yourself to interpersonal conflict like a lab rat to a pathogen? More money. Whether we like it or not, collaboration is becoming the norm for much federally financed research. Sometimes the complexity of today's scientific questions requires investigators from a variety of disciplines to work together. In other cases, agencies seek multiple payoffs from their grant dollars: educational innovations and societal benefits as well as advances in basic science. Either way, the multiyear, multimillion-dollar awards increasingly are reserved for collaborative work. Some investigators may now be thinking, "I can play that game. I'll collect a bunch of individual research proposals, slap them together, and send them in under one title. More money for the same amount of effort on my part." It doesn't work that way. Over and over, I have heard program officers say that in the collaborative proposals they see, scientific excellence is usually a given. What makes or breaks those proposals are the nonscience aspects, such as management and leadership. Here are some things to consider in preparing a competitive collaborative grant application. Thinking Outside the College. 
First, understand that your idea of multidisciplinary and the grant agency's idea of that concept may be very different, and it is the agency's view that matters in grant writing. Faculty members often focus narrowly on their area of expertise, so that anything just a little different seems exotic. For example, a physical oceanographer might view a partnership with a biological oceanographer as multidisciplinary work. While that may be so in the rarefied world of oceanography journals, it typically is not enough for a large grant agency. Such agencies want you to do more than think outside the department. They want you to think outside the college. When I say outside the college, I don't just mean chemists joining hands with chemical engineers. In some instances, it can mean scientists engaging with social scientists and humanists. Check past awards in the program that interests you to see what has been considered multidisciplinary. Let's use the hypothetical example of a center for the study of natural disasters such as earthquakes to consider what a multidisciplinary collaboration might look like. The team will naturally include a seismologist, a geophysicist, and an earthquake engineer. But a comprehensive center also might include social scientists. One might explore how groups of people behave when faced with an imminent threat. Another might be a public-policy expert who studies obstacles to effective emergency planning. Those social scientists will have to be an integral part of the team. If you have an underlying disdain for what you view as "soft" scientists, it will come through loud and clear. For a truly collaborative project, you will need to accept them as equal partners rather than people whose biographical sketches you throw in merely to satisfy the program requirement for investigators outside your discipline. You show that they are partners by providing resources for them in the budget. It is also wise to include them in development of the proposal to ensure it is sound from a scholarly standpoint. If you are a biologist and you write your conception of what your political-science colleagues will contribute instead of their conception, you will weaken your case for collaboration. You might make a fatal mistake, such as calling psychology one of the humanities. (It has happened.) Reviewers of collaborative proposals will be drawn from the array of disciplines represented in the proposal. A political scientist from another institution will quickly notice if your social scientists are mere window dressing. In planning your collaboration, think about an orchestra. If the violinist is fiddling away at a bluegrass melody, the clarinetist is tootling a klezmer tune, and the pianist is banging out Billy Joel, it's cacophony, no matter how good they are individually. But put them together for Rhapsody in Blue, and they're making music. They Also Serve Who Only Push Paper. The entire scholarly team will have to accept that a large collaborative grant requires the services of people who aren't scientists but must be adequately paid. Some researchers find it anguishing to spend their scarce grant dollars on anything but lab equipment and scientific personnel. But part of the challenge of a large collaborative grant is to manage it efficiently after you receive the award. That takes time, and you probably have firsthand experience with it. Do you complain when you have to submit annual and final reports for your grants? 
Think of that kind of work multiplied by a factor of 10 or 15, and you will begin to see the value of a project manager. Previous recipients of collaborative grants say that bad management, rather than bad science, is usually the reason that a renewal application is rejected. Staffing needs will vary from program to program, but all collaborations need a manager and someone to oversee the budget. In some collaborative projects, agencies expect diversity and education efforts. Some investigators have hired full-time individuals for each of those duties. Although these administrative tasks may sound like punishment to you, some people enjoy them and perform them well. Follow the Leader. A collaborative grant requires strong leadership. The impetus must come from faculty members who are excited about pursuing the area of scientific inquiry at the heart of the project. The principal investigator should be a prominent scientist with a long record of extramural grants and publications. But the project also needs someone to serve as its prime mover, and that person does not necessarily have to be the senior scientist. That individual has to be willing to put time and energy into pulling together the collaboration. He or she needs to be organized, a good time manager, a team builder, and able to take criticism in stride. The project leader also must be able to persuade top institutional officials that multidisciplinary work is valuable and rewarded in tenure and promotion decisions. Those tasks are clearly not science, but they're essential to the success of the project. If you scoff at them as mere management clichés, find someone who takes them seriously. Those who have formed collaborations emphasize that they take a lot of time. It is common to spend a year developing a collaborative proposal that is based on a decade of less formal interactions with other scientists. As with any proposal, it may take two or three submissions before you get any money. But look on the bright side: The additional years you spend revising the proposal allow you to develop better relationships with your collaborators -- and to jettison the ones you don't want. Karen M. Markin is director of research development at the University of Rhode Island's research office. From checker at panix.com Sat Dec 3 02:27:51 2005 From: checker at panix.com (Premise Checker) Date: Fri, 2 Dec 2005 21:27:51 -0500 (EST) Subject: [Paleopsych] Meme 052: The Inverted Demographic Pyramid Message-ID: Meme 052: The Inverted Demographic Pyramid by Frank Forman sent 5.12.2 The inverted demographic pyramid, those richer and more able having fewer children, has been a problem for evolutionary theory ever since Francis Galton. My solution is that the decision about whether to have more or fewer children is determined by a trade-off function set in the Stone Age. What parents consider to be adequate support for a child is determined more by what their peers do than the objective facts of the situation today, which would indicate a much larger advantage for the better off than in the Stone Age where incomes were far more equal. But we listen to the "whispering genes within" rather than accept any factual studies that back up the Ninety-Six Percent Rule, namely that 96% of parents don't matter much one way or the other. An article in the New York Times, shown below, about a surprising 26% increase in the number of children age 0-5 in Manhattan between 2000 and 2004 induced these reflections. 
This increase is probably just an effect of greater income inequality in recent years, not a sudden reversal of the inverted demographic pyramid. This paradox, as we all know, has caused some to question the whole selfish gene-sociobiological paradigm, and with good reason, though I try to make a good crack at saving the paradigm here. Animals in any species can choose, within limits, whether to pursue an r strategy (mnemonic: reproduce, reproduce, reproduce) of many offspring with little parental investment per child or the K (mnemonic: Kwality) strategy of few children but high investment per child. The trade-off *function* was mostly set in the Stone Age. Conditions have changed, and rich parents should be able to have far, far more children than the poor, since income inequality is far greater today than then by, I think, every account by anthropologists. But when you ask rich parents why they don't pack them in like they do in the barrios, you get told that that would be indecent and inadequate, with a vehemence that befits moral absolutes. What's going on is that one's standard of decency or adequacy is not set by thinking about Stone Age environments, nor by comparison with those in the barrios and ghettos and whatever quarters Asian immigrants cram themselves into, who lead far longer lives than Stone Age man ever did, but by comparison with one's peers. Your neighbors surround their children with a big house, give them an expensive education, and so on. The Stone Age genes within you whisper to you that if you don't do these things for your kids, they will not have their own children and you will have no grandchildren. You will ignore any studies by Judith Rich Harris that affirm the Ninety-Six Percent Rule, that only the worst and best two percent of parents make a measurable difference in how your kids turn out. You will reject showings by economists that educational credentials count for little beyond helping your children get their first jobs. You look at only a small slice of the population, namely your peers, in which effort does seem to matter more than innate factors. Indeed, the big brains of primates are geared more to getting along with one's fellows (thus allowing for greater and more complex social cooperation) than to maneuvering the physical environment by finding out what is really out there. It's an accidental byproduct of blue eyes and flaming red (or blond) hair (my "Maureen Dowd Theory of Western Civilization") that triggered off a larger regard for objectivity. Mr. Mencken was often given to noting how weak this regard is, even in America, especially in America, but he did not know the rest of the world. There are other factors involved in the inverted demographic pyramid. Our drives work only remotely, and there is no drive for maximizing inclusive reproductive fitness directly. (I don't need to beat yet another drum for group selection here.) Of the Sixteen Basic Drives Steven Reiss has identified through factor analysis, Romance certainly seems closely related; this drive (no. 2 on my personal list) includes acts of coitus and also aesthetic experiences. (I can't logic out the connection, but these three are correlated so much on questionnaires that they cluster into a single drive.) The desire to raise one's own children (NOT clustered with the drive to raise adopted children) would also seem to weigh heavily in the selfish gene model. (It's no.
6 on my list, ranked that high not because I have spent a great deal of time, Kwality or not, with my children, but because I chose to give up the teaching job I really would have much preferred. Spending lots of time with them does not satisfy my no. 1 drive, Curiosity, all that well. I'd rather read books!) Indeed, Curiosity, which is so much more satisfiable today than in the Stone Age (a supply-side change), could well be responsible for a large part of the inverted demographic pyramid. I suggest that those having higher incomes (correlated 0.5 with intelligence but making a huge difference between populations then and now) will purchase relatively more satisfaction of this drive today, with the result of having relatively fewer children, than they would have back in the Stone Age. There's also the drive for Status (no. 14 out of 16 on my list, which explains why we chose to live in an inexpensive apartment in a high-toned neighborhood and let the neighbors snub as they may, as some did), which means that parents will spend on their children to impress their peers as well as to actually help their kids. This may also be more readily satisfiable today than then. I don't know. And so on, through the rest of the Reiss list. I resend my meme on them at the end by the simple expedient of typing ctrl-R||enter. Neat, isn't it? That's what a UNIX shell account gives you. I'm just giving a framework for speculation. The hard work of empirically weighing the changes in supply and demand for the drives, which as I said are only loosely connected to reproductive success, begins. It will be a nearly impossible task to do with full scientific rigor, since we don't know all that much about the EEA. But, once again, don't compare your findings against a perfectionist model but merely with *competing* explanations, any more than you should compare the actual workings of the market with an ideal government that would correct market defects. P.S. I'm not a Reissian fundamentalist: it's just that he has provided me with one of my many filters with which to view the world.] Some of the respondents to Dan Vining's 1986 Behavioral and Brain Sciences target article, "Social versus Reproductive Success: The Central Theoretical Problem of Human Sociobiology" (9:167-216), did hint at the trade-offs among desires, but only indirectly, as none were economists. My own alleged expertise in the subject at least urges me to look at a trade-off function that may have changed not inconsiderably on the demand side: the change in the demand for curiosity and objectivity caused by the Maureen Dowd factor may be hugely important for the West versus the Rest. But the biggest changes are in the supply of ways to satisfy the Reiss desires. It is the changes on the supply side that apparently outweigh the changes in demand, since the inverted demographic pyramid is common to rich countries and not just to the West. In any case, I hope I've managed to introduce some economic reasoning to more fully explain the inverted demographic pyramid. Enthusiastic eugenicists will have a terrific task ahead to change the demand and supply curves. One of Reiss' drives is Idealism (no. 7 on my list), but the sorts of questions he asked were heavy into redistribution. We know, or should know, that the enthusiasm for redistribution is hyped up with the huge influence of 20th-century leftists in the education business.
Issues were--and still are, there being a lot of momentum (a/k/a culture lag)--largely framed in these terms, much like debates in the Middle Ages were framed in Christian terms. Hauling in manufactured emotions will be easier than changing underlying biologies, at least until Designer Children come along.

------

Manhattan's Little Ones Come in Bigger Numbers
http://www.nytimes.com/2005/12/01/nyregion/01kids.html
By EDUARDO PORTER

The sidewalks crowded with strollers, the panoply of new clubs catering to the toddler set and the trail of cupcake crumbs that seem to crisscross Manhattan are proof: The children are back. After a decade of steady decline, the number of children under 5 in Manhattan increased more than 26 percent from 2000 to 2004, census estimates show, surpassing the 8 percent increase in small children citywide during the same period and vastly outstripping the slight overall growth in population in the borough and city. Even as soaring house prices have continued to lift the cost of raising a family beyond the means of many Americans, the borough's preschool population reached almost 97,000 last year, the most since the 1960's. This increase has perplexed social scientists, who have grown used to seeing Manhattan families disappear into Brooklyn and New Jersey, and it has pushed the borough into 11th place among New York State counties ranked by percentage of population under 5. In 2000, fewer than one in 20 Manhattan residents were under 5, putting the borough in 58th place. "Potentially this is very good news for New York," said Kathleen Gerson, a professor of sociology at New York University. "It depends on whether this is a short-term blip or a long-term trend. We must understand what explains the rise." Indeed, nobody can say for sure what caused the baby boom, but several factors clearly played a part. The city's growing cohorts of immigrants may have contributed, as the number of children in Manhattan born to foreign-born parents has risen slightly since the 1990's. But other social scientists say that the number of births is growing at the other end of the income scale. "I wouldn't be surprised if it had to do with more rich families having babies and staying in Manhattan," said Andrew A. Beveridge, a professor of sociology at Queens College. According to census data, 16.4 percent of Manhattan families earned more than $200,000 last year, up from 13.7 percent in 2000. Kathryne Lyons, 40, a mother of two who left her job as a vice president of a commercial real estate firm when her second daughter was born three years ago, acknowledges that having children in the city is a tougher proposition if one cannot afford nannies, music lessons and other amenities, which, as the wife of an investment banker, she can. "It's much more difficult to be here and not be well-to-do." Over the past few years, New York has become more family-friendly, clearly benefiting from the perception that the city's quality of life is improving. Test scores in public schools have improved, and according to F.B.I. statistics, New York is the nation's safest large city. Sociologists and city officials believe that these improvements in the quality of life in Manhattan may have stanched the suburban flight that occurred in the 1990's. And while Manhattan lacks big backyards for children to play in, it offers a packed selection of services, which can be especially useful for working mothers.
In fact, the baby boomlet also may pose challenges to a borough that in many ways struggles to serve its young. According to Childcare Inc., day care centers in the city have enough slots for only one in five babies under age 3 who need them. And while census figures show that children over 5 have continued to decline as a percentage of the Manhattan population, if the children under 5 stay, they could well put extra stress on the city's public and private school systems, already strained beyond capacity in some neighborhoods. Private preschools and kindergartens "are already more difficult to get into than college," said Amanda Uhry, who owns Manhattan Private School Advisors. So who are these children? Robert Smith, a sociologist at Baruch College who is an expert on the growing Mexican immigration to the city, argued that the children of Mexican immigrants - many of whom live in the El Barrio neighborhood in East Harlem - are a big part of the story. But this is unlikely to account for all of the increase. For example, in 2003, fewer than 1,000 babies were born to Mexican mothers living in Manhattan. And births to Dominicans, the largest immigrant group in the city, have fallen sharply. Some scholars suspect that a substantial part of Manhattan's surge is being driven by homegrown forces: namely, the decision by professionals to raise their families here. Consider the case of Tim and Lucinda Karter. Despite the cost of having a family in the city, Ms. Karter, a 38-year-old literary agent, and her husband, an editor at a publishing house, stayed in Manhattan to have their two daughters, Eleanor and Sarah. They had Eleanor seven and a half years ago while living in a one-bedroom apartment near Gracie Mansion on the Upper East Side. Then they bought the apartment next door and completed an expansion of their home into a four-bedroom apartment two years ago. A little less than a year ago, they had Sarah. "Manhattan is a fabulous, stimulating place to raise a child," Ms. Karter said. "We didn't plan it but we just delayed the situation. We were just carving away and then there was room." The city's businesses and institutions are responding to the rising toddler population. Three years ago, the Metropolitan Museum of Art began a family initiative including programs geared to children 3 and older. The Museum of Modern Art has programs for those as young as 4. In January, Andy Stenzler and a group of partners opened Kidville, a 16,000-square-foot smorgasbord of activities for children under 5 - and their parents - on the Upper East Side. "We were looking for a concentration of young people," Mr. Stenzler said. "There are 16,000 kids under 5 between 60th and 96th Streets." Many of the new offerings reflect the wealth of the parents who have decided to call Manhattan home. Citibabes, which opened in TriBeCa last month, provides everything from a gym and workplaces with Internet connections for parents, to science lessons, language classes and baby yoga for their children. It charges families $6,250 for unlimited membership for three months. Manhattan preschools can charge $23,000 a year. Ms. Uhry, with Private School Advisors, charges parents $6,000 a year just to coach them through the application process to get their children in. Yet in spite of the high costs, small spaces and infuriating extras that seem unique to Manhattan - like the preschools that require an I.Q. test - many parents would never live anywhere else.
"Manhattan has always been a great place for raising your children," said Lori Robinson, the president of the New Mommies Network, a networking project for mothers on the Upper West Side. "It's easier to be in the city with a baby. It's less isolation. You feel you are part of society." ------------- Meme 023: Steven Reiss' 16 Basic Desires 3.9.21 Here's the results of research into the basic human desires. I've ordered them by what I think is my own hierarchy and invite you to do the same for yourself and for historical personages, like Ayn Rand. This list is not only important in its own right but has great implications for one's political concerns. Curiosity being my highest desire, I am an advocate of what I call the "information state," whereby the major function of the central government is the production of information and reserach. (Currently, it occpies at most two percent of U.S. federal spending.) And since independence is no. 3 for me, I am close to being a libertarian, in the sense that I'd vote with Ron Paul on most issues. But someone for whom independence is his most basic desire, he'd be advocating a full liberatarian order and impose it on states and counties. On the other hand, an idealist could advocate massive redistribution programs from rich to poor and military intervention in foreign countries that do not live up to his standards. I simply care much less than he does about such matters. The task of designing a state, or a world federal order, that reflects the diversity of desires and not just "this is what I want the world to be" continues. STEVEN REISS' 16 BASIC DESIRES Curiosity. The desire to explore and learn. End: knowledge, truth. Romance. The desire for love and sex. Includes a desire for aesthetic experiences. End: beauty, sex. Independence. The desire for self-reliance. End: freedom, ego integrity. Saving. Includes the desire to collect things as well as to accumulate wealth. End: collection, property. Order. The desire for organization and for a predictable environment. End: cleanliness, stability, organization. Family. The desire to raise one's own children. Does not include the desire to raise other people's children. End: children. Idealism. The desire to improve society. Includes a desire for social justice. End: fairness, justice. Exercise. The desire to move one's muscles. End: fitness. Acceptance. The desire for inclusion. Includes reaction to criticism and rejection. End: positive self-image, self-worth. Social Contact. The desire for companionship. Includes the desire for fun and pleasure. End: friendship, fun. Honor. The desire to be loyal to one's parents and heritage. End: morality, character, loyalty. Power. The desire for influence including mastery, leadership, and dominance. End: achievement, competence, mastery. Vengeance. The desire to get even. Includes the joy of competition. End: winning, aggression. Status. The desire for social standing. Includes a desire for attention. End: wealth, titles, attention, awards. Tranquility. The desire for emotional calm, to be free of anxiety, fear, and pain. End: relaxation, safety. Eating. The desire to consume food. End: food, dining, hunting. Source: Steven Reiss, _Who am I?: the 16 basic desires that motivate our actions and define our personalities. NY: Penguin Putnam: Jeremy P. Tarcher/Putman, 2000. I have changed his exact wordings in a few places, based upon the fuller descriptions in his book and upon his other writings. The ends given in the table are taken directly from page 31. 
The desires are directed to the psychological (not immediately biological) ends of actions, not to actions as means toward other actions. He has determined the basic ends by the use of factor analysis, a technique pioneered by Raymond B. Cattell. Spirituality, for example, he finds is distributed over the other desires and is not an end statistically independent of the other ends. And he finds that the desire for aesthetic experiences is so closely correlated with romance that he subsumes it thereunder. Reiss' list is in no particular order, and so, after much reflection, not only upon my thinking but upon my actual behavior, I have ranked the desires by what I think is my own hierarchy. A few remarks, directed to those who know me, are in order:

Saving: Not much good at keeping within my budget, I have a relatively big pension coming and have a large collection of recordings of classical music and books.
Order: While my office and home are a mess, I have written a number of extremely well-organized discographies.
Family: Not always an attentive father, I have kept at a job I've not always liked, instead of starting over again as an assistant professor.
Idealism: I took the description from an earlier article by Reiss, so as not to restrict it to state redistribution of income.
Exercise: I am well known for my running and for having entered (legally) the Boston Marathon, but I usually just set myself a daily routine and don't go canoeing, for example, when on vacations. In high school, I was notorious for avoiding exercise.
Acceptance: I can be rather sensitive to being ignored, though I don't do much about it in fact.
Social Contact: Fun, for me, is intellectual discussion, often with playful allusions to words and ideas.
Honor: I'm very low on patriotism, but I do like to think of myself as having good character.
Vengeance: I've been told I love to win arguments for their own sake, but I have only a small desire ever to get even and never act upon it.

[I am sending forth these memes, not because I agree wholeheartedly with all of them, but to impregnate females of both sexes. Ponder them and spread them.]

From checker at panix.com Sun Dec 4 01:19:15 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 3 Dec 2005 20:19:15 -0500 (EST)
Subject: [Paleopsych] SF Area Independent Media Center: Government Accounting Office takes a big bite out of the Bush clique's pretense of legitimacy
Message-ID:

Government Accounting Office takes a big bite out of the Bush clique's pretense of legitimacy
http://www.indybay.org/print.php?id=1783843
San Francisco Bay Area Independent Media Center

[This, and the related articles I have appended, are about how the Republicans stole the 2004 election by manipulating electronic voting machines, esp. in Ohio. I don't have the full URL for the GAO report handy, but it's easy to get from http://www.gao.gov. It deals with the potential for fraud but does not claim actual fraud. The other articles argue for actual fraud.

[Now, elections do get stolen. I'm convinced that Lyndon Johnson and Mayor Daley got Kennedy elected. Nixon thought so too, but he decided, for the good of the country, not to contest it. And it's not unlikely that Tilden's victory was taken from him by fraud. I don't think fraud was responsible for Bush's victory in Florida in 2000, since the margin was well below the difference between Algore and Bush fraudulent (illegal) voters (thousands of them and a 2:1 tilt toward Algore).
I'm not so sure about Ohio, if what the articles below say is plausible.

[Algore's presidency would have made a difference: he's been around Washington long enough to know when pressure is being brought to bear on him. Rather than listen to neocon arguments for war in Iraq, he would have tuned them out. We're less than a year into Bush's second term. Kerry would have (or rather will have) bowed to political pressure to pull out of Iraq, and the media would turn away from whatever disaster happens as a result of the pullout.

[What's worth thinking about is why the media are ignoring the GAO report. (And also why the isolationist right is ignoring it. Nothing on LewRockwell.com. Maybe they don't much care and certainly think Kerry would have been at most a slightly lesser evil than Bush, while the left at least hopes that Kerry would have been a significantly lesser evil.) The reason is that the mainstream media are, above all else, Establishment. Most of them are Democrats, it is true, and some of them make noises about how awful Bush is. (They say Reagan "ended" welfare, you know, whereas there was at best a decrease in the rate of increase.) But they believe mostly in the System. They most definitely do not want there to be a widespread belief that something as illegitimate as stealing a Presidential election can happen north of the Rio Grande.

[Nor a President assassinated by any other than a lone nut. During the 40th anniversary of the JFK assassination, talking heads were unanimous in upholding the lone-nut hypothesis, even though only 20% of the citizens do.

[When you think about elites, ask what beliefs are mandated of their members. Some must be publicly affirmed. Others may be only privately doubted. In what sense is Gary Bauer, the evangelist who gets on lots of talk shows, a member of the elite? Would it hurt him to come out against the lone-nut theory of the JFK assassination? Is it best for a respectable dissident to have only a small number of disagreements? Can Bill Gates come out against modern art? Can anyone besides Tom Wolfe do so?

[How big is the elite? How many different elites are there? They interlock, like corporate board members serving on art museum boards. Ponder these questions as you read what seems to me a plausible case of massive voter fraud yet a studious ignoring by the mainstream media.]

by Joe Baker
Wednesday, Nov. 16, 2005 at 12:03 PM

How much will it take before people rise up and force the Repugs in Congress to impeach this dangerous thief? Secret CIA torture gulags, now secret Shiite torture chambers in Iraq. At least two stolen US elections. It boggles the mind.

GAO report upholds Ohio vote fraud claims
By Joe Baker, Senior Editor

As if the indictment of Lewis "Scooter" Libby wasn't enough to give the White House some heavy concerns, a report from the Government Accounting Office takes a big bite out of the Bush clique's pretense of legitimacy. This powerful and probing report takes a hard look at the election of 2004 and supports the contention that the election was stolen. The report has received almost no coverage in the national media. The GAO is the government's lead investigative agency, and is known for rock-solid integrity and its penetrating and thorough analysis. The agency's agreement with what have been brushed aside as conspiracy theories adds even more weight to the conclusion that the Bush regime has no business in the White House whatever. Almost a year ago, Rep.
John Conyers, senior Democrat on the House Judiciary Committee, asked the GAO to investigate the use of electronic voting machines in the Nov. 2, 2004, presidential election. That request was made as a flood of protests from Ohio and elsewhere deluged Washington with claims that shocking irregularities were common in that vote and were linked to the machines. CNN said the Judiciary Committee got more than 57,000 complaints after Bush's claimed re-election. Many were made under oath in a series of statements and affidavits in public hearings and investigations carried out in Ohio by the Free Press and other groups seeking to maintain transparent elections. OnlineJournal.com reported that the GAO report stated that "some of [the] concerns about electronic voting machines have been realized and have caused problems with recent elections, resulting in the loss and miscount of votes." This is the only democratic nation that permits private partisan companies to count and tabulate the vote in secret, using privately held software. The public is excluded from the process. Rev. Jesse Jackson and others have declared that public elections must not be conducted on privately owned machines. The makers of nearly all electronic voting machines are owned by conservative Republicans. The chief executive of Diebold, one of the major suppliers of electronic voting machines, Warren "Wally" O'Dell, went on record in the 2004 campaign vowing to deliver Ohio and the presidency to George W. Bush. In Ohio, Bush won by only 118,775 votes out of more than 5.6 million cast. Honest-election advocates contend that O'Dell's statement to hand Ohio's vote to Bush still stands as a clear indictment of an apparently successful effort to steal the White House.

Some of the GAO's findings are:

1. Some electronic voting machines did not encrypt cast ballots or system audit logs, and it was possible to alter both without being detected. In short, the machines provided a way to manipulate the outcome of the election. In Ohio, more than 800,000 votes were cast on electronic voting machines, roughly seven times Bush's official margin of victory.

2. The report further stated that it was possible to alter the files that define how a ballot looks and works, so that the votes for one candidate could be recorded for a different candidate. Very many sworn statements and affidavits claim that this did happen in Ohio in 2004.

Next, the report says, vendors installed uncertified versions of voting system software at the local level. The GAO found that falsifying election results without leaving evidence of doing so, by using altered memory cards, could easily be done. The GAO additionally found that access to the voting network was very easy to compromise because not all electronic voting systems had supervisory functions protected by password. That meant access to one machine gave access to the whole network. That critical finding showed that rigging the election did not take a widespread conspiracy but simply the cooperation of a small number of operators with the power to tap into the networked machines. They could thus alter the vote totals at will. It therefore was no big task for a single programmer to flip vote numbers to give Bush the 118,775 votes. Another factor in the Ohio election was that access to the voting network was also compromised by repeated use of the same user ID, coupled with easy-to-guess passwords. Even amateur hackers could have gotten into the network and changed the vote.
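[A concrete gloss on the first finding, from me rather than the GAO: an audit log with no cryptographic seal can be edited without a trace, while even a simple keyed HMAC makes tampering detectable. A minimal sketch, with an invented per-machine key:

    # Why unauthenticated audit logs are alterable without detection.
    # Illustration only; the key and log entries are made up.
    import hmac, hashlib

    KEY = b"hypothetical-machine-secret"

    def seal(entry: str) -> str:
        # Keyed MAC over the log entry; recomputing it requires KEY.
        return hmac.new(KEY, entry.encode(), hashlib.sha256).hexdigest()

    # Unsealed log, as the GAO found on some machines: an edit leaves no trace.
    log = ["vote cast: candidate A"]
    log[0] = "vote cast: candidate B"              # nothing can detect this

    # Sealed log: the same edit no longer verifies against its stored tag.
    entry = "vote cast: candidate A"
    tag = seal(entry)
    entry = "vote cast: candidate B"               # tampering attempt
    print(hmac.compare_digest(seal(entry), tag))   # False: tampering detected

Nothing exotic is involved; the report's complaint, as summarized above, is that some fielded systems lacked even this much protection.]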
System locks were easily picked, and keys were easy to copy, so gaining access to the system was a snap. One digital machine model was shown to have been networked in such a rudimentary manner that if one machine experienced a power failure, the entire network would go down. That is too fragile a system to decide the presidency of the United States. Problems obviously exist with security protocols and screening methods for vendor personnel. The GAO study clearly shows that no responsible business would operate with a computer system as flimsy, fragile and easily manipulated as the one used in the 2004 election. These findings are even more damning when we understand that the election in Ohio was run by a secretary of state who also was co-chairman of Bush's Ohio campaign. Contrary to the conclusions of fraud skeptics, the GAO's findings confirm that the network, which handled 800,000 Ohio votes, was vulnerable enough to permit a handful of purposeful operatives to turn the entire election by means of personal computers using comparatively simple software. One Ohio campaign operative, Tom Noe, a coin dealer, was indicted Oct. 27 for illegally funneling $45,400 to Bush by writing checks to others, who then wrote checks to Bush's re-election campaign, allegedly dodging the $2,000 limit on contributions by an individual. "It's one of the most blatant and excessive finance schemes we have encountered," said Noel Hillman, section chief of the U.S. Department of Justice's public integrity section, as quoted in the Kansas City Star. In the 2000 election, Florida was the key; in the 2004 election, Ohio was the key.

From the Nov. 2-8, 2005, issue

_________________________________________________________________

The Free Press -- Independent News Media - Election Issues
http://www.freepress.org/departments/display/19/2005/1556
The Free Press: Speaking Truth to Power
Thu Dec 01 2005

What John Kerry definitely said about 2004's stolen election and why it's killing American democracy
by Bob Fitrakis & Harvey Wasserman
November 10, 2005

The net is abuzz about what John Kerry may or may not be saying now about the stolen election of 2004. But we can definitively report what he has said about New Mexico and electronic voting machines soon after his abrupt "abandon ship" with 250,000 Ohio votes still uncounted. And we must also report that what he's not saying is having a catastrophic effect on what's left of American democracy, including what has just happened (again) in Ohio 2005. In recent days Mark Crispin Miller has reported that he heard from Kerry personally that Kerry believes the election was stolen. The dialog has been widely reported on the internet. Kerry has since seemed to deny it. We have every reason to believe Miller. His recent book FOOLED AGAIN has been making headlines along with our own HOW THE GOP STOLE AMERICA'S 2004 ELECTION & IS RIGGING 2008. As in his campaign for president, Kerry has been ambivalent and inconsistent about Ohio's stolen vote count. Soon after the presidential election, Kerry was involved in a conference call with Rev. Jesse Jackson and a number of attorneys, including co-author Bob Fitrakis. In the course of the conversation, Kerry said, "You know, wherever they used those [e-voting] machines, I lost, regardless if the precinct was Democratic or Republican." Kerry was referring to New Mexico. But he might just as well have been talking about Ohio, where the election was decided, as well as about Iowa and Nevada.
All four of those "purple" states switched from Democratic "blue" in the exit polls as late as 12:20 a.m. to Republican "red" a few hours later, giving Bush the White House. A scant few hours after that, Kerry left tens of thousands of volunteers and millions of voters hanging. With Bush apparently leading by some 130,000 votes in Ohio, but with a quarter-million votes still uncounted here, Kerry abruptly conceded. He was then heard from primarily through attorneys from Republican law firms attacking grassroots election protection activists who dared question the Ohio outcome. In the year since that abrupt surrender, Teresa Heinz Kerry has made insinuations that she thought the election might have been stolen. But there has been no follow-up. Now we have this report from M. C. Miller that Kerry said he knew the election was stolen, and then denied saying it. Coming from Kerry, the inconsistency would be entirely consistent. But those committed to democracy and horrified by the ongoing carnage of the Bush catastrophe still have no credible explanation as to why Kerry abandoned ship so abruptly. He had raised many millions specifically dedicated to "counting every vote," which clearly never happened in Ohio. More than a year after the election, more than 100,000 votes are STILL uncounted in the Buckeye State. And now, tragically, we have had another set of stolen elections. Four statewide referenda aimed at reforming Ohio's electoral process have been defeated in a manner that is (again) totally inconsistent with polling data. One statewide referendum, aimed at handing the corrupt Taft Administration a $2 billion windfall, has allegedly passed, again in a manner totally inconsistent with polling data, or even a rudimentary assessment of Ohio politics. We will write more about this tomorrow. But suffice it to say these latest "official" vote counts make sense only in the context of a powerful recent report issued by the Government Accounting Office confirming that electronic voting machines like those used in Ohio can be easily hacked by a very few players to deliver a vote count totally at odds with the will of the electorate. We have seen it in the presidential elections of 2000 and 2004, in at least three Senatorial races in 2002, and now in the referenda in Ohio 2005, and possibly elsewhere. How could this have happened? By and large, the nation is in denial, including much of the left. Miller recently debated Mark Hertsgaard over a Mother Jones review of both our books. The idea that the 2004 election could have been stolen has also been attacked by others on the left. Some reporters have briefly visited here or made calls from the coasts and then taken as gospel anything that mainstream Democratic regulars utter, even if it's totally implausible and counter-factual. For example, they would have you believe that, in direct contradiction to how elections have gone in Ohio for decades, it's now routine for boards of elections to record that 100% of the precincts are reporting, and then suddenly add 18,615 more votes at 1:43 a.m. after the polls have been closed since 7:30 p.m. and 100% of the precincts had been reporting since approximately 9 p.m. Or that 18,615 Miami County votes could come in late with an impossibly consistent 33.92% for Kerry, as if somebody had pushed a button on a computer with a pre-set percentage---just as the GAO says it can be done.
Or that it's OK for a Democratic county election official, with a lucrative contract from the Republican-controlled Board of Elections (BOE), to admit he doesn't really know whether the vote count had been doctored. Or it's fine for BOE officials to take election data home to report on from their personal PCs. Or for central tabulators to run on corporate-owned proprietary software with no public access. Or for BOE officials to hold up vote counts late into the night that time and again miraculously provide sufficient margins for GOP victories, as with Paul Hackett's recent failed Congressional race in southwestern Ohio. Or for one precinct to claim a 97.55% turnout when a Free Press/Pacifica canvass quickly found far too many people who didn't vote to make that possible. There is clearly no end to this story, and there is no indication the dialog on the net will diminish, even though the mainstream media---like the mainstream Democratic Party---absolutely refuses to touch this issue. But ultimately, whatever John Kerry or the bloviators or even the left press say about these stolen elections, America is very close to crossing the line that permanently defines the loss of our democracy. As we will show tomorrow, this week's theft of five referendum issues in Ohio is yet another tragic by-product of the unwillingness of John Kerry and so many others to stand up for a fair and reliable electoral process in this country.

--

Bob Fitrakis and Harvey Wasserman are co-authors of HOW THE GOP STOLE AMERICA'S 2004 ELECTION & IS RIGGING 2008, available at www.freepress.org, and, with Steve Rosenfeld, of WHAT HAPPENED IN OHIO, to be published this spring by The New Press.

A Discussion with Mark Crispin Miller - Democratic Underground
http://www.democraticunderground.com/articles/05/11/05_mcm.html
November 5, 2005

On November 3-4, 2005, Mark Crispin Miller, author of Fooled Again, took part in an online discussion at Democratic Underground, answering questions from members of our message board. This is a lightly edited transcript of that discussion. The original discussion thread can be found here. Mark might even return to continue the discussion.

Skinner: Today we are very excited to host an online discussion with Mark Crispin Miller. Mark is a professor of media ecology at New York University. Some of you may remember him from our online discussion on Democratic Underground in May of 2002. He is well known for his writings on all aspects of the media and for his activism on behalf of democratic media reform. He has written a number of books, including Boxed In: The Culture of TV, The Bush Dyslexicon: Observations on a National Disorder, and Cruel and Unusual: Bush/Cheney's New World Order. He writes regularly on his blog, News From Underground. Mark has a brand new book about the 2004 election, Fooled Again: How the Right Stole the 2004 Election, and Why They'll Steal the Next One Too (Unless We Stop Them). This discussion is going to be pretty informal. Mark has some book signings and other events today, so he might be checking in a few different times throughout the day, and he is not going to be able to answer every question that is posted here. He will pick the questions that he considers most relevant and answer those. All DU members are welcome to participate. If you have a question or topic that you would like to discuss with Mark, just click "Reply" on this message to post it. Mark, thank you so much for being with us.
The first question is an easy one. Please tell us about your book.

Mark Crispin Miller: Hi, everyone. It's a pleasure to be here. Many warm thanks, Stephanie, for making this happen. Why I wrote Fooled Again: The scandal of last year's election never resonated as it should have done, because the national Democrats AND "the liberal media" refused to face, or even to discuss, the facts. We very badly need electoral reform, but we won't get it if that mammoth scandal doesn't finally resonate. My aim in writing Fooled Again was to lay out the evidence that Bush & Co. stole their so-called "mandate," so that the scandal might at last resound, so that we'll all be motivated to repair the system. If we don't, it seems to me, we're really cooked. Let me add that I myself am not a Democrat but a proud independent. This is not a partisan endeavor but a crucial civic issue. There's evidence that many a Republican did NOT vote for Bush/Cheney in 2004. Those folks too were disenfranchised, along with countless voters on the other side. I await your questions/comments.

Bruce McAuley: Hi Mark! Given what has happened so far with the Bush administration, what do you foresee for the future? Will the neo-conservatives have a final triumph, or will liberalism make a resurgence? Or neither of the above? Best guesses, please, and thanks for participating!

Mark Crispin Miller: Bruce, liberalism will make a resurgence; or, rather, it is resurgent already, although many liberals out there don't know they're liberals. It's an odd situation. The word itself is now pejorative, thanks to the far-right propaganda drive that's overwhelmed our politics and culture for the last few decades. So folks are often quick to say that they're not liberals--but their politics, on nearly every score, ARE, by any definition, liberal: economically, environmentally, on foreign policy, on healthcare, abortion rights, you name it. Because the word has been so badly tarnished, I'd prefer to say that our Enlightenment ideals will re-assert themselves. I deem myself a follower of Jefferson and Paine. The world-view of those framers will prevail, if we promote it and defend it just as zealously as BushCo has attacked it.

underpants: How susceptible to the "first story out there" is the MSM? I haven't read your work so excuse me if this has been covered. It appears that the news media in this country all follow the very first wire report written on an event or an issue. Is it really that cut and dried? And does the right really have packaged, ready-to-go versions of what I just saw (it would appear that they do)?

Mark Crispin Miller: The right has the propaganda thing down cold. The MSM, moreover, will certainly not follow any story that it's disinclined to follow, however hot it may appear. Every day amazing pieces come from the Associated Press, with no follow-up whatsoever. AP did a good story on the GAO report on electronic touchscreen voting machines. There was no follow-up at all.

mzmolly: What is the first thing we CAN/should do to secure our voting system? Just want some tangible ideas for Democrats and other concerned citizens.

Mark Crispin Miller: The first thing to do is to campaign relentlessly, in every way at hand, to get the scandal of last year's election on the national radar screen. Unless we do that, all our policy suggestions will mean nothing.
As we do that, though, we should also be resisting the proliferation of touch-screen voting machines sold by private vendors--Diebold, ES&S, Sequoia--and agitating on behalf of paper ballots, unless and until we learn about a tamper-proof computer-based system (if such is possible). That would be a local matter, by and large. We also should be working very hard to get the Voting Rights Act renewed completely. (The Busheviks want to remove certain provisions from it, so that it can then be junked by the Scalito Court.) And we must support Rep. Jesse Jackson III's call for a constitutional amendment formally confirming every adult American's right to vote, and establishing a uniform federal voting system. We should also enable same-day registration, extend the voting period to, say, a week, advocate for Instant Run-off Voting (IRV), and do whatever else it takes to make the system truly democratic.

sfexpat2000: I'd like to ask Mark, was there a moment, or an event, that you can identify as the one that spurred you to write on this topic? Thanks.

Mark Crispin Miller: That moment was Election Day, and the huge screaming gap between the propaganda ("It's all gone really well!") and what was really happening coast to coast.

IndyOp: Praise: Loved the Harper's article! Thank you! My Question: Do we have the votes? Are you convinced that all of the fraudulent actions stole the election from Kerry? Do you estimate numbers of votes stolen in your book? Also, MCM - Stephanie said that you wanted the link for the Petition to Keep Marc Maron on Morning Sedition. More Contact Information for Danny Goldberg at AAR - A call from Mark Crispin Miller might get Goldberg's attention.

Mark Crispin Miller: It's very hard to come up with precise figures. That's the problem. But consider, for example, that the Census Bureau came out in late May with an astounding revelation. According to their survey, 3.4 million more Americans claimed to have cast ballots in 2004 than the official toll of those who voted. So maybe some of them were lying. OK, let's say half were lying. That still leaves some 1.7 million votes that somehow never got recorded. And that number does not include those (countless) voters who knew very well that they could not vote, or even register. And neither of those sums includes those US citizens abroad who tried and failed to vote. (The last chapter of Fooled Again is all about Bush/Cheney's interference with the expatriate vote, which includes up to 7 million ballots.) Put it all together, and what does it spell? "IT CAN HAPPEN HERE, AND DID." And EVERYONE out there, PLEASE contact Air America, and urge the board NOT to allow the cancellation of Marc Maron's show!!!

wrathofkahn: Now I'm confused... "And that number does not include those (countless) voters who knew very well that they could not vote, or even register." Umm... If they knew that they could not vote or register (vs. simply choosing not to do so), then I must assume that they were ineligible to vote. How is it that someone who couldn't have voted anyway could have affected the outcome of the election (other than campaigning, etc.)?

Mark Crispin Miller: Those who tried to register and/or vote and couldn't. Not because they were ineligible. They were eligible, and yet could not register or vote.

Arkana: Mr. Miller, I read "The Bush Dyslexicon" and loved it, BTW.
I wanted to ask you: What is your proposal to deal with companies such as Diebold, ES&S, Triad, and others that "hack the vote"?

Mark Crispin Miller: All private vendors should be outlawed.

nashville_brook: Can you please speak to the importance of EXIT POLLS in our case for ELECTION FRAUD. What's the appropriate weight to give exit poll discrepancies in the ongoing debate? And (follow-up) Do you have a response to EXIT POLL DISCREPANCY deniers; those who claim we either don't have all the information yet, or that we don't understand the numbers? Thank you -- and I just have to say... everyone we've loaned your Patriot Act video to has compared you to the late Spalding Gray. We look forward to more monologues.

Mark Crispin Miller: The exit poll/"official" count discrepancies are certainly significant, although the issue is extremely complicated. Let me recommend the writings of Steve Freeman at the University of Pennsylvania. He has a book, co-written with Joel Bleifuss, coming out from Seven Stories in a month or so. A must-read. Steve is an expert on the subject of those polls. He's debated Warren Mitofsky, who came off the worse for it.

Just Me: Numerous states have enacted "paper trail" laws. Will such laws be sufficient to protect our votes? If not, what other actions do you suggest should be taken?

Mark Crispin Miller: Paper trails per se are not enough. Certainly it's better to have paper trails than none, but the mere existence of such disparate slips of paper is no panacea. I think that there should be a paper BALLOT, so that the ballots can be stored indefinitely and counted or recounted as required. The TruVote machine looks like a very good idea. (That's the company whose CEO was evidently Silkwooded last year.)

ignatzmouse: Many Threads Into an Unmistakable Case: I'm sorry that I can't stay long, but I wanted to at least give a high recommendation to "Fooled Again." As with all of Mark's books, it is exhaustively researched, insightful, and has teeth. Mark does cite a couple of my studies, including the "Unofficial Audit of the NC Election" that initially appeared here at DU. I'm deeply honored to be included, but it makes it even better for I have read and respected Mark for years. To paraphrase Van Morrison (the way I like to hear it), "If you pull your punches, you don't push the river." Mark pushes the river. "Cruel and Unusual" is the benchmark for me in getting at who these people are and what partially concealed agenda they seek. It's an important book. Likewise, "Fooled Again" pulls together the many-pronged RNC attack on the election process and exposes it in a way that is hard to marginalize. That is critically important because the culprits utilize marginalization of facts to elude media focus and cover their trail. They'll say... "but there were reported electronic discrepancies that favored Kerry too. See, it all amounts to much ado about a few electronic glitches." But, in fact, if you look at the EIRS data, the electronic vote switching favors Bush by a ridiculously large percentage. It is also interesting to see how often these reports are centered in minority districts. To have someone of Mark Crispin Miller's credentials not be fooled by the marginalizations and not carry the comfortable disdain for populism that seems embedded in most of the national media is necessary and validating if the story of what happened in the 2004 election is to reach out and enable reform in the future.
Absentees and aborting votes: I've read the accounts of the missing absentee ballots in Florida (which you also nicely document) and have likewise noted in several states the unlawful collection of absentee ballots by mysterious persons and groups. In Georgia, we've just had the Republican legislature attempt to restrict minority voting by creating a voter ID requirement where no documented fraud has occurred. Interestingly, however, the voter ID restrictions would not apply to absentee ballots. To me, that's a tip that one method of rigging is to either create phantom absentee voters or revote for "captured" absentee ballots. Something very fishy is going on with absentees. I noted this particularly in Nevada, where they have verified voting. Absentee fraud could be a way to circumvent all other measures of safeguarding the vote. Did you get a sense of rank in the types and methods of vote fraud -- electronic, vote-switching DREs, absentee, various types of disenfranchisement, etc.? And finally, at the old Kerry-Edwards forum on election day, one of the regulars posted an odd firsthand account that I have not seen since. While on the phone to Blackwell's Secretary of State headquarters, she was put on hold and could hear a phone bank of numerous people in the SoS's office making phone calls to voters stating that they were calling from Planned Parenthood and asking that they vote for John Kerry in order to keep abortion legal. My take was that they were calling identified Catholic voters in order to anger them to the polls to vote for Bush. That sort of illegal and underhanded tactic is Rovian by nature (or Mehlman-esque, as the case may be) and I would guess prosecution-worthy. The old Kerry-Edwards forum is long gone, and I have no way of researching it further. Have you heard of similar Ohio accounts, or is that state so awash in corruption that it almost gets lost in the mix?

Mark Crispin Miller: I salute you, ignatzmouse. A thousand thanks for your kind words. I think your work is indispensable, and was delighted to be able to include it in my book, which seems all the stronger for your research. I had not heard anything about that phone bank. If you find it, could you send it to me?

cry baby: Thank you for coming online with us! Do you think that the states will actually entertain the idea of replacing the voting machines that they just purchased to be in compliance with HAVA? Can those machines be retrofitted with a "proof of vote" certificate, and would that keep our elections from being stolen? How likely do you believe it is that states will actually go to a voter-verified paper ballot (which is what I'd like to see)?

Mark Crispin Miller: The states will do what their residents demand they do, if the demand is long and loud enough. HAVA, furthermore, should be repealed ASAP.

SteppingRazor: I haven't read the book -- yet -- but from what I understand... you rely fairly heavily on anecdotal evidence, which -- while certain to stir the proper response -- doesn't carry much weight in scientific or (more importantly) legal analysis. I ran into the same problem while looking into the 2000 election here in Florida -- plenty of people willing to talk, but little direct evidence of willful manipulation. My question is, do you believe that, if given to a prosecutor with subpoena power, real evidence not relying on circumstance or anecdote could be found, such that either this administration and/or the leaders of the Republican Party could be held criminally liable?
If so, what would it mean for both parties in the long term (the short-term conclusions being fairly obvious)? In other words, if taken into the ostensibly objective realm of the courtroom, could this dog hunt?

Mark Crispin Miller: I have far more than anecdotal evidence, which, as you note, works better in a narrative composed for broad consumption than it would in court. If you'd like a good example of non-anecdotal evidence, please let me recommend the section of the book that deals with Sproul and Associates. There is solid evidence of fraud committed by the GOP--and also evidence of a bald effort by the party to conceal all trace of that wrongdoing.

Fly by night: A few more questions. But first, thanks kindly for all you do. I would like to know your impressions of how your piece has been received, both among other journalists and among the general public. Any feel for the impact on sales of Harper's at the newsstand, LTTEs or hits on Harper's and your websites? (I'm trying to gauge the legs of this story.) What evidence from other states besides Ohio (or the behavior of the Rethugs in Congress and elsewhere) during and after the election confirms your suspicions that the election was stolen? What are your reactions to the recent piece o' shit article in Mother Jones or the older piece o' shit article in TomPaine which dismissed the election fraud evidence? Why do you believe there is still such resistance (even among progressives) to acknowledging that our elections are being stolen these days? Any responses to any of these questions would be appreciated. Thanks again from Tennessee. We're not a red state or a blue state -- we're an Orange State.

Mark Crispin Miller: That issue of Harper's broke a lot of records for newsstand sales. It sold more than any prior issue since the one that published Norman Mailer's Prisoner of Sex in 1972, and may well have outsold that one too. (We don't know yet.) In any case, the response was exhilarating. The evidence of nationwide vote theft is vast. It's in the book. (In large part, it IS the book.) I was disappointed in Mark Hertsgaard's piece--especially as he's a friend of mine, and generally a very good reporter. He really blew it there. For one thing, my book is not based largely on the Conyers Report: a characterization that implies that my focus is Ohio. In fact, I devote only ten pages to the Conyers Report, and another five to scandals in Ohio NOT discussed by Conyers et al. The book is nearly 300 pages long, with copious evidence from many states. And more generally, Mark's piece badly distorted not just my book but the Conyers Report (WHAT WENT WRONG IN OHIO?) and the excellent compilation of documents put together by Bob Fitrakis and Harvey Wasserman (DID GEORGE W. BUSH STEAL THE ELECTION IN 2004?). The evidence speaks for itself. I wish that Mark had worked a little harder on that piece. The resistance is based partly on corruption, in some cases, and careerism, and very largely on denial. The implications of the theft last year are very grave. Better to deny them categorically. The whole red state/blue state dichotomy is pure crapola.

ms liberty: Hi Mark! Thanks for chatting with us... The GAO report on miselection '04 came out last week to virtual silence from the mainstream media, but BushCo is (finally) getting a more critical look from them, thanks to Mr. Fitzgerald. Isn't this the perfect time to push this issue, with this corrupt regime already vulnerable? How can we get this issue more attention from the MSM?
Are you going to be on The Daily Show, or do you have any MSM interviews scheduled? What I would really enjoy is to see you on Washington Journal! Loved the Dyslexicon, and Cruel and Unusual. I'm looking forward to reading your new one! Anything you can do to help us save Marc Maron is REALLY appreciated!

Mark Crispin Miller: These questions are terrific--I wish I had the time to answer all of them in detail! The GAO report is an important document. The press's silence on it is appalling, and, I'm afraid, revealing. The Daily Show said they would have me on if a relevant "big story" should break sometime soon. I'm not sure what that means. The GAO report is such a story, except that, as you noticed, it was not a story. So what would such a story be, I wonder? Anyway, I'd love to be invited on. (Feel free to pester them on my behalf!) I think the MSM will be a tough sell for this book, although not as tough as it was a few months ago. The Florida Sun-Sentinel gave me a pretty good review, and I got good reviews as well in Publishers Weekly and Kirkus Reviews. The people at Basic Books are working overtime to get the word out, so we'll see.

readmoreoften: Professor Miller, There are SO MANY wildly outrageous events occurring simultaneously-- the death of our democracy through stolen elections, the prospect of never-ending war, an unprotected and abused labor force (yes, I will be striking next week), the normalization of torture and rape, the loss of civil liberties, the suspicion surrounding the Bush Administration's culpability in 9-11-- why is the public so RESOUNDINGLY SILENT? I believe this resounding silence would have been unthinkable 20 years ago. I went to the World Can't Wait rally at Union Square yesterday and I was perplexed that so few New Yorkers were willing to take to the streets to protest this regime. As undergrads in the late 80s/early 90s, we occupied the administration building because of a slight increase in tuition for low-income students. At this point, I swear I can't imagine undergraduates taking action to stop a college administration from forcing low-income students to sell their organs to pay for tuition. It seems to be more than just a chilling effect. We are living in a media bubble-- a bubble of disinformation. As a people under undemocratic rule, who currently have no ability to manage or confront our mediated environment... how can we cut through the apathy? How do we debrief our fellow Americans? How do we address the fact that even those who are critical of the Bush administration will not confront the gravity of the situation? Do you have any ideas on how to burst the media bubble? Can you share with us any particular strategies you have used to cut through normally thoughtful people's overwhelming desire to pull the covers over their heads and go back to bed? And thank you for signing the faculty democracy statement in support of TAs' freedom to strike!

Mark Crispin Miller: It isn't necessarily apathy. Discontent is more widespread than we are generally led to think. BushCo's popularity among the military, for example, and among military families, is not at all impressive; and he has lost a lot of ground even among his own erstwhile constituents. His current "STRONG approval" rating is now around 22%, with a four-point margin of error, which means that it could be as low as 18%--the same percentage of Americans who did not disapprove of how Bush/Cheney and their Congress tried to meddle in the Terri Schiavo case.
We tend to think of many of our fellow-citizens as apathetic because, let's face it, we too live inside "the media bubble," which represents us to ourselves (and to the whole wide world) as far less discontented than we really are. Now, it is surely true that people should be more than discontented. They should be actively protesting and resisting. (Although there, too, the media tunes out whatever protest and resistance HAS welled up.) On the other hand, the system has radically depoliticized us, training us to watch and, if we can afford it, shop, and little else. We've therefore long since lost our civic virtue, and the necessary habit of saying NO when things become oppressive. Just remember that the situation is a lot more fluid, and potentially explosive, than it appears to be on CNN and in the New York Times. The elites have fallen out with one another -- a clash that now provides us with a most important opportunity to say things that have been verboten for too long. The iron is hot. It's therefore crucial that we not despair, or paralyze ourselves with undue worries vis-a-vis the seeming or alleged indifference of "the masses." DUBYASCREWEDUS: I live in Cleveland, Ohio -- Land of Blackwell the Evil. I know he stole the 2004 election. How can we -- as ordinary citizens -- stop them from doing it again? Are you familiar with State Issues 2, 3, 4 and 5? We have been receiving conflicting views on whether or not to vote for them. Do you know of them, and if so, do you have an opinion? Mark Crispin Miller: I don't know about those issues. What do Bob Fitrakis and Harvey Wasserman say? Free Press is terrific. I trust them all implicitly re: all electoral issues in Ohio. Bill Bored: Why do you favor Early Voting? You say we should "extend the voting period to, say, a week." If we are concerned about security, early voting is not advisable. The longer the machines are available to accept votes, the greater the temptation and opportunity to screw around with them. Also, early voting gives any potential fraudster the knowledge of how the election is going, so that vote rigging can be targeted on election day to areas in which the early results were "disappointing" and in need of reversal. Wouldn't it be safer/better to have an Election Day holiday to allow everyone to get to the polls? Mark Crispin Miller: I don't think we should be using those machines. althecat: Hi Mark.... Alastair from Scoop NZ here.... I will have to buy your book ASAP. And am delighted you have decided to come and chat here in DU. Is Volusia County in the book? I was always very disappointed after we followed up your story ([15]Diebold Memos Disclose Florida 2000 E-Voting Fraud) that Dana Milbank didn't go back and dig a bit deeper into this. For me, this discovery was a bit of a breakthrough in terms of indicating that fraud had quite probably occurred at a fairly high level in the 2000 election. P.S. I will have a scout around the thread to figure out the best place to buy the book. Mark Crispin Miller: Alastair, I'm honored by your praise. Scoop.co.nz is indispensable! 1,000 thanks. Yes, in Fooled Again I do deal with Volusia County -- especially with the fact that Fox News called the race for Bush just at that moment when those 16,000+ Democratic votes had temporarily zipped down the rabbit hole. You can get the book from my own blog, at [16]markcrispinmiller.com, or from Buzzflash.
------------ Mark Crispin Miller interview http://www.stayfreemagazine.org/archives/19/mcm.html Mark Crispin Miller on conspiracies, media, and mad scientists Interview by Carrie McLaren | [8]Issue #19 After years of dropping Mark Crispin Miller's name in Stay Free!, I figured it was time to interview him. Miller is, after all, one of the sharpest thinkers around. His writings on television predicted the cult of irony -- or whatever you call it when actual Presidential candidates mock themselves on Saturday Night Live, when sitcoms ridicule sitcoms, and when advertisements attack advertising. More recently, he has authored The Bush Dyslexicon, aided by his humble and ever-devoted assistant (me). Miller works at New York University in the Department of Media Ecology. Though he bristles at being called an academic, Miller is exactly the sort of person who should be leading classrooms. He's an excellent speaker, with a genius for taking cultural products -- be they Jell-O commercials or George W. Bush press conferences -- and teasing out hidden meanings. (He's also funny, articulate, and knows how to swear.) I talked to Mark at his home in November, between NPR appearances and babysitting duty. He is currently writing about the Marlboro Man for American Icons, a Yale University Press series that he also happens to be editing. His book Mad Scientists: Paranoid Delusion and the Craft of Propaganda (W.W. Norton) is due out in 2004. -CM STAY FREE: Let's start with a simple one: Why are conspiracy theories so popular? MCM: People are fascinated by the fundamental evil that seems to explain everything. Lately, this is why we've had the anomaly of, say, Rupert Murdoch's Twentieth Century Fox releasing films that feature media moguls as villains out to rule the world -- villains much like Rupert Murdoch. Who's a bigger conspirator than he is? And yet he's given us The X-Files. Another example: Time Warner released Oliver Stone's JFK, that crackpot-classic statement of the case that American history was hijacked by a great cabal of devious manipulators. It just so happens that Stone himself, with Time Warner behind him, was instrumental in suppressing two rival projects on the Kennedy assassination. These are trivial examples of a genuine danger, which is that those most convinced that there is an evil world conspiracy tend to be the most evil world conspirators. STAY FREE: Because they know what's inside their own heads? MCM: Yes and no. The evil that they imagine is inside their heads -- but they can't be said to know it, at least not consciously. What we're discussing is the tendency to paranoid projection. Out of your own deep hostility you envision a conspiracy so deep and hostile that you're justified in using any tactics to shatter it. If you look at those who have propagated the most noxious doctrines of the twentieth century, you will find that they've been motivated by the fierce conviction that they have been the targets of a grand conspiracy against them. Hitler believed he was fighting back, righteously, against "the Jewish world conspiracy." [See pp. 30-31] Lenin and Stalin both believed they were fighting back against the capitalist powers -- a view that had some basis in reality, of course, but that those Bolsheviks embraced to an insane degree. (In 1941, for example, Stalin actually believed that England posed a greater danger to the Soviet Union than the Nazis did.) We see the same sort of paranoid projection among many of the leading lights of our Cold War -- the first U.S.
Secretary of Defense, James Forrestal, who was in fact clinically insane; the CIA's James Angleton; Richard Nixon; J. Edgar Hoover; Frank Wisner, who was in charge of the CIA's propaganda operations worldwide. Forrestal and Wisner both committed suicide because they were convinced the Communists were after them. Now, there was a grain of truth to this since the Soviet Union did exist and it was a hostile power. But it wasn't on the rise, and it wasn't trying to take over the world, and it certainly wasn't trying to destroy James Forrestal personally. We have to understand that there was just as much insanity in our own government as there was with the Nazis and the Bolsheviks. This paranoid dynamic did not vanish when the Cold War ended. The U.S. is now dominated, once again, by rightists who believe themselves besieged. And the same conviction motivates Osama bin Laden and his followers. They see themselves as the victims of an expansionist Judeo-Christianity. STAY FREE: Al Qaeda is itself a conspiracy. MCM: Yes. We have to realize that the wildest notions of a deliberate plot are themselves tinged with the same dangerous energy that drives such plots. What we need today, therefore, is not just more alarmism, but a rational appraisal of the terrorist danger, a clear recognition of our own contribution to that danger, and a realistic examination of the weak spots in our system. Unfortunately, George W. Bush is motivated by an adolescent version of the same fantasy that drives the terrorists. He divides the whole world into Good and Evil, and has no doubt that God is on his side -- just like bin Laden. So how can Bush guide the nation through this danger, when he himself sounds dangerous? How can he oversee the necessary national self-examination, when he's incapable of looking critically within? In this sense the media merely echoes him. Amid all the media's fulminations against al Qaeda, there has been no sober accounting of how the FBI and CIA screwed up. Those bureaucracies have done a lousy job, but that fact hasn't been investigated because too many of us are very comfortably locked into this hypnotic narrative of ourselves as the good victims and the enemy as purely evil. STAY FREE: There's so much contradictory information out there. Tommy Thompson was on 60 Minutes the other night saying that we were prepared for biological warfare, that there was nothing to worry about. Yet The New York Times and The Wall Street Journal have quoted experts saying the exact opposite. Do you think this kind of confusion contributes to conspiratorial thinking? I see some conspiratorial thinking as a normal function of getting along in the world. When, on September 11th, the plane in Pennsylvania went down, there was lots of speculation that the U.S. military shot it down. MCM: Which I tend to think is true, by the way. I've heard from some folks in the military that that plane was shot down. STAY FREE: But we have no real way of knowing, no expertise. MCM: Yes, conspiratorial thinking is a normal response to a world in which information is either missing or untrustworthy. I think that quite a few Americans subscribe to some pretty wild notions of what's going on. There's nothing new in this, of course. There's always been a certain demented plurality that's bought just about any explanation that comes along. That explains the centuries-old mythology of anti-Semitism. There will always be people who believe that kind of thing.
To a certain extent, religion itself makes people susceptible to such theorizing. STAY FREE: How so? MCM: Because it tends to propagate that Manichean picture of the universe as split between the good people and "the evil-doers." Christianity has spread this vision -- even though it's considered a heresy to believe that evil is an active force in God's universe. According to orthodox Christianity, evil is not a positive force but the absence of God. STAY FREE: A lot of religious people believe what they want to believe, anyway. Christianity is negotiable. MCM: Absolutely. But when it comes to the paranoid world view, all ethical and moral tenets are negotiable, just as all facts are easily disposable. Here we need to make a distinction. On the one hand, there have been, and there are, conspiracies. Since the Cold War, our government has been addicted to secrecy and dangerously fixated on covert action all around the world. So it would be a mistake to dismiss all conspiracy theory. At the same time, you can't accept everything -- that's just as naïve and dangerous as dismissing everything. Vincent Bugliosi, who wrote The Betrayal of America, is finishing up a book on the conspiracy theories of the Kennedy assassination. He has meticulously gone through the case and has decided that the Warren Report is right. Now, Bugliosi is no knee-jerk debunker. He recognizes that a big conspiracy landed George W. Bush in the White House. STAY FREE: So I take it you don't buy the conspiracy theories about JFK? MCM: I think there's something pathological about the obsession with JFK's death. Some students of the case have raised legitimate questions, certainly, but people like Stone are really less concerned about the facts than with constructing an idealized myth. STAY FREE: Critics of the war in Afghanistan have called for more covert action as an alternative to bombing. That's an unusual thing for the left to be advocating, isn't it? MCM: It is. On the one hand, any nation would appear to be within its rights to try to track down and kill these mass murderers. I would personally prefer to see the whole thing done legally, but that may not be realistic. So, if it would work as a covert program without harm to any innocents, I wouldn't be against it. But that presumes a level of right-mindedness and competence that I don't see in our government right now. I don't think that we can trust Bush/Cheney to carry out such dirty business. Because they have a paranoid world-view -- just like the terrorists -- they must abuse their mandate to "do what it takes" to keep us safe. By now they have bombed more innocents than perished in the World Trade Center, and they're also busily trashing many of our rights. The "intelligence community" itself, far from being chastened by its failure, has used the great disaster to empower itself. That bureaucracy has asked for still more money, but that request is wholly disingenuous. They didn't blow it because they didn't have enough money -- they blew it because they're inept! They coasted along for years in a cozy symbiosis with the Soviet Union. The two superpowers needed one another to justify all this military and intelligence spending, and it made them complacent. Also, they succumbed to the fatal tendency to emphasize technological intelligence while de-emphasizing human intelligence. STAY FREE: Yeah, the Green Berets sent to Afghanistan are equipped with all sorts of crazy equipment.
They each wear gigantic puffy suits with pockets fit to carry a GPS, various hi-tech gizmos, and arms. MCM: That's just terrific. Meanwhile, the terrorists used boxcutters! STAY FREE: Did you see that the U.S. Army has asked Hollywood to come up with possible terrorist scenarios to help prepare the military for attack? MCM: Yeah, it sent a chill right through me. If that's what they're reduced to doing to protect us from the scourge of terrorism, they're completely clueless. They might as well be hiring psychics -- which, for all we know, they are! STAY FREE: The Bush administration also asked Al Jazeera, the Arab TV station, to censor its programming. MCM: Right. And, you know, every oppressive move we make, from trying to muzzle that network to dropping bombs all over Afghanistan, is like a gift to the terrorists. Al Jazeera is the only independent TV network in the Arab world. It has managed to piss off just about every powerful interest in the Middle East, which is a sign of genuine independence. In 1998, the network applied for membership in the Arab Press Union, and the application was rejected because Al Jazeera refused to abide by the stricture that it would do everything it could to champion "Arab brotherhood." STAY FREE: What do you think our government should have done instead of bombing? MCM: I rather wish they had responded with a little more imagination. Doing nothing was not an option. But bombing the hell out of Afghanistan was not the only alternative -- and it was a very big mistake, however much it may have gratified a lot of anxious TV viewers in this country. By bombing, the U.S. quickly squandered its advantage in the propaganda war. We had attracted quite a lot of sympathy worldwide, but that lessened markedly once we killed Afghan civilians by the hundreds, then the thousands. Americans have tended not to want to know about those foreign victims. But elsewhere in the world, where 9/11 doesn't resonate as much, the spectacle of all those people killed by us can only build more sympathy for our opponents. That is, the bombing only helps the terrorists in the long run. And so has our government's decision to define the 9/11 crimes as acts of war. That definition has served only to exalt the perpetrators, who should be treated as mass murderers, not as soldiers. But the strongest argument against our policy is this: that it is exactly what the terrorists were hoping for. Eager to accelerate the global split between the faithful and the infidels, they wanted to provoke us into a response that might inflame the faithful to take arms against us. I think we can agree that, if they wanted it, we should have done something else. STAY FREE: You've written that, before the Gulf War, Bush the elder's administration made the Iraqi army sound a lot more threatening than it really was. Bush referred to Iraq's scanty, dwindling troops as the "elite Republican guard." Do you think that kind of exaggeration could happen with this war? MCM: No, because the great given in this case is that we are rousing ourselves from our stupor and dealing an almighty and completely righteous blow against those who have hurt us. Now we have to seem invincible, whereas ten years ago, they wanted to make us very scared that those Iraqi troops might beat us. By terrorizing us ahead of time, the Pentagon and White House made our rapid, easy victory seem like a holy miracle. STAY FREE: Let's get back to conspiracy theories. Do people ever call you a conspiracy theorist? MCM: Readers have accused me of paranoia.
People who attacked me for The Bush Dyslexicon seized on the fact that my next book is subtitled Paranoid Delusion and the Craft of Propaganda, and they said, "He's writing about himself!" But I don't get that kind of thing often, because most people see that there's a lot of propaganda out there. I don't write as if people are sitting around with sly smiles plotting evil -- they're just doing their jobs. The word propaganda has an interesting history, you know. It was coined by the Vatican. It comes from propagare, which means grafting a shoot onto a plant to make it grow. It's an apt derivation, because propaganda only works when there is fertile ground for it. History's first great propagandist was St. Paul, who saw himself as bringing the word of God to people who needed to hear it. The word wasn't pejorative until the First World War, when the Allies used it to refer to what the Germans did, while casting their own output as "education" or "information." There was a promising period after the war when it got out that our government had done a lot of lying. The word propaganda came to connote domestic propaganda, and there were a number of progressive efforts to analyze and debunk it. But with the start of World War II, propaganda analysis disappeared. Since we were fighting Nazi propaganda with our own, it wasn't fruitful to be criticizing propaganda. STAY FREE: I read that the word "propaganda" fell out of fashion among academics around that time, so social scientists started referring to their work as "communications." It was no longer politically safe to study how to improve propaganda. MCM: Experts in propaganda started doing "communications" studies after the war. Since then, "communication" has been the most common euphemism used for "propaganda," as in "political communication." There's also "psychological warfare" and, of course, "spin." The Cold War was when "propaganda" became firmly linked to Communism. "Communist propaganda" was like "tax-and-spend Democrats" or "elite Republican guard." The two elements were inseparable. If the Communists said it, it was considered propaganda; and if it was propaganda, there were Communists behind it. Only now that the Cold War is over is it possible to talk about U.S. propaganda without running the risk of people looking at you funny. The word does still tend to be used more readily in reference to liberals or Democrats. The right was always quick to charge Bill Clinton -- that leftist! -- with doing propaganda. In fact, his right-wing enemies, whose propaganda skills were awesome, would routinely fault him for his "propaganda." You never heard anybody say Ronald Reagan was a master propagandist, though. He was "the Great Communicator." STAY FREE: Talk a bit about how conspiracy is used to delegitimize someone who's doing critical analysis. I've heard you on TV saying, "I don't mean to sound like a conspiracy theorist, but . . . " People even do this in regular conversation. A friend of mine was telling me about going to Bush's inauguration in D.C. He was stunned that none of the protests were covered by the media but prefaced his comments by saying, "I don't want to sound like a conspiracy theorist, but [the press completely ignored the protests]." It's almost as if people feel the need to apologize if they don't follow some party line. MCM: I wouldn't say that, because there are people who are conspiracy theorists. And I think the emphasis there should not be on the conspiracy but on the theory. A theorist is a speculator.
It's always much easier to construct a convincing conspiracy theory if you don't bother looking at reality. The web is filled with stuff like this. So, if you want to cover yourself, you should say something like: "I don't subscribe to every crackpot notion that comes along, but in this case there's something funny going on -- and here's the evidence." It really is a rhetorical necessity. Especially when you're on TV. STAY FREE: Maybe it's more of a necessity, too, when you're talking about propaganda. MCM: I'll tell you something: it's necessary when you're talking about real conspiracies. You know who benefited big time from the cavalier dismissal of certain conspiracies? The Nazis. The Nazis were expert at countering true reports of their atrocities by recalling the outrageous lies the Allies had told about the Germans back in World War I. The Allies had spread insane rumors about Germans bayoneting Belgian babies, and crucifying Canadian soldiers on barn doors, and on and on. So, when it first got out that the Nazis were carrying out this horrible scheme, their flacks would roll their eyes and say, "Oh yeah -- just like the atrocity stories we heard in WWI, right?" STAY FREE: I once attended a lecture on Channel One [an advertising-funded, in-school "news" program], where a professor dissected several broadcasts. He talked about how Channel One stories always emphasize "oneness" and individuality. Collective efforts or activism are framed in the negative sense, while business and governmental sources are portrayed positively and authoritatively. Now, someone listening to this lecture might say, "That's just your reading into it. You sound conspiratorial." So where do you think this sort of media analysis or literary analysis and conspiracy-mongering intersect? MCM: That's a very good question. For years I've encountered the same problem as a professor. You've got to make the point that any critical interpretation has to abide by the rules of evidence -- it must be based on a credible argument. If you think I'm "reading into it," tell me where my reading's weak. Otherwise, grant that, since the evidence that I adduce supports my point, I might be onto something. Where it gets complicated with propaganda is around the question of intention, because an intention doesn't have to be entirely conscious. The people who make ads, for example, are embedded in a larger system; they've internalized its imperatives. So they may not be conscious intellectually of certain moves they make. If you said to somebody at Channel One, "You're hostile to the collective and you insult the individual," he'd say, reasonably, "What are you talking about? I'm just doing the news." So you have to explain what ideology is. I'm acutely sensitive to this whole problem. When I teach advertising, for example, I proceed by using as many examples as possible, to show that there is a trend, whatever any individual art director or photographer might insist about his or her own deliberate aims. Take liquor advertising, which appeals to the infant within every alcoholic by associating drink with mother's milk. This is clearly a deliberate strategy because we see it in ad after ad -- some babe holding a glass of some brew right at nipple level. She's invariably small-breasted so that the actual mammary does not upstage the all-important product. If that's an accident, it's a pretty amazing accident.
Now, does this mean that the ad people sit down and study the pathology of alcoholics, or is it something they've discovered through trial and error? My point is that it ultimately makes no difference. We see it over and over -- and if I can show you that, according to experts, visual association speaks to a desire in alcoholics, a regressive impulse, then you have to admit I have a point. Of course, there are going to be people who'll accuse you of "reading into it" no matter what you say, because they don't want to hear the argument. This is where we come up against the fundamental importance of anti-intellectualism on the right. They hate any kind of explanation. They feel affronted by the very act of thinking. I ran into this when I promoted The Bush Dyslexicon on talk shows -- which I could do before 9/11. Bush's partisans would fault me just for scrutinizing what he'd said. STAY FREE: I recently read Richard Hofstadter's famous essay about political paranoia. He argued that conspiracy is not specific to any culture or country. Would you agree with that, or do you think there is something about America that makes it particularly hospitable to conspiracy theories? MCM: Well, there's a lot of argument about this. There's a whole school of thought that holds that England's Civil War brought about a great explosion of paranoid partisanship. Bernard Bailyn's book The Ideological Origins of the American Revolution includes a chapter on the peculiar paranoid orientation of the American revolutionaries. But I think paranoia is universal. It's an eternal, regressive impulse, and it poses a special danger to democracy. STAY FREE: Why, specifically, is it dangerous to democracy? MCM: Because democracies have always been undone by paranoia. You cannot have a functioning democracy where everyone is ruled by mutual distrust. A democratic polity requires a certain degree of rationality, a tolerance of others, and a willingness to listen to opposing views without assuming people are out to kill you. There's a guy named Eli Sagan who wrote a book on the destructive effect of paranoia on Athenian democracy. And I think that the American experiment may also fail; America has always come closest to betraying its founding principles at moments of widespread xenophobic paranoia. In wartime, people want to sink to their knees and feel protected. They give up thinking for themselves -- an impulse fatal to democracy but quite appropriate for fascism and Stalinism. The question now is whether paranoia can remain confined to that thirty-or-so percent of the electorate who are permanently crazy. That's what Nixon himself said, by the way -- that "one third of the American electorate is nuts." About a third of the German people voted for the Nazis. I think there's something to that. It's sort of a magic number. STAY FREE: Come to think of it, public opinion polls repeatedly show that 70% of the public are skeptical of advertising claims. I guess that means about 30% believe anything. MCM: Wow. I wonder if that lack of skepticism toward advertising correlates in any way with this collective paranoia. That would be interesting to know. STAY FREE: Well, during the Gulf War, a market research firm conducted a study that found that the more hawkish people were, the more likely they were to be rampant consumers. Warmongers, in other words, consumed more than peaceniks. Why do you think these two reactions might be correlated? MCM: One could argue that this mild, collective paranoia often finds expression in promiscuous consumption.
Eli Sagan talks about the "paranoidia of greed" as well as the "paranoidia of domination." Both arise out of suspicion of the enemy. You either try to take over all his territory forcibly, or you try to buy everything up and wall yourself within the fortress of your property. STAY FREE: Those two reactions also practically dominate American culture. When people from other countries think of America, they think of us being materialistic and violent. We buy stuff and kill people. Do you think there's any positive form of paranoia? Any advantage to it? MCM: No, I don't, because paranoids have a fatal tendency to look for the enemy in the wrong place. James Angleton of the CIA was so very destructive because he was paranoid. I mean, he should have been in a hospital -- and I'm not being facetious. Just like James Forrestal, our first defense secretary. These people were unable to protect themselves, much less serve their country. I think paranoia is only useful if you're in combat and need to be constantly ready to kill. Whether it's left-wing or right-wing paranoia, the drive is ultimately suicidal. STAY FREE: Our government is weak compared to the corporations that run our country. What role do you see for corporations in the anti-terrorist effort? MCM: Well, corporations do largely run the country, and yet we can't trust them with our security. The private sector wants to cut costs, so you don't trust them with your life. Our welfare is not uppermost in their minds; our money is. So what role can the corporations play? STAY FREE: They can make the puffy suits! MCM: The puffy suits and whatever else the Pentagon claims to need. Those players have a vested interest in eternal war. STAY FREE: Did you read that article about Wal-Mart? After September 11, sales shot up for televisions, guns, and canned goods. MCM: Paranoia can be very good for business. STAY FREE: Have you ever watched one of those television news shows that interpret current events in terms of Christian eschatology? They analyze everyday events as signs of the Second Coming. MCM: No. I bet they're really excited now, though. I wonder what our president thinks of that big Happy Ending, since he's a born-again. You know, Reagan thought it was the end times. STAY FREE: But those are minority beliefs, even among born-again Christians. MCM: It depends on what you mean by "minority." Why are books by Tim LaHaye selling millions? He's a far-right fundamentalist, co-author of a series of novels all about the end times -- the Rapture and so on. And Pat Robertson's best-seller, The New World Order, sounds the same apocalyptic note. STAY FREE: He's crazy. He can't really believe all that stuff. MCM: No, he's crazy and therefore he can believe that stuff. His nurse told him years ago that he was showing symptoms of paranoid schizophrenia. STAY FREE: I recently read a chapter from Empire of Conspiracy -- an intelligent book about conspiracy theories. But it struck me that the author considered Vance Packard, who wrote The Hidden Persuaders, a conspiracy theorist. Packard's book was straightforward journalism. He interviewed advertising psychologists and simply reported their claims. There was very little that was speculative about it. MCM: The author should have written about Subliminal Seduction and the other books by Wilson Bryan Key. STAY FREE: Exactly! That nonsense about subliminal advertising was a perfect example of paranoid conspiracy. Yet he picked on Vance Packard, who conducted his research as any good journalist would.
MCM: Again, we must distinguish between idle, lunatic conspiracy theorizing and well-informed historical discussion. There have been quite a few conspiracies in U.S. history -- and if you don't know that, you're either ignorant or in denial. Since 1947, for example, we have conspiratorially fomented counter-revolutions and repression the world over. That's not conspiracy theory. That's fact -- which is precisely why it is met with the charge of speculation. How better to discredit someone than to say she's chasing phantoms -- or that she has an axe to grind? When James Loewen's book Lies Across America was reviewed in The New York Times, for example, the reviewer said it revealed an ideological bias because it mentions the bombing of civilians in Vietnam. Loewen wrote back a killer letter to the editor pointing out that he had learned about those bombings from The New York Times. Simply mention such inconvenient facts and you're dismissed as a wild-eyed leftist. When someone tells me I'm conspiracy-mongering I usually reply, "It isn't a conspiracy, it's just business as usual." STAY FREE: That's like what Noam Chomsky says about his work: "This is not conspiracy theory, it is institutional analysis." Institutions do what is necessary to assure the survival of the institution. It's built into the process. MCM: That's true. There's a problem with Chomsky's position, though -- and I say this with all due respect because I really love Chomsky. When talking about U.S. press coverage, Chomsky will say that reporters have internalized the bias of the system. He says this, but the claim is belied by the moralistic tone of Chomsky's critique -- he charges journalists with telling "lies" and lying "knowingly." There is an important contradiction here. Either journalists believe they're reporting truthfully, which is what Chomsky suggests when he talks about internalizing institutional bias. Or they're lying -- and that, I think, is what Chomsky actually believes, because his prose is most energetic when he's calling people liars. One of the purposes of my next book, Mad Scientists, will be to suggest that all the best-known and most edifying works on propaganda are slightly flawed by their assumption that the propagandist is a wholly rational, detached, and calculating player. Most critics -- not just Chomsky, but Jacques Ellul and Hannah Arendt, among others -- tend to project their own rationality onto the propagandist. But you can't study the Nazis or the Bolsheviks or the Republicans without noticing the crucial strain of mad sincerity that runs throughout their work, even at its most cynical. STAY FREE: You have written that even worse than the possibility that a conspiracy exists may be the possibility that no conspiracy is needed. What do you mean by that? MCM: The fantasy of one big, bad cabal out there is terrifying but also comforting. Not only does it help make sense of a bewildering reality, but it also suggests a fairly neat solution. If we could just find all the members of the network and kill them, everything will be okay. It's more frightening to me that there are no knowing authors. No one is at the top handling the controls. Rather, the system is on auto-pilot, with cadres just going about their business, vaguely assuming that they're doing good and telling truths -- when in fact they are carrying out what could objectively be considered evil. What do you do, then? Who is there to kill? How do you expose the perpetrators? Whom do you bring before the bar of justice -- and who believes in "justice"?
And yet I do think that a lot of participants in this enterprise know they're doing wrong. One reason people who work for the tobacco companies make so much money, for example, is to still the voice of conscience, to make them feel like they're doing something valuable. But the voice is very deeply buried. Ultimately, though, it is the machine itself that's in command, acting through those workers. They let themselves become the media's own media -- the instruments whereby the system does its thing. I finally learned this when I studied the Gulf War, or rather, the TV spectacle that we all watched in early 1991. There was a moment on the war's first night when Ron Dellums was just about to speak against the war. He was on the Capitol steps, ready to be interviewed on ABC -- and then he disappeared. They cut to something else. I was certain that someone, somewhere, had ordered them to pull the plug because the congressman was threatening to spoil the party. But it wasn't that at all. We looked into it and found the guy who'd made that decision, which was a split-second thing based on the gut instinct that Dellums' comments would make bad TV. So that was that -- a quick, unconscious act of censorship, effected not by any big conspiracy but by one eager employee. No doubt many of his colleagues would have done the same. And that, I think, is scarier than any interference from on high. From checker at panix.com Sun Dec 4 03:21:38 2005 From: checker at panix.com (Premise Checker) Date: Sat, 3 Dec 2005 22:21:38 -0500 (EST) Subject: [Paleopsych] Die Zeit: Washing Weber's dirty laundry Message-ID: Washing Weber's dirty laundry http://print.signandsight.com/features/445.html 2005-11-14 Joachim Radkau's compendious new biography deals in detail with Max Weber's personal - and sexual - life. A critical review by Robert Leicht. Why are we interested in the lives of people whose works lie before us like an open book? If we didn't know the first thing about Johann Sebastian Bach, for example, would his music sound any different? The case of Martin Luther is different: would we be able to comprehend the full impact of his reformist breakthrough on the idea of justification according to Saint Paul if we knew nothing of the biographical, and above all autobiographical, accounts of how he tortured himself with his awareness of his sin, and with his (as his paternal friend [1]Johann von Staupitz shouted out to him) "superficial sins"? Which of these two perspectives applies to the relationship between the life and work of Max Weber, the German [2]founder of sociology, whose opus doesn't even exist in a coherent edition? (The eminent complete edition of his works is still awaiting completion.) "During his lifetime", as [3]political scientist Wilhelm Hennis observed back in 1982, "Weber only published two 'real books', the ones indispensable to his academic career: a dissertation and a habilitation. All other works consist of enquiry-type reports and rapidly thrown together essays, which were only published in book form after his death." How does the incompleteness of his work relate to his widespread influence as a thinker? Is it even possible to explain the fragmentary nature of his work with reference to fragments from his life - in other words, to the "suffering" which caused him years of writer's block and forced him to give up teaching for most of his life?
At the time Hennis wrote forebodingly: "We are going to have to postpone all wishes for a fitting biography, one that replaces [4]Marianne Weber's, until the vast treasure of letters has been published in its entirety... In any event, only the letters can provide a deeper and more accurate understanding of Weber's life." This must have been understood as an indication from those in the know that the letters contained biographical details that could not yet be made public. Certainly, sketchy facts about the complicated love triangle between Else Jaffé, Max Weber and his brother Alfred were in circulation, but less was known about Weber's love affair with the pianist Mina Tobler. These affairs certainly provide material for speculation and chatter, but also for serious interrogation. Paradoxically, Wilhelm Hennis both argued for and warned against a new biography: "The 'derivation' of Weber's work from his psyche has turned out to be as questionable as the effort to separate his life from his work. He was a genius, a man sensitive to the world in which we live. Both his genius and his sensitivity were invested in a body of work that attempted to be social science." Thirty-three years after Hennis wrote these words, Joachim Radkau has published a monumental [5]biography of Max Weber. As Hennis predicted, the book depends heavily on the letters. And it owes much to the fact that Radkau has a thorough knowledge of many areas significant for understanding Max Weber's time and states of mind, for example his study on [6]"Das Zeitalter der Nervosität" (The Age of Nervousness). Seldom has a biography dealt with sources in such a detailed way. Seldom has a work given such a full picture of the protagonist's intellectual context and social milieu (for example his description of the university environment, especially in Heidelberg). Over and above Weber's biography, the volume provides a rich overview of an entire epoch. Yet it remains an open question whether this monumental, overwhelming, ultimately tiring study not only extends but also deepens our knowledge of Max Weber. To put it bluntly: we may learn more about Max Weber's person, but only a limited amount about his work and influence. As far as Weber's work is concerned, Radkau's biography is the polar opposite of Wilhelm Hennis' interests. In Hennis' view, Weber's entire work must be approached from an Archimedean, or rather anthropological, perspective: "What is man becoming in 'mental', 'qualitative' terms?" For Hennis, Weber's entire work is concerned with this central question, from the inaugural Freiburg address in 1895 to its unfinished end. Radkau, on the other hand, separates Weber's life and work into two clearly distinct phases, each of which reveals an entirely different personality with a correspondingly different body of work. Certainly, Radkau defends himself against the "deadly attack of 'biographical reductionism'", as if Hennis' warning were still ringing in his ears. But one can say without exaggeration that in this two-phase biography, Radkau connects not only Weber's scholarly creativity, but also the direction of his thinking, very closely to the emotional and erotic (psycho-physical, in fact psycho-sexual) sensitivity of his hero. There might be a certain plausibility in saying: how you feel is how you think - or write. But how many artworks have been wrested from an artist's naked desperation that fail to shed light on the artist's life?
The very - at first sight oppressive - burden of evidence amassed by Radkau to establish a connection between emotion and creation gives one pause for thought, for reasons of both fact and method. To sum up Radkau roughly: Weber's first phase, leading up to his psycho-physical breakdown in 1898/99, which it took him years to recover from (at his own request he was finally relieved of teaching duties in 1903), is obsessively determined by his sexually unfulfilled, allegedly unconsummated marriage with Marianne Weber, by his impotence, and by his masochistic tendencies. Attendant to these are Weber's continual pollutions, or nocturnal ejaculations, which he saw as extremely detrimental to his creative powers. It's bad enough that Marianne Weber wrote innumerable letters on the subject to Weber's mother behind his back, thus providing the relevant source material for this biography. Reading the work, one is led to regard the Indian [7]custom of widow-burning with a certain, of course entirely politically incorrect, indulgence. But it is even worse that Radkau goes into such painful details. The word "pollution" or its German equivalent appears 29 times in five pages. Fine, the Webers were evidently deeply troubled by what is for us an incomprehensible pseudo-problem. But must we really have our noses dragged through this evidence? So Max Weber appears as a particularly hard case of - as they said in those days - neurasthenia. And this first period of what one might term pathologically-inflicted sexual asceticism corresponds with that part of Weber's work which deals with the inner asceticism of the Protestant ethic and the spirit of capitalism, with its strictly regimented lifestyle. In the autumn of 1909, Max Weber falls in love with Else Jaffé. But two months later they separate again because Else has started up an affair with Weber's brother Alfred. Then, in the summer of 1912, Max starts a love affair with the pianist Mina Tobler. Radkau sums up: "The relapses into his suffering now come to an end." Seven years later, Max falls in love with Else Jaffé once again, in what the letters suggest was a deeply servile love. Hardly a year later he dies. However, this last decade of his short life is marked not only by an immense literary output, but also by a change in direction. Max Weber, now erotically uninhibited, extra-marital and sexually fulfilled, busies himself with the religions of redemption and charisma. True, Radkau notes: "The new era is not, as far as we know, initiated by his love experience, but by an intellectual mood swing and a new feeling of physical well-being." Wouldn't it have been a good idea to ask whether Weber's neurasthenic suffering was not simply the cause, but also the consequence, of his lack of productivity? And one could also ask whether his newfound productivity was not caused by his newfound sexual potency. Perhaps their interdependence was ultimately even more complex than that. The irritating, even maddening thing about Radkau's indiscreet inroads into Weber's private sphere is the countless number of times that something is apparent, that the supposition is justified, that one is entitled to assume... Assumption follows assumption. Some may be plausible, some entirely misleading. Wouldn't biographers do better to stick to what can be conclusively supported, rather than go out on conjectural limbs - or even repudiate their sources? Radkau points to evidence that Max Weber felt a sexual thrill when spanked by the family maid as a child.
On two different occasions in the book, he then feels entitled to correct Weber on this point. In fact, it must have been Weber's mother (with the long-term consequences one might expect), because in such an upper-middle-class household the maid would never dare punish the young master in such a way. This is nonsense of course, as the present writer - who himself has no lower-middle-class background, and who as a boy was occasionally chastised by both maid and mother, comparably the milder of the two - can attest: without thrills, without long-term negative consequences and, it should be said, without it triggering off or repressing a major body of scholarly work. Some of the more intimate details of Weber's world could even be instructive for understanding the conditions of his intellectual production, if not its consequences. But their obsessive dissemination here - although it is precisely this information that will cause a sensation - is intrusive, embarrassing and questionable in terms of whether it aids an understanding of the work. Whereas the unity of Weber's life and work is essential to Hennis' central approach from the outset, Radkau tries to constitute a kind of unity by letting the "true" Max Weber, "his" Weber, appear only in the second phase of his biography. It is here, in the last decade of Weber's life, after the productivity crisis and his erotic awakening with Else Jaffé, Mina Tobler and once again Else Jaffé, that Weber finds himself. But once you've adopted such a sceptical reading of Radkau's biography - which as I said is as imposing, entertaining and ingenious in its goals as in its method - a second reading will not fail to reveal many points where one is unsure whether to side with Weber or his biographer. One example is the word charisma. This cardinal term is treated in a twofold fashion. On the one hand it stands for the redemption of man - from his neurasthenic suffering too - through God's grace. On the other, it stands for the pre-conditions for a certain type of leadership. But the one has nothing to do with the other. The liberation from guilt vis-à-vis God has nothing to do with enforcing one's power on non-liberated subjects. Appealing to Weber's preference for the prophets of the Old Testament, Radkau parades these figures as prototypes of charismatic leadership. These people, however, didn't feel they had been freed by God; rather, they felt constrained by him against their will. In addition, they didn't have a chance to demand the people's allegiance (the essence of leadership according to Weber). They were severe critics of leadership, but unsuccessful ones, and in later epochs they were self-critically presented as such in the writings of the people of Israel. Radkau is right to put so much emphasis on theology. But when he cites only [8]Karl Barth's critique of the "liberal theology" before and during World War I from the "Lectures on 19th Century Theology", he misses Barth's real theological-political polemic, which is expressed far more clearly in his many pamphlets and "open letters". But Radkau's deficiencies are most apparent in his treatment of Weber's music sociology. No one who plays an instrument with a brass-type mouthpiece would ever think - with Helmholtz, and following him Weber and Radkau - of seeing a complete harmony in the natural series of tones, which are primarily impressive for their purely mathematical proportions. And one would be even less inclined to draw wide-reaching consequences from them. At the seventh partial at the very latest, such an approach becomes very hard to justify.
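The arithmetic behind that objection is easy to check. The following sketch is mine, not Radkau's or Weber's; the fundamental frequency is an arbitrary, assumed value. It compares the natural series of tones with twelve-tone equal temperament, measuring how far each partial falls from the nearest tempered pitch:

```python
import math

F0 = 100.0  # fundamental frequency in Hz; an arbitrary, assumed value

def cents(ratio):
    """Interval size in cents (1200 cents per octave, 100 per tempered semitone)."""
    return 1200 * math.log2(ratio)

for n in range(1, 9):
    interval = cents(n)  # the nth partial sounds at n times F0, a frequency ratio of n:1
    nearest = round(interval / 100) * 100  # nearest 12-tone equal-tempered pitch, in cents
    print(f"partial {n}: {n * F0:6.1f} Hz, {interval:7.2f} cents above the fundamental, "
          f"off equal temperament by {interval - nearest:+6.2f} cents")
```

The first six partials land within about 14 cents of tempered pitches; the seventh misses its nearest neighbour by roughly 31 cents, audibly out of tune by Western standards. That, presumably, is the point at which reading "complete harmony" out of the natural series stops being justifiable.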
All these irritating factors are exacerbated by Radkau's innumerable side-swipes at traditional Max Weber research, and by his critique of everything Weber's traditional "admirers" praise. And conversely, it is clear Radkau believes he is the only one to really do justice to Weber's music sociology, for example. His explanations, by contrast, are often based on sentences which, in his view, the traditional Weber researchers have either not read carefully enough or not understood correctly. Certainly, a measured lack of respect not only makes for amusing reading, it is also entirely justified. Weber's Freiburg address, for example, and many of his political judgements can only be seen as embarrassing and borderline. But a biographer who can't stop poking fun at Weber scholarship in a work of a good 1,000 pages neither does justice to how Weber's work has been received, nor to its enduring legacy. Here the book would have needed a good edit, one that removes the superfluous and supplies all that is lacking. At the end of the book, Radkau justifies his washing Weber's dirty laundry in public by saying that in the meantime even those mentioned only briefly are now dead. Objection, your honour! Even long after death a taboo remains which protects people from having their innermost secrets revealed. Especially when the revelations are aimed not at satisfying our thirst for knowledge, but our idle curiosity. Joachim Radkau: "[9]Max Weber. Die Leidenschaft des Denkens" is published by C. Hanser Verlag, Munich, 2005. 1,008 pages, 45.00 euros. * The article [10]originally appeared in German in the October 2005 literary supplement of Die Zeit. Robert Leicht served as editor in chief of Die Zeit from 1992 to 1997, and is now political correspondent for the paper. Translation: [11]jab. References 1. http://www.newadvent.org/cathen/14283a.htm 2. http://www.faculty.rsu.edu/%7Efelwell/Theorists/Weber/Whome.htm 3. http://en.wikipedia.org/wiki/Wilhelm_Hennis 4. http://www.webster.edu/%7Ewoolflm/weber.html 5. http://www.hanser.de/buch.asp?isbn=3-446-20675-2&area=Literatur 6. http://www.hanser.de/buch.asp?isbn=3-446-19310-3&area=Literatur 7. http://www.kamat.com/kalranga/hindu/sati.htm 8. http://www.faithnet.org.uk/Theology/barth.htm 9. http://www.hanser.de/buch.asp?isbn=3-446-20675-2&area=Literatur 10. http://www.zeit.de/2005/42/P-Weber 11. http://www.signandsight.com/service/35.html From checker at panix.com Sun Dec 4 03:21:49 2005 From: checker at panix.com (Premise Checker) Date: Sat, 3 Dec 2005 22:21:49 -0500 (EST) Subject: [Paleopsych] TCS: Internet Killed the Alien Star Message-ID: Internet Killed the Alien Star http://www.techcentralstation.com/110905A.html By Douglas Kern Published 11/09/2005 If you're looking for one of those famous, big-eyed alien abductors, try looking on the sides of milk cartons. The UFO cultural moment in America is long since over, having gone out with the Clintons and grunge rock in the 90s. Ironically, the force that killed the UFO fad is the same force that catapulted it to super-stardom: the Internet. And therein hangs a tale about how the Internet can conceal and reveal the truth. It's hard to remember just how large UFOs loomed in the public mind a mere ten years ago.
The X-Files was one of the hottest shows on television; [26]Harvard professors solemnly intoned that the alien abduction phenomenon was a real, objective fact; and Congressmen made serious inquiries about a downed alien spacecraft in [27]Roswell, New Mexico. Still not enough? You could see the "Roswell" movie on Showtime; you could play "Area 51" at the arcade; you could gawk at stunning pictures of [28]crop circles in any number of magazines; and you could watch any number of lurid UFO specials on Fox or the Discovery Channel. And USENET! Egad! In the days when USENET was something other than a spam swap, UFO geeks hit "send" to exchange myths, sightings, speculations, secret documents, lies, truths, and even occasionally facts about those strange lights in the sky. The modern UFO era began with [29]Kenneth Arnold's 1947 UFO sighting near Mount Rainier, Washington. National interest in the subject waxed and waned in the following years -- sometimes spiking dramatically, as during the Washington, D.C. "flap" of 1952 or the Michigan sightings in 1966 (which captured the attention of [30]Gerald Ford). Steven Spielberg popularized the modern mythology of UFOs in 1977's "[31]Close Encounters of the Third Kind." And with the publication of [32]Whitley Strieber's "Communion" in 1987, alien abduction moved from a freakish, nutty concern to a mainstream phenomenon. Eccentrics had claimed to be in [33]mental contact with aliens since the fifties, and alien abductions had been a part of the American UFO scene since the [34]Betty and Barney Hill case of 1961, but Strieber's runaway bestseller fused the traditional alien abduction tale to a chilling narrative and a modern spiritual sensibility -- thus achieving huge credibility for our friends with the wraparound peepers. Yet in recent years, interest in the UFO phenomenon has withered. Oh, the websites are still up, the odd UFO picture is still taken, and the usual hardcore UFO advocates make the same tired arguments about the same tired cases, but the thrill is gone. What happened? Why did the saucers crash? The Internet showed this particular emperor to be lacking in clothes. If UFOs and alien visitations were genuine, tangible, objective realities, the Internet would be an unstoppable force for detecting them. How long could the vast government conspiracy last, when intrepid UFO investigators could post their prized pictures on the Internet seconds after taking them? How could the Men in Black shut down every website devoted to scans of secret government UFO documents? How could marauding alien kidnappers remain hidden in a nation with millions of webcams? Just as our technology for finding and understanding UFOs improved dramatically, the manifestations of UFOs dwindled away. Despite forty-plus years of alleged alien abductions, not one scrap of physical evidence supports the claim that mysterious visitors are conducting unholy experiments on hapless victims. The technology for sophisticated photograph analysis can be found in every PC in America, and yet, oddly, recent UFO pictures are rare. Cell phones and instant messaging could summon throngs of people to witness a paranormal event, and yet such paranormal events don't seem to happen very often these days. For an allegedly real phenomenon, UFOs sure do a good job of acting like the imaginary friend of the true believers. How strange, that they should disappear just as we develop the ability to see them clearly. Or perhaps it isn't so strange. 
The Internet taught the public many tricks of the UFO trade. For years, hucksters and mental cases played upon the credulity of UFO investigators. Bad science, shabby investigation, and dubious tales from unlikely witnesses characterized far too many UFO cases. But the rise of the Internet taught the world to be more skeptical of unverified information -- and careful skepticism is the bane of the UFO phenomenon. It took UFO experts over a decade to determine that the [35]"Majestic-12" documents of the eighties were a hoax, rather than actual government documents proving the reality of UFOs. Contrast that decade to the mere days in which the blogosphere disproved the Mary Mapes Memogate documents. Similarly, in the nineties, UFO enthusiasts were stunned when they learned that [36]a leading investigator of the Roswell incident had fabricated much of his research, as well as his credentials. Today, a Google search and a few e-mails would expose such shenanigans in minutes. Thus, the rise of the Internet in the late nineties corresponded with the fall of many famous UFO cases. Roswell? A crashed, top-secret weather balloon, misrepresented by dreamers and con men. [37]The Mantell Incident? A pilot misidentified a balloon, with tragic consequences. Majestic-12? Phony documents with a demonstrably false signature. [38]The Alien Autopsy movie? Please. As access to critical evidence and verifiable facts increased, the validity of prominent UFO cases melted away. Far-fetched theories and faulty evidence collapsed under the weight of their provable absurdity. What the Internet gave, the Internet took away. The Internet processes all truth and falsehood in just this fashion. Wild rumors and dubious pieces of evidence are quick to circulate, but quickly debunked. The Internet gives liars and rumor mongers a colossal space in which to bamboozle dolts of every stripe -- but it also provides a forum for wise men from all across the world to speak the truth. Over the long run, the truth tends to win. This fact is lost on critics of the blogosphere, who can only see the exaggerated claims and gossip. These critics often fail to notice that, on the 'net, the truth follows closely behind the lies. A great many of us accept Internet rumors and hoaxes in exchange for fast access to the truth. But is there any validity to the UFO phenomenon? Perhaps, but so what? The need for weird is hard-coded into the human condition. In every society, a few unlikely souls appear to make contact with an invisible world, communing with goblins or ghosts or aliens or gods or monsters. And in every society, some fool always tries to gather scales from the dragon tracks, or droppings from the goblins, or pictures of the aliens. The dream world is always too elusive to be captured, and yet too tantalizingly close to be dismissed. And so the ancient game continues, with weirdness luring us to introspection and subjectivity, even as reality beckons us to exploration and objectivity. The appeal of chimerical mysteries and esoteric knowledge tends to diminish when the need for moral clarity and direction grows acute. And our need for such guidance is acute indeed. We're at war now. We don't have the time for games. The weird ye shall have with you always. But right now, the introspection of weirdness isn't needed. I'm quite happy to leave the aliens in the nineties, and on the milk cartons. References 26. http://en.wikipedia.org/wiki/John_Edward_Mack 27. http://en.wikipedia.org/wiki/Roswell_incident 28. 
http://en.wikipedia.org/wiki/Crop_circles 29. http://en.wikipedia.org/wiki/Kenneth_Arnold 30. http://www.ufoevidence.org/documents/doc883.htm 31. http://www.imdb.com/title/tt0075860 32. http://en.wikipedia.org/wiki/Whitley_Strieber 33. http://en.wikipedia.org/wiki/Contactees 34. http://en.wikipedia.org/wiki/Betty_Hill 35. http://en.wikipedia.org/wiki/Majestic_12 36. http://www.roswellfiles.com/storytellers/RandleSchmitt.htm 37. http://en.wikipedia.org/wiki/Mantell_Incident 38. http://en.wikipedia.org/wiki/Alien_autopsy From checker at panix.com Sun Dec 4 03:21:56 2005 From: checker at panix.com (Premise Checker) Date: Sat, 3 Dec 2005 22:21:56 -0500 (EST) Subject: [Paleopsych] FRB of Richmond: Interview with James M. Buchanan Message-ID: FRB of Richmond: Interview with James M. Buchanan Interview - Federal Reserve Bank of Richmond http://www.richmondfed.org/publications/economic_research/region_focus/spring_2004/interview.cfm [Biography appended.] Region Focus Spring 2004 Interview James Buchanan --- Economists have long treated people in the marketplace as rational actors pursuing their own self-interest. But, until the mid-20th century, it was common to view people in government in a very different light. They were perceived as selfless public servants who acted on behalf of the general interest. Such a distinction, argued James Buchanan, was unnecessary and incorrect. People in the public sector are self-interested just like everybody else. Using this basic assumption, Buchanan and others were able to apply the tools of economics to politics. This line of inquiry soon became known as "public choice" and spread rapidly throughout the United States, Europe, and Asia. The majority of public choice theorists are trained as economists, but more and more come from the ranks of political science. Most of Buchanan's academic career has been spent in Virginia: first at the University of Virginia in Charlottesville, then at the Virginia Polytechnic Institute in Blacksburg, and later at George Mason University in Fairfax. As a result, he and his colleagues are often referred to as members of the "Virginia School." In the early 1960s, Buchanan was one of the founders of the Public Choice Society (PCS). The PCS holds annual meetings where papers are presented and discussed. It is also loosely affiliated with the academic journal Public Choice, which was long edited by Gordon Tullock, one of Buchanan's most frequent collaborators. Buchanan was awarded the Nobel Prize in Economics in 1986. Although he is now in his mid-80s, he still pursues an active research agenda and continues to lecture regularly. Aaron Steelman interviewed Buchanan at George Mason University on February 2, 2004. RF: Public choice is often described as "politics without romance." Could you please describe what this phrase means? Buchanan: I actually used that as the title of a lecture I gave at the Institute for Advanced Studies in Vienna in 1978. I think that if you had to boil public choice down to three words, that's a pretty good description, but on the other hand it's not complete either. The phrase captures the idea that public choice does not look at politics through rose-colored glasses -- it is skeptical that the actions of people in politics are necessarily focused on promoting the public interest. Instead, it takes a more hard-nosed, realistic view of government.
But what it leaves out is that we must have a legitimizing argument that politics is worthwhile -- that politics is an exchange in the sense that we give up something but we also get back something. RF: Public choice is now a recognized subdiscipline within economics. But when you first started doing work in public choice, how was that research greeted by the profession? Buchanan: It was certainly outside the mainstream. I think many of my colleagues at the University of Virginia didn't particularly like using economics to analyze politics. But I have to say that when Gordon Tullock and I published The Calculus of Consent in 1962, the book received quite warm reviews by both economists and political scientists. And, between the two groups, I think the book's impact was greater among political scientists in the following respect: They had further to go. Economists were familiar with the tools we were using and the basic assumptions about rationality that we were making, but to many political scientists, these ideas were rather novel. Also, I think you can't leave personalities out of this either. Bill Riker was very active in introducing public choice and positive political economy to other political scientists and to his students at the University of Rochester. The fact that he came onboard very early was extremely important. RF: People working in the public choice tradition are often referred to as members of the "Virginia School." Could you please explain how and when that term came into being? Buchanan: Mancur Olson came up with that term. He was the one who first characterized us as the Virginia School -- I don't know exactly when but it was probably sometime in the mid-1970s, after we had already moved from Charlottesville to Blacksburg. It was fine by us. So we went with it, as did other people. But we didn't coin the term ourselves. RF: Richard Wagner, who was one of your students at the University of Virginia and has been your colleague at both the Virginia Polytechnic Institute (VPI) and George Mason University, has written that VPI was the most fertile place for public choice scholarship. Do you agree? Buchanan: I think you have to look at this on different dimensions. The public choice program originated at the University of Virginia from 1956 to 1968. Warren Nutter and I set up the Thomas Jefferson Center for Studies in Political Economy. The research program at the Center was broader in scope -- it wasn't confined to public choice per se. That was a very productive and exciting time. We had a great group of people there: Ronald Coase, Leland Yeager, Tullock, and Nutter were all on the faculty. And, without question, we had the best graduate students I have ever worked with -- really top-notch kids. We were never that productive in terms of producing good graduate students at VPI. But the public choice program became more developed there. We enjoyed tremendous support from the university administration, which in some ways had been lacking at Virginia. And Tullock, who had left Virginia a few years before I did, came to VPI. He and I started collaborating on a lot of projects, and we set up the Center for the Study of Public Choice along with Charlie Goetz. One of the things that I think was really important about VPI was the unique atmosphere and geography: We were all located close to each other and had constant interaction. 
Plus, at VPI there was a young man named Winston Bush whose enthusiasm and intellect really inspired a lot of interesting projects, such as our work on the political economy of anarchy. Winston was a great mathematical economist, who unfortunately died quite young in a car accident, but for a few years was a real live wire who really kept things going. We also had a great visiting fellow program. It wasn't unusual for us to have eight or nine visitors at one time. So, in the sense of sheer output, I think Wagner is right: VPI was the most productive place. RF: At last year's meetings of the Public Choice Society in Nashville, I was struck by the large percentage of participants from continental Europe. Did public choice take off internationally during the period you were at VPI? Buchanan: Yes. Many of the visiting fellows who came to Blacksburg were from Europe or Asia. It was also around this time that they set up their own organizations: the European Public Choice Society and the Japanese Public Choice Society. In some ways, the Europeans were more eager to work on constitutional political economy issues than were the Americans. In fact, I think that if the Nobel Prize were decided by American economists, I never would have been awarded it. My work has been much more warmly received in Europe than in the United States. RF: Could you describe how Frank Knight and Knut Wicksell have affected your thinking and career? Buchanan: They were certainly the two most important influences on my work. Knight's influence was more as a role model than as someone whose work I tried to build on, although he certainly made very important contributions of his own. Knight and I had very similar backgrounds: He was a farm boy from central Illinois who spent some time in school in Tennessee and who ultimately rejected the religious milieu in which he had been raised. I really liked his attitude toward the world and his willingness to question anything and anybody. He had a real passion for ideas. Wicksell, on the other hand, was more of an accidental discovery. I was going through the stacks of the old University of Chicago library after I had finished my dissertation and I ran across his dissertation, which had never been translated from the very difficult German. In that book, he was saying things that I felt inside me but I never dared to say. He really reinforced a lot of things that were sort of inchoate in my thinking. The central idea I got from Wicksell is that we can't improve politics by simply expecting politicians to do good. There are no interests other than those of individuals, and politicians will pursue their own interests just like anyone else, by trying to get re-elected, advance their careers, and so on. This means that economists ought to stop acting as if they were advising benevolent despots. If you want to improve government, you must try to improve the rules of the game rather than the individual players. RF: Looking back over the past 40 years, what do you think are some of the most important contributions that public choice theorists have made? Buchanan: I think that the most important contribution, by far, is to simply change the way that people look at politics. I often have been asked if public choice had a causal influence in the decline of confidence in politics and politicians compared to, say, 40 years ago. My answer is: yes and no. 
Once governments began, in the 1960s and 1970s, to overstep their bounds and take on projects that ultimately proved to be great failures -- and this is true not only in the socialist states but also in the democratic states of the West -- public choice came along and gave people a systematic way to analyze and explain these failures. So public choice wasn't the cause of distrust in government but it did help us understand the deficiencies of the political process. It changed the way that we look at collective action. RF: "Rent seeking" is one of the more common terms one encounters in articles written by public choice theorists. Could you give a basic description of what that term means? Buchanan: "Rent seeking" is a very basic concept: If there is value out there, someone is going to invest time and effort trying to attain it. The same is true with "profit seeking" -- if there is profit to be had, people will go after it. But the term rent seeking applies to a special kind of value -- value that is created artificially through the political process. Gordon Tullock has a great ability to take personal experiences and translate them into ideas. He had spent some time in China and while he was there he noticed that the Chinese imperial bureaucracy had these very severe standards that people had to pass in order to be admitted to the civil service. Candidates would spend a tremendous number of hours studying and learning this stuff. But most of the effort was completely wasted, because only a few could obtain a government position. This was a prime example that Gordon used. Likewise, let's say that the government can issue a monopoly on the production of playing cards. Then a lot of people are going to spend time courting the government to get that privilege. It may be rational but it's socially wasteful. The point was so obvious, but also so important, that once it was made it became a standard term used by economists and especially by public choice economists. RF: Public choice scholars, of course, are quite concerned with procedural issues, and have done important work explaining how various constitutional rules affect political and economic outcomes. Yet it seems that public choice theorists have been less successful explaining the conditions necessary to sustain those rules. Consider the United States, for instance. In the area of economic regulation, Congress' authority is virtually plenary. What accounts for the breakdown of the constitutional order in the United States? Buchanan: I think that we have had a breakdown in the traditional role of the judiciary and how the judiciary views itself as part of the larger political structure. We began to get that with the post-New Deal courts, which let the legislative branch do pretty much whatever it saw fit. Why did that happen? I'm not sure. Part of it is ideology. Law schools started to teach students that the Constitution was malleable -- that it said whatever judges claimed it said. The judiciary then became much more activist, as judges began to use their own political views as a basis for making decisions. This process has turned us much more toward a simple majoritarian-type political order. So I think that's part of the reason for the breakdown. But as for a more generalizable explanation, I don't have one. RF: Many commentators frequently decry voter turnout rates of, say, 50 percent as "too low." But, actually, it's surprising that this many people go to the polls because the chance of being instrumental is virtually zero. 
Does public choice have a good explanation for why people vote? Buchanan: That is one of the central puzzles we have faced since Anthony Downs and Gordon Tullock raised the question in the 1950s. From a purely rational standpoint, people don't have much of an incentive to vote but, as you said, about half of them do. Why? I think this gets us into social psychology. People may vote simply as a means of expression rather than as a way of influencing the outcome of an election. They also may feel some sort of duty is involved. But, given the framework within which economists traditionally look at this sort of question, it's hard to come up with a satisfactory answer. RF: How would public choice explain political outliers -- people who get elected to Congress even though they run on quite radical platforms, either from the right or the left? According to median voter theory, it seems, these people shouldn't be chosen by the electorate. Buchanan: This is another good question to which we don't have an adequate answer. It may just be that these people act very differently in Washington than they do in their own districts. The average voter is not going to pay much attention to what politicians say in front of certain activist groups, but they may pay attention to what these politicians have to say when they come home to campaign. RF: Many people who have done important academic work in the public choice tradition have subsequently gone on to hold high-level appointed offices in the federal government. Is there something ironic about this, in your view? Or is this training useful? Buchanan: I'm not sure that it helps much. If you're on the inside, maybe you don't want to be trained in public choice. For instance, if you are going into the bureaucracy, perhaps you wouldn't want to have read the public choice literature on bureaucracy. I certainly wouldn't get excited about more public choice people filling government positions. Absorbing and doing are quite different things in this context. I think that there is little doubt that public choice has been enriched by people who have used government experience to inform their academic work. But I don't know that public choice has done much to influence the way that government officials actually behave. RF: How would you describe the differences between the allocationist-maximization paradigm, within which many neoclassical economists work, and the catallactic-coordination paradigm, within which most of your research has been done? Buchanan: Economics, as it was transformed by Paul Samuelson into a mathematical discipline, required practitioners to have something to maximize subject to certain constraints. This contrasts with the catallactic-coordination paradigm, which starts out with individuals simply trading with each other. You examine this process and build up into a system of how markets emerge and become integrated. It's a very different conceptualization of the whole economic process. I have argued, at least in the last three or four years, that the really big contributions to come will be from game theory. For a long time, I think economists didn't really understand what game theory was all about. The core insight, it seems to me, is that people choose among strategies and out of that emerge outcomes that are not part of anyone's choice set. It is a different way of looking at economics and it gets us to focus on fundamental issues of economic coordination that have been neglected.
This, I think, is the direction that formal economic theory ought to take. RF: A recent article in PS: Political Science and Politics titled "Are Public Choice Scholars Different?" discussed the results of a survey given to members of the Public Choice Society (PCS), American Economic Association (AEA), and American Political Science Association (APSA). The survey asked for opinions on a wide variety of economic issues. The differences between PCS and AEA members were relatively small on most questions, but in a few cases, they were statistically significant. For instance, PCS members found the following proposition substantially more agreeable: "Government does more to protect and create monopoly power than it does to prevent it." Does this, in your mind, confirm the widely held notion that public choice theorists are more suspicious of government action and more friendly toward market solutions than economists generally? Buchanan: Yes, to some degree. But a continuing critique of public choice is that the whole research program is ideologically driven. I think that is completely wrong. It all goes back to the first question you asked about public choice being described as "politics without romance." If you look at politics in a realistic way, no matter your underlying ideological preferences, you are going to come out more negative than you started. There are many public choice people whose normative views are not at all market-oriented. But, as scientists, they reach conclusions that may not particularly support those normative preferences. RF: What do you think of the various "heterodox" schools of economics that are challenging the basic assumptions of neoclassical economics? Buchanan: For more than 20 years, I have predicted that you would see more collaboration between psychologists and economists. That prediction is finally being realized with the widespread emergence of "behavioral economics," as characterized by the work of Dick Thaler, Bob Frank, and others. They pick out particular anomalies and use them to try to chip away at the neoclassical edifice. Many of those anomalies are interesting, but they are just that -- anomalies and thus not very generalizable. I don't think that behavioral economics is a spent force yet, but I don't know how much further they can go with it, because what they have to offer are critiques rather than an alternative program of inquiry. Still, I'm sympathetic to the idea that economists have pushed this homo-economicus model too much. RF: In a series of articles on what he calls "rational irrationality," Bryan Caplan has tried to reorient public choice to focus more on voter-driven political failure and less on the perverse influence of special interests. What do you think of this line of inquiry? Buchanan: I don't know Caplan's work very well. But I think there is something to what he is trying to argue. For instance, I think there is the following bifurcation in the choice process: We may want to do things collectively that we are not willing to sustain privately. It may be true that the welfare state represents what people actually want. They may want the government to take care of everybody and so they vote for candidates who run on such a platform, including the higher tax rates needed to pay for it. At the same time, given those high levels of taxation, they may decide to quit working, like the Swedes, and spend time at their summer home.
So even though they voted for the whole program -- on both the spending and taxation sides -- they are not willing to support it through their private actions. RF: What, in your view, is the proper role of government? Buchanan: Well, I think the state should fund the classic public goods and you could probably do that with government spending at a level of roughly 15 percent of gross domestic product (GDP). But I'm not willing to say that that is all government should do. As long as government grows within a proper set of rules, then I would rather not put limits on its size. I am reluctant to say, for instance, that having public spending at 40 percent of GDP -- which is about what we have now -- is necessarily wrong. RF: Why do so many voters hold views that are at odds with mainstream economic theory? Buchanan: Part of the blame falls on economists. As scientists, we are incredibly attracted to grappling with interesting puzzles that may have little immediate practical application. And, indeed, we are rewarded for doing that through academic promotions and greater prestige within the profession. So that type of work has a lot of private value to economists. Contrast that with making basic economic truths -- such as the benefits of free trade -- accessible to a wider audience. Economists gain very little from doing that -- for instance, it probably won't get you tenure. But there is an enormous public value associated with having an economically literate society. We need more Bastiats who are willing to talk to the public. As it stands, economists are losing the battle. Biography - Federal Reserve Bank of Richmond http://www.richmondfed.org/publications/economic_research/region_focus/spring_2004/interview_biography.cfm Biography James Buchanan Present Position Distinguished Professor Emeritus of Economics, George Mason University, and Distinguished Professor Emeritus of Economics and Philosophy, Virginia Polytechnic Institute and State University Previous Faculty Appointments Virginia Polytechnic Institute and State University (1969-1983); University of California at Los Angeles (1968-1969); University of Virginia (1956-1968); Florida State University (1951-1956); University of Tennessee (1948-1951) Education B.S., Middle Tennessee State College (1940); M.A., University of Tennessee (1941); Ph.D., University of Chicago (1948) Selected Publications Author or co-author of more than 20 books, including The Calculus of Consent: Logical Foundations of Constitutional Democracy (1962); Cost and Choice: An Inquiry in Economic Theory (1969); The Limits of Liberty: Between Anarchy and Leviathan (1975); and Better than Plowing: And Other Personal Essays (1992) Awards and Offices Winner, 1986 Nobel Memorial Prize in Economic Sciences; Fellow, American Academy of Arts and Sciences; Former President of the Southern Economic Association, Western Economic Association, Mont Pelerin Society, and Public Choice Society From checker at panix.com Sun Dec 4 03:22:03 2005 From: checker at panix.com (Premise Checker) Date: Sat, 3 Dec 2005 22:22:03 -0500 (EST) Subject: [Paleopsych] Science Daily: Mildly Depressed People More Perceptive Than Others Message-ID: Mildly Depressed People More Perceptive Than Others http://www.sciencedaily.com/print.php?url=/releases/2005/11/051121164438.htm Source: Queen's University Date: 2005-11-22 _________________________________________________________________ Surprisingly, people with mild depression are actually more tuned
into the feelings of others than those who aren't depressed, a team of Queen's psychologists has discovered. "This was quite unexpected because we tend to think that the opposite is true," says lead researcher Kate Harkness. "For example, people with depression are more likely to have problems in a number of social areas." The researchers were so taken aback by the findings that they decided to replicate the study with another group of participants. The second study produced the same results: People with mild symptoms of depression pay more attention to details of their social environment than those who are not depressed. Their report on what is known as "mental state decoding" - or identifying other people's emotional states from social cues such as eye expressions - is published today in the international journal Cognition and Emotion. Also on the research team from the Queen's Psychology Department are Professors Mark Sabbagh and Jill Jacobson, and students Neeta Chowdrey and Tina Chen. Drs. Roumen Milev and Michela David at Providence Continuing Care Centre, Mental Health Services, collaborated on the study as well. Previous related research by the Queen's investigators has been conducted on people diagnosed with clinical depression. In this case, the clinically depressed participants performed much worse on tests of mental state decoding than people who weren't depressed. To explain the apparent discrepancy between those with mild and clinical depression, the researchers suggest that becoming mildly depressed (dysphoric) can heighten concern about your surroundings. "People with mild levels of depression may initially experience feelings of helplessness, and a desire to regain control of their social world," says Dr. Harkness. "They might be specially motivated to scan their environment in a very detailed way, to find subtle social cues indicating what others are thinking and feeling." The idea that mild depression differs from clinical depression is a controversial one, the psychologist adds. Although it is often viewed as a continuum, she believes that depression may also contain thresholds such as the one identified in this study. "Once you pass the threshold, you're into something very different," she says. Funding for this study comes from a New Opportunities Grant from the Canada Foundation for Innovation. Editor's Note: The original news release can be found [3]here. _________________________________________________________________ This story has been adapted from a news release issued by Queen's University. References 2. http://www.sciencedaily.com/releases/2005/11/051121164438.htm 3. http://qnc.queensu.ca/story_loader.php?id=4381d1aa783bb From checker at panix.com Mon Dec 5 02:45:09 2005 From: checker at panix.com (Premise Checker) Date: Sun, 4 Dec 2005 21:45:09 -0500 (EST) Subject: [Paleopsych] Review of Business: Transforming a University from a Teaching Organization to a Learning Organization Message-ID: Transforming a University from a Teaching Organization to a Learning Organization Review of Business Volume 26, Number 3 [I'm one of the peer reviewers of this publication.] Fall 2005 (Special Issue: Applications of Computer Information Systems and Decision Sciences) Hershey H. Friedman, Brooklyn College of the City University of New York Linda W. Friedman, Baruch College of the City University of New York Simcha Pollack, The Peter J. Tobin College of Business, St.
John's University Abstract Successful 21st-century universities will have to be lean, flexible and nimble. In fact, Peter Drucker claims that 30 years from now the "big universities will be relics" and will not survive. In the corporate world, businesses are becoming learning organizations in order to survive and prosper. This paper explains why it is necessary for universities to become learning organizations and provides ideas on how to make the transformation. Introduction Peter Drucker noted in an interview that: "Thirty years from now the big university campuses will be relics. Universities won't survive. It's as large a change as when we first got the printed book" [18]. This may be an exaggeration, but there is no question that universities that refuse to change may not survive. The rise of for-profit universities (e.g., the University of Phoenix), decreased government support for universities, the rising costs of education, the globalization of education, technological change, the growing number of working adults who need continuing education to avoid obsolescence and distance education are forcing universities to transform themselves. In fact, Andrews et al. [1] urge academia to respond to the "wake-up call" and recognize that inflexibility and the failure to respond quickly and decisively to environmental change can be dangerous. For colleges to change, they not only have to learn to run their organizations in a more business-like fashion, they have to be willing, when necessary, to add and shrink programs quickly. This is not easy when the organizational structure of today's university has more to do with the convenience of establishing accounting budgets than with the demands of intellectual growth and education [12,14]. Edwards [9] notes that "the actual elimination of departments is extremely rare and usually generates a wave of unflattering national news, so the substitution strategy is driven toward less visible, more surreptitious methods." It is becoming quite apparent that being inflexible and resistant to change in an extremely fast-moving environment is a prescription for disaster, whether we are dealing with a business or academic institution. Several visionaries believe that the university of the future will be very different from the university of today: more interdisciplinary programs, and the substantial modification of the current prevalent academic organizational structure. Duderstadt [7] suggests that the university of the future will be divisionless, i.e., there will be many more interdisciplinary programs. There will also be "a far more intimate relationship between basic academic disciplines and the professions." He asks "whether the concept of the disciplinary specialist is relevant to a future in which the most interesting and significant problems will require 'big think' rather than 'small think'" [8]. Kolodny [16:40-41] asserts that the antiquated way of organizing colleges -- by departments -- will have to "evolve into collaborative and flexible units." Students with narrowly defined majors will have great difficulty comprehending a world in which the knowledge required of them is complex, interconnected and, by its very nature, draws from many areas. Edwards [9] maintains that "in so many cases, the most provocative and interesting work is done at the intersections where disciplines meet, or by collaborators blending several seemingly disparate disciplines to attack real problems afresh."
The Learning Organization Clearly, there are great changes ahead for higher education, but changing the culture of an organization is a daunting task. Forward-thinking institutions have to consider what can be done to make their organizations more responsive to change. In the corporate world, many firms are recognizing that the ability of an organization to learn is the key to survival and growth, and "organizational learning" has become the mantra of many companies [3,21]. What is organizational learning? Organizational learning has been defined in many ways: Stata [24] asserts that: "organizational learning occurs through shared insights, knowledge and mental models [and] builds on past knowledge and experience." Senge [21] writes: "learning organizations are not only adaptive, which is to cope, but generative, which is to create." Pedler et al. [20] state: "A learning company is an organization that facilitates the learning of all its members and continually transforms itself." Garvin [11] believes that a learning organization is "an organization skilled at creating, acquiring, and transferring knowledge, and at modifying its behavior to reflect new knowledge and insights." What should we find in a learning organization? The following briefly summarizes what one would expect:
o Awareness of the external environment. Knowing what the competition is doing.
o Belief that individuals can change their environment. A learning culture.
o Shared vision. One that encourages individuals to take risks.
o Learning from past experience and mistakes -- experience is the best teacher.
o Learning from the experiences of others in the organization. Organizational memory in order to know what worked in the past and what did not.
o Willingness to experiment and take chances. Tolerance for failure.
o Double-loop or generative learning. With double-loop, as opposed to single-loop, learning, assumptions are questioned. "Double loop learning asks questions not only about objective facts but also about the reasons and motives behind those facts" [2].
o Concern for people. Respect for employees. Diversity is seen as a plus since it allows for new ideas. Empowerment of employees.
o Infrastructure allowing the free flow of knowledge, ideas and information. Open lines of communication. Sharing of knowledge, not just information. Team learning where colleagues respect and trust each other. An organization where one employee will compensate for another's weaknesses, as in a successful sports team.
o Utilization of shared knowledge. Emphasis on cooperation, not turf.
o Commitment to lifelong learning. Constant learning and growth.
o Ability to adapt to changing conditions. Ability to renew, regenerate and revitalize an organization.
Knowledge sharing is a necessary condition for having a learning organization. To foster the sharing of knowledge, computer software has been developed to make it easy for coworkers to share their expertise. For instance, the AskMe Corporation (http://www.askmecorp.com/) claims that it is "the leading provider of software solutions that enable global 2000 companies to create and manage Employee Knowledge Networks (EKNs)." AskMe notes on its website that creating EKNs helps ensure that employees do not have to solve problems that others have already solved, i.e., "reinventing the wheel." It also enables employees in a firm to quickly find the individual with the appropriate expertise to solve a problem.
One thing the AskMe company discovered is that knowledge sharing is difficult in pyramid-shaped organizations with tall organizational structures, i.e., characterized by numerous layers of management. Knowledge sharing works much better where there is a flat organizational structure with a relatively short chain of command. However, managers have to be willing to accept suggestions, ideas and answers from their employees. When information flows in all directions -- even from the bottom of the organizational pyramid to the top -- some managers might feel that they are losing some of the status and authority of their position. After all, it is conceivable that someone in the mailroom might be able to answer a question that stumps top management. Knowledge can be found anywhere and everywhere. The power of knowledge sharing should not be underestimated. Linux, the extremely successful computer operating system, was developed by the collaboration of programmers all over the globe. Are Universities Learning Organizations? Before discussing universities, it might be instructive to examine whether schools -- especially primary and secondary ones -- are learning organizations. The evidence, albeit limited, indicates that they are not. Shields and Newton [22] examined schools that participated in the Saskatchewan School Improvement Program and found that they were not learning organizations. Isaacson and Bamburg [15] also came to the same conclusion. Schools rarely have visions, teachers rarely share knowledge with colleagues, and schools are managed with a top-down approach. Many others agree that schools have not functioned as learning organizations [5,10]. When Senge was asked by O'Neil [19] whether or not schools were learning organizations, he replied: "definitely not." Universities are not run like high schools or elementary schools and stress research/learning as much as (or more than) teaching. Despite this, it seems that very few universities would qualify as learning organizations. It is quite ironic that teaching organizations do not know how to learn. Most universities have little knowledge sharing and Smith [23] asserts that: "Academic departments serve as organizations that exhibit all the segmentary politics described by anthropologists: segmentation for largely demographic reasons, balanced opposition among themselves, and unitary resistance to a superordinate entity, usually the college or university as a whole." Harrington [13] believes that departments encourage loyalty to the discipline rather than to the university. Apparently, most universities are not learning organizations. Transforming the University into a Learning Organization The following are some suggestions that can be used to help transform the university into a learning organization. 1. Establish a message board to function as a research matchmaking service. As noted above, the most exciting research is often at the interface of two disciplines. Furthermore, researchers with expertise in one area (e.g., biology) might need to collaborate with a faculty member with expertise in another area (e.g., computer simulation or geology) in order to write a paper. Universities could provide a central message board where faculty members could state the area(s) in which they are doing research and the kind of co-author, if any, they seek. This site could also be used to find ideas for research. Senior faculty members might be willing to provide ideas for research in return for a byline on any resulting article.
If successful, this service can be extended to include faculty in other colleges. Universities have to understand that discouraging professors from writing co-authored papers is counterproductive. It is certainly not consistent with a key philosophy of a learning organization: sharing knowledge. Moreover, working with scholars from other disciplines creates a synergy that can result in truly innovative research. It is not uncommon in academe to find professors who continue to write essentially the same paper over and over with very little new information. There is nothing wrong with collaboration if it produces exciting research. One wonders whether James Watson and Francis Crick would have been as successful if they had worked alone. The Human Genome Project took 13 years and involved researchers from at least 18 countries. 2. Establish an online archive where faculty can post papers for review by colleagues before submitting them to journals. If the faculty at a university work together as a team and want their institution to flourish, they are more likely to provide helpful criticism. The late OpenTextProject (www.opentextproject.org) was an international site that allowed individuals to post their papers for pre-submission review. 3. There could be a Web site for every course, especially multiple-section courses taught by a number of different faculty. Faculty could submit their best ideas on how to teach the course and their best lectures. This site would then be a resource for students who have difficulties with the course and would also be a resource for faculty teaching the course. Most professors teaching a course have gotten useful ideas from other faculty teaching the same course. For instance, suppose we have a site for elementary statistics. This might be a course taught by 10 different instructors. Faculty teaching the course would be encouraged to post material dealing with statistics. This might take the form of syllabi, lectures, interesting examples, humorous ways to illustrate difficult concepts, computer programs to solve statistics problems, solved exercises, etc. One of the authors has a site for his corporate finance class and has heard that students taking the course with other instructors go to the site since it contains dozens of problems with solutions in the area of mathematics of finance. Professor N. Duru Ahanotu created a Web site (http://www.drduru.com/knowledge.html) for anyone interested in learning organizations and knowledge management. The corporate world is learning the value of the Web for e-training. The type of Web site described above can be especially useful to faculty teaching a course for the first time. Rather than learning the best way to teach a course through trial and error, they can go to the Web site for a particular course and see how colleagues have been teaching it. Many professors do indeed go to the Web to examine syllabi and course material from the same courses taught at schools all over the country. The problem is that the caliber of student may not be exactly the same. While it is still a good idea to see how a particular course is taught at other colleges, it will often be more useful to examine the materials used by colleagues in the same school. Interestingly, Dill [6] notes that a major weakness of universities has been in the "internal transfer of new knowledge." 
This is why it is not uncommon to find that "within most universities there are better performing units that have knowledge about improving teaching and learning from which other units could learn." 4. Knowledge sharing should not be limited to a university. Knowledge should be shared with the public. A Web site could be created providing helpful information for the general public. For instance, this site could have links to subjects such as small business management, marketing, personal finance, ESL, etc. Outsiders could learn these subjects online for free. Brew and Boud [4] assert that work-based learning is important for students. This means that there has to be a partnership between educators and workplace supervisors, especially with professional education. This, of course, requires knowledge sharing between academics and practitioners. 5. University administrators have to realize that the pyramid-shaped organizational structure makes little sense for an academic institution. Information should not only flow from the top to the bottom, i.e., president to provost to dean to chairs to faculty. The biggest impediments to the creation of learning organizations are the twin fears of change and of things that are new. Senior faculty often resist change. Indeed, Kuhn [17] found a similar phenomenon in the sciences. Kuhn described "normal science" as a state in which scientists who adhere to the old dominant paradigm resist the adoption of a new one. Kuhn [17:52] notes that "normal science does not aim at novelties of fact or theory and, when successful, finds none." Some of the best ideas might originate from junior faculty who often have a new perspective. Universities that want to be innovative have to allow information to flow from the bottom to the top, otherwise they will stagnate. Knowledge-sharing software could be used by administrators to get fresh ideas from the entire faculty. 6. Students have to be part of the knowledge sharing for a university to become a true learning organization. Many faculty members resist providing students with e-mail addresses, and brick-and-mortar office hours of three hours per week are ludicrous in the age of asynchronous communication. How many faculty members today would deal with a bank that was only open from 9 a.m. to 3 p.m., had no ATMs and no online banking? Information about majors can be automated. There could be a Web site where students can find out about any major, including requirements for the major and opportunities in the field. Sites consisting of FAQs (frequently asked questions) could be provided for students. Expert systems could be used to advise students as to whether they have the necessary prerequisites for a course. When you purchase a book at Amazon.com, the next time you come back you are greeted by name and other books are recommended to you based on your purchase history. Students could also automatically receive recommendations for courses based on their major and their registration history. 7. As noted above, many futurists believe that interdisciplinary majors will be vital to the future of universities. Many of the newer programs being developed at colleges all over the country are interdisciplinary. It is often very difficult to get academic departments to create interdisciplinary majors when each department is interested in protecting its own turf. Learning organizations stress cooperation, not protection of turf, and this might require a new organizational structure not based on departments.
Alternatively, department chairs could report to a "super" chair or dean with the responsibility for an entire school. The job of the "super" chair or dean would be to ensure that departments work together to create interdisciplinary programs and focus on what is best for the university as a whole, not just their own department. A discussion group in which faculty members could provide ideas for new programs could be established. Administrators could reward faculty and departments that create successful programs. 8. A learning organization cannot last long if members of the organization have no interest in learning. Unfortunately, a significant number of faculty (one number often quoted is 60%) never publish an article after they receive tenure and become associate professors. Incentives must be put in place to ensure that faculty continue to learn even after being promoted to full professor. Lifelong learning is now necessary in many professions, including medicine and law. It should also be encouraged in academe. Conclusion Establishing a paradigm of knowledge sharing and continuous growth through lifelong learning is not easy even, or perhaps especially, in academe. Interestingly, in these very turbulent times, many academicians are complacent and feel that there is no compelling need to make any serious changes. This is definitely a myopic way of thinking. Transforming colleges into learning organizations will not solve all problems, but it is certainly an important first step. References 1. Andrews, R. L., M. Flanigan, and D. S. Woundy. "Are Business Schools Sleeping Through a Wake-Up Call?" Decision Sciences Institute 2000 Proceedings, 1. 2000, 194-196. 2. Argyris, C. "Good Communication that Blocks Learning." Harvard Business Review, Vol. 72, No. 4, 1994, 77-85. 3. Argyris, C. and D. Schoen. Organizational Learning II: Theory, Method, and Practice. Reading, MA: Addison-Wesley, 1996. 4. Brew, A. and D. Boud. "Preparing for New Academic Roles: A Holistic Approach to Development." The International Journal for Academic Development, Vol. 1, No. 2, 17-25. 5. Conzemius, A. and W. Conzemius. "Transforming Schools into Learning Organizations." Adult Learning, Vol. 7, No. 4, 1996, 23-25. 6. Dill, D. D. "Academic Accountability and University Adaptation: The Architecture of the Academic Learning Organization." Higher Education, 38, 1999, 127-154. 7. Duderstadt, J. J. "A Choice of Transformations for the 21st-Century University." The Chronicle of Higher Education, 46, Feb. 4, 2000, B6-B7. 8. Duderstadt, J. J. "The Future of the University in an Age of Knowledge." The Journal of Asynchronous Learning Networks, 1, 1997, 78-88. 9. Edwards, R. "The Academic Department: How Does it Fit into the University Reform Agenda?" Change, 31, 1999, 17-27. 10. Fullan, M. "The School as Learning Organization: Distant Dreams." Theory into Practice, Vol. 34, No. 4, 1995, 230-235. 11. Garvin, D. A. "Building a Learning Organization." Harvard Business Review, Vol. 71, No. 4, 1993, 78-91. 12. Gazzaniga, M. "How to Change the University." Science, 1998, 237. 13. Harrington, F. H. "Shortcomings of Conventional Departments." In D. E. McHenry (Ed.), Academic Departments: Problems, Variations, and Alternatives. San Francisco: Jossey-Bass, 1977, 53-62. 14. Hollander, S. "Second Class Subjects? Interdisciplinary Studies at Princeton." The Daily Princetonian, April 24, 2000, 3. 15. Isaacson, N. and J. Bamburg. "Can Schools Become Learning Organizations?" Educational Leadership, Vol. 50, No. 3, 1992, 42-44. 16. Kolodny, A.
Failing the Future: A Dean Looks at Higher Education in the Twenty-First Century. Durham, NC: Duke University Press, 1998. 17. Kuhn, T. The Structure of Scientific Revolutions, 2nd ed. Chicago: University of Chicago Press, 1970. 18. Lenzner, R. and S. S. Johnson. "Seeing Things as They Really Are." Forbes, March 10, 1997, 122-131. 19. O'Neil, J. "On Schools as Learning Organizations." Educational Leadership, Vol. 52, No. 7, April 1995, 20-23. 20. Pedler, M., J. Burgoyne and T. Boydell. The Learning Company: A Strategy for Sustainable Development. New York: McGraw-Hill, 1991. 21. Senge, P.M. The Fifth Discipline. New York: Doubleday, 1990. 22. Shields, C. and E. E. Newton. "Empowered Leadership: Realizing the Good News." Journal of School Leadership, Vol. 4, No. 2, 1994, 171-196. 23. Smith, J. Z. "To Double Business Bound." In C. G. Schneider and W. S. Green (Eds.), Strengthening the College Major. San Francisco, CA: Jossey-Bass Inc. Publishers, 1993, 13-23. 24. Stata, R. "Organizational Learning -- The Key to Management Innovation." Sloan Management Review, Spring 1989, 63-74. From checker at panix.com Mon Dec 5 02:45:15 2005 From: checker at panix.com (Premise Checker) Date: Sun, 4 Dec 2005 21:45:15 -0500 (EST) Subject: [Paleopsych] J. Philippe Rushton: Ethnic nationalism, evolutionary psychology and Genetic Similarity Theory Message-ID: J. Philippe Rushton: Ethnic nationalism, evolutionary psychology and Genetic Similarity Theory Nations and Nationalism 11 (4), 2005, 489-507. © ASEN 2005 Department of Psychology, University of Western Ontario, London, Ontario, Canada This article builds on a paper prepared for the American Psychological Association (APA) and the Canadian Psychological Association (CPA) joint 'Initiative on Ethno-Political Warfare' (APA/CPA, 1997). I thank Aurelio J. Figueredo, Henry Harpending, Frank Salter and Pierre L. van den Berghe for comments on an earlier draft. ABSTRACT. Genetic Similarity Theory extends Anthony D. Smith's theory of ethno-symbolism by anchoring ethnic nepotism in the evolutionary psychology of altruism. Altruism toward kin and similar others evolved in order to help replicate shared genes. Since ethnic groups are repositories of shared genes, xenophobia is the 'dark side' of human altruism. A review of the literature demonstrates the pull of genetic similarity in dyads such as marriage partners and friendships, and even large groups, both national and international. The evidence that genes incline people to prefer others who are genetically similar to themselves comes from studies of social assortment, differential heritabilities, the comparison of identical and fraternal twins, blood tests, and family bereavements. DNA sequencing studies confirm some origin myths and disconfirm others; they also show that in comparison to the total genetic variance around the world, random co-ethnics are related to each other on the order of first cousins. Introduction Most theories of ethno-political conflict and nationalism focus on cultural, cognitive and economic factors, often with the assumption that modernisation will gradually reduce the effect of local antagonisms and promote the growth of more universalistic societies (Smith 1998).
However, purely socio-economic explanations seem inadequate to account for the rapid rise of nationalism in the former Soviet Bloc and too weak to explain the lethality of the conflicts between Tutsis and Hutus in Rwanda, Hindus, Muslims and Sikhs in the Indian subcontinent, and Croats, Serbs, Bosnians and Albanians in the former Yugoslavia, or even the level of animosity between Blacks, Whites and Hispanics in the US. Typically, analysts have also failed to consider the ethno-political repercussions of the unprecedented movement of peoples taking place in the world today (van den Berghe 2002). One of the hallmarks of true science is what Edward O. Wilson (1998) termed the unity of knowledge through the principle of consilience, in which the explanations of phenomena at one level are grounded in those at a lower level. Two prominent examples are the understanding of genetics in terms of biochemistry once the structure of the DNA molecule was worked out and, in turn, of chemistry in terms of atomic physics. Anthony D. Smith's theory of ethno-symbolism unifies knowledge in the consilient manner through its integration of history and psychology, thereby solving the problem that nationalism poses for purely socio-economic theories - the phenomena of mass devotion and the belief that one's own group is favourably unique, even 'chosen' (e.g. Smith 2000 and 2004; Guibernau and Hutchinson 2004; Hutchinson 2000). With its emphasis on a group's preexisting kinship, religious and belief systems fashioned into a sense of common identity and shared culture, however mythologised, Smith's theory explains what purely socio-economic theories do not, why the 'glorious dead' fought and died for their country. It is more robust than other theories because its research analyses show that myths, memories and especially symbols, foment and maintain a sense of common identity among the people unified in a nation. The ethno-symbolic perspective further unifies knowledge by highlighting interactions between ethnicity and nationhood. For example, Hutchinson (2000) described the episodic element in the history of countries as when national pride is augmented by events such as sudden new archaeological discoveries. By studying the ethnic character of modern nations over the long term, it is possible to identify recurring causes of national revivals, the role of cultural differences within nations, and the salience of national identities with respect to other allegiances. The current article presents 'Genetic Similarity Theory' to explain ethnic nepotism and people's need to identify and be with their 'own kind' (Rushton et al. 1984 and 1986; Rushton 1989a, 1995, 2004; Rushton and Bons 2005). Nationalists often claim that their nation has organic continuity and 'ties of blood' that make them 'special' and different from outsiders, a view not fully explained by ethno-symbolism. Although the term 'ethnicity' is recent, the sense of kinship, group solidarity and common culture to which it refers is often as old as the historical record (Hutchinson and Smith 1996). Genetic Similarity Theory extends Smith's theory and the unity of knowledge by providing the next link, the necessary biological mooring. Patriotism is almost always seen as a virtue and extension of family loyalty and is typically preached using kinship terms. Countries are called the 'motherland' or the 'fatherland'. Ethnic identity builds on real as well as putative similarity. 
At the core of human nature, people are genetically motivated to prefer others genetically similar to themselves. I will support this contention with current findings from evolutionary psychology and population genetics. The evolutionary background Starting with Charles Darwin's The Origin of Species (1859) and The Descent of Man (1871), evolutionary explanations of the moral sentiments have been offered for both humans and other animals. Nineteenth century evolutionists such as Herbert Spencer and William Graham Sumner built on the concepts of in-group-amity and out-group-enmity, group competition and group replacement. Tribes, ethnic groups, even nations were seen as extended families (see van der Dennen 1987, for a review). However, evolutionary explanations went out of favour during the 1920s and 1930s with the rise of fascism in Europe, largely because they were seen as providing a justification for racially based politics (Degler 1991). During the 1960s and 1970s, most biologists eschewed theories of group competition in favour of the mathematically 'cleaner' theories of individual adaptation, since the genetic mechanisms necessary for ethnocentrism to evolve remained quantitatively problematic. After several decades of neglect, evolutionary psychology has now regained scientific respectability (e.g. Badcock 2000; Buss 2003; Pinker 2002; Wilson 1998). In The Descent of Man (1871: 489-90), Darwin proposed the radical and far-reaching hypothesis that human morality rested on the same evolutionary basis as did the behaviour of other animals - reproductive success - described as the 'general good': The term, general good, may be defined as the rearing of the greatest number of individuals in full vigour and health, with all their faculties perfect, under the conditions to which they are subjected. As the social instincts both of man and the lower animals have no doubt been developed by nearly the same steps, it would be advisable, if found practicable, to use the same definition in both cases, and to take as the standard of morality, the general good or welfare of the community, rather than the general happiness; but this definition would perhaps require some limitation on account of political ethics. Historian Carl Degler (1991) observed that Darwin's equating of human and animal morality with the reproductive success of the community had the effect of biologising ethics. Suddenly, far-flung notions of economics, demographics, politics and philosophy, some of which had been centuries in the making, now revolved around a Darwinian centre, capturing the nineteenth century imagination and inspiring new analyses of the way society worked. The philosophy termed 'Social Darwinism', with its emphasis on the reproductive success of groups as well as of individuals, was taken up at every point along the political spectrum - from laissez-faire capitalism to communist collectivism to National Socialism (again see van der Dennen 1987, for a review). It was crucial for Darwin to emphasise the moral continuity between humans and other animals because the opponents of human evolution had argued for their discontinuity in both the moral and the intellectual spheres. Darwin departed from utilitarian philosophers such as John Stuart Mill and Jeremy Bentham who believed that human morality was based on making informed choices about the greatest happiness for the greatest number. As Darwin pointedly observed, that basis was rational rather than instinctive. 
Since human beings alone were said to be capable of following it, Darwin took exception to it. In The Descent, Darwin provided numerous examples of how animal morality led to reproductive success. All animals fight by nature in some circumstances but are altruistic in others. Acts of altruism include parental care, mutual defence, rescue behaviour, co-operative hunting, food sharing and self-sacrificial altruism. Darwin described how leaders of monkey troops act as sentinels and utter cries of danger or safety to their fellows; how even male chimpanzees might rush to the aid of infants that cried out under attack, even though the infants were not their own. Animal altruism - even to the point of self-sacrifice - has been massively confirmed since Darwin wrote The Descent (see E. O. Wilson 1975, for extended discussion).

Altruism involves self-sacrifice. Sometimes the altruist dies. For example, when bees defend their hive and sting intruders, the entire stinger is torn from the bee's body. Stinging an intruder is an act of altruistic self-sacrifice. In ants, if nest walls are broken open, soldiers pour out to combat foragers from other nests; at the same time, worker ants repair the broken walls, leaving the soldiers outside to die in the process.

Human warfare appears to be rooted in the evolved behaviour of our nearest primate relatives. Male chimpanzees patrol their territories in groups to keep the peace within the group and to repel invaders. Such patrols, of up to twenty bonded males at a time, raid rival groups, kidnap females and annex territory, sometimes fighting pitched battles in the process (Wrangham and Peterson 1996).

Solving the paradox of altruism

In The Origin, Darwin (1859) saw that altruism posed a major enigma for his theory of evolution. How could altruism evolve through 'survival of the fittest' if altruism means self-sacrifice? If the most altruistic members of a group sacrifice themselves for others, they will have fewer offspring to pass on the genes that made them altruistic. Altruism should not evolve, but selfishness should. Darwin was unable to resolve the paradox of altruism to his satisfaction because to do so required greater knowledge of how heredity worked than he had available (the word 'genetics' was not coined until 1905). Nonetheless, in The Descent, Darwin (1871) intuited the solution when he wrote, 'sympathy is directed solely towards members of the same community, and therefore towards known, and more or less loved members, but not all the individuals of the same species' (Vol. 1: 163).

In 1964, evolutionary biologist William Hamilton finally provided a generally accepted solution to the problem of altruism based on the concept of inclusive fitness, not just individual fitness. It is the genes that survive and are passed on. Some of the individual's most distinctive genes will be found in siblings, nephews, cousins and grandchildren as well as in offspring. Siblings share fifty per cent, nephews and nieces twenty-five per cent, and cousins about twelve and a half per cent of their distinctive genes. So when an altruist sacrifices its life for its kin, it ensures the survival of these common genes. The vehicle has been sacrificed to preserve copies of its precious cargo. From an evolutionary point of view, an individual organism is only a vehicle, part of an elaborate device, which ensures the survival and reproduction of genes with the least possible biochemical alteration.
'Hamilton's Rule' states that across all species, altruism (or, conversely, reduced aggression) is favoured when rb - c > 0, where r is the genetic relatedness between two individuals, b is the (genetic) fitness benefit to the beneficiary, and c is the fitness cost to the altruist. Evolutionary biologists have used Hamilton's 'gene's eye' point of view to carry out research on a wide range of social interactions including altruism, aggression, selfishness and spite. The formulation was dubbed 'kin selection theory' by John Maynard Smith (1964) and became widely known through influential books such as The Selfish Gene by Richard Dawkins (1976) and Sociobiology: the New Synthesis by Edward O. Wilson (1975).

In 1971, Hamilton extended his formulation and hypothesised that altruism would result from any degree of genetic relatedness, not just that based on immediate kin. Hamilton equated his genetic relatedness variable r to Sewall Wright's FST measure of within-group variance (typically r ≈ 2FST), and cited an experimental study of semi-isolated groups of mice where even random mating produced an FST of 0.18. Hamilton concluded that the within-group mice should therefore favour each other over those in the out-group, treating 'the average individual encountered as a relative closer than a grandchild (or half-sib) but more distant than an offspring (or full-sib)'.

In order to favour near kin over distant kin and distant kin over non-relatives, the organism must be able to detect degrees of genetic similarity in others. Hamilton (1964 and 1971) proposed several mechanisms by which detection could occur: (1) location or proximity to self, as in the rule 'if it's in the nest, it's yours'; (2) familiarity, which is learning through social interaction; (3) similarity-to-self through imprinting on self, parents or nest mates, as in the rule 'look for physical features that are similar to self' - dubbed the 'armpit effect' by Dawkins (1976); and (4) 'recognition alleles' or innate feature detectors that allow detection of genetic similarity in strangers in the absence of any mechanism of learning - dubbed the 'green beard effect' by Dawkins (1976). In the latter, a gene produces two effects: (a) creating a unique trait such as a green beard, and (b) preferring others who also have that trait. Hamilton and Dawkins both favoured an imprinting mechanism, which Hamilton (1971) suggested would be most effective if it occurred on the more heritable traits because these best indicate the underlying genotype.

There is dramatic evidence that many animal species do detect and then act on genetic similarity (Fletcher and Michener 1987; Hauber and Sherman 2001). In a classic study of bees, Greenberg (1979) bred for fourteen degrees of closeness to a guard bee, which blocks the nest to intruders. Only the more genetically similar intruders got through. A classic study of frog tadpoles separated before hatching and reared in isolation found the tadpoles moved to the end of the tank where their siblings had been placed, even though they had never encountered them previously, rather than to the end of the tank with non-siblings (Blaustein and O'Hara 1981). Squirrels produce litters that contain both full-siblings and half-siblings. Even though they have the same mother, share the same womb, and inhabit the same nest, full-siblings fight less often than do half-siblings. Full-siblings also come to each other's aid more often (Hauber and Sherman 2001).
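To make the arithmetic of Hamilton's rule and the r ≈ 2FST extension concrete, here is a minimal Python sketch (mine, not from the article); it uses the relatedness coefficients and the mouse FST of 0.18 quoted above, while the benefit and cost figures are invented for illustration.

# Hamilton's rule: altruism is favoured when r*b - c > 0.
RELATEDNESS = {
    'offspring': 0.50,
    'full sibling': 0.50,
    'half sibling / nephew or niece': 0.25,
    'first cousin': 0.125,
}

def altruism_favoured(r, b, c):
    """True when the inclusive-fitness benefit outweighs the cost."""
    return r * b - c > 0

# An act costing the altruist 1 fitness unit and yielding 3 units to the
# beneficiary passes for a full sibling (0.5 * 3 > 1) but fails for a
# first cousin (0.125 * 3 < 1).
for kin, r in RELATEDNESS.items():
    print(kin, altruism_favoured(r, b=3, c=1))

# Hamilton's 1971 extension: within semi-isolated groups, r is roughly
# 2 * FST. The mouse study's FST of 0.18 gives r of about 0.36 -
# between a half-sib (0.25) and an offspring (0.50), as Hamilton noted.
print(2 * 0.18)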
Similarity detection is also required for assortative mating, which occurs in insects, birds, mammals and even plants. Optimal outbreeding in some plants is promoted by acceptance of pollen from source plants that are neither too similar nor too dissimilar molecularly from the host plant's own pollen (see Hauber and Sherman 2001, for review). Even in species that disperse, the offspring typically show strong aversion to mating with close relatives. One study of wild baboons showed that paternal kin recognition occurs as frequently as maternal kin recognition, even though identifying paternal kin is much more difficult in species where the mother mates with more than one male (Alberts 1999).

Although in 1975 Hamilton extrapolated his ideas to human warfare, his formulations have only seldom been taken beyond immediate kin. In The Selfish Gene, Dawkins (1976) argued that the mathematics of kin selection soon made coefficients of relatedness, even between kin, vanishingly small. One example he offered was that Queen Elizabeth II, while a direct descendant of William the Conqueror (1066), is unlikely to share a single one of her ancestor's genes. In a 1981 editorial for Nature, Dawkins used similar arguments to rebut claims made by Britain's far-right National Front that kin selection theory provided a genetic justification for ethnocentrism. Perhaps feeling a moral obligation to condemn racism, some evolutionists minimised the theoretical possibility of a biological underpinning to ethnic or national favouritism. Hamilton himself (1987: 426) pithily commented, 'in civilized cultures, nepotism has become an embarrassment'. These qualifications turn out to have been overstated. Through assortative mating and other cultural practices, the selfish gene's capacity to replicate itself in combination with those clusters of other genes with which it works well may be extended for hundreds of generations, not three. Elizabeth II is considerably more genetically similar to William the Conqueror than she is to an average person alive today.

Genetic Similarity Theory

In 1984, the current author, along with Robin Russell and Pamela Wells, began to apply the Hamiltonian perspective to human dyads, small groups and even larger national and international entities (Rushton et al. 1984; Rushton 1986, 1989a, 2004; Rushton and Bons 2005). We dubbed our approach 'Genetic Similarity Theory' and reasoned that if genes produced effects that allowed bearers to recognise and favour each other, then altruistic behaviour could evolve well beyond 'kin selection'. By matching across the entire genome, people can maximise their inclusive fitness by marrying others similar to themselves, by liking, befriending and helping the most similar of their neighbours, and by engaging in ethnic nepotism. As the English language makes clear, 'likeness goes with liking'.

Social-assortment studies

Of all the decisions people make that affect their environment, choosing friends and spouses are among the most important. Genetic Similarity Theory was first applied to assortative mating, which kin-selection theory sensu stricto does not readily explain, since individuals seldom mate with 'kin'. Yet the evidence for assortative mating is pervasive in other animals as well as in humans.
For humans, both spouses and best friends are most similar on socio-demographic variables such as age, ethnicity and educational level (r = 0.60), next most on opinions and attitudes (r = 0.50), then on cognitive ability (r = 0.40), and least, but still significantly, on personality (r = 0.20) and physical traits (r = 0.20). Even marrying across ethnic lines 'proves the rule'. In Hawaii, men and women who married cross-ethnically were more similar in personality than those marrying within their group, suggesting that couples 'make up' for ethnic dissimilarity by choosing spouses more similar to themselves in other respects (Ahern et al. 1981). Evolution has also set an upper limit on 'like marrying like' - incest avoidance (van den Berghe 1983). Too close genetic similarity between mates increases the probability of 'double doses' of harmful recessive genes. The ideal mate is one who is genetically similar but not a close relative.

Several studies have shown that people prefer genetic similarity in social partners, and assort on the more heritable components of traits, rather than on the most intuitively obvious ones, just as Hamilton (1971) predicted they would if genetic mechanisms were involved. This occurs because more heritable components better reflect the underlying genotype. These studies have used homogeneous sets of anthropometric, cognitive, personality and attitudinal traits measured within the same ethnic group. Examples of varying heritabilities are: for physical attributes, eighty per cent for middle-finger length vs. fifty per cent for upper-arm circumference; for intelligence, eighty per cent for the general factor vs. less than fifty per cent for specific abilities; for personality items, seventy-six per cent for 'enjoying meeting people' vs. twenty per cent for 'enjoying being unattached'; and for social attitudes, fifty-one per cent for agreement with the 'death penalty' vs. twenty-five per cent for agreement with 'Bible truth'.

In a study of married couples, Russell et al. (1985) found that across thirty-six physical traits, spousal similarity was greater on attributes with higher heritability such as wrist circumference (seventy-one per cent heritable) than it was on attributes with lower heritability such as neck circumference (forty-eight per cent heritable). On fifty-four indices of personality and leisure time pursuits, Rushton and Russell (1985) found that spousal similarity was greater on items such as 'enjoying reading' (forty-one per cent heritable) than on items such as 'having many hobbies' (twenty per cent heritable). On twenty-six cognitive ability tests, Rushton and Nicholson (1988) found that spousal resemblance was greater on the more heritable subtests from the Hawaii Family Study of Cognition and the Wechsler Adult Intelligence Scale (WAIS). When spouses assort on more heritable items, they report greater marital satisfaction (Russell and Wells 1991).

In a study of best friends, Rushton (1989b) found that across a wide range of anthropometric and social attitude measures, such as agreement with 'military drill' (forty per cent heritable) and with 'church authority' (twenty-five per cent heritable), the similarity of the friends was more pronounced on the more heritable measures. These results were extended to liking in acquaintances by Tesser (1993), who manipulated people's beliefs about how similar they were to others on attitudes pre-selected as being either high or low in heritability.
Tesser found that people liked others more when their similarity had been chosen (by him) on the more heritable items.

The above results cannot be explained by culturalist theories. Genetic Similarity Theory and culturalist theory make opposite predictions about social assortment. Cultural theory predicts that phenotype matching by spouses will be greater on those traits that spouses have become more similar on through the shared experiences that shape attitudes, leisure time activities and waist and bicep size (e.g. through diet and exercise). Genetic Similarity Theory, on the other hand, predicts greater matching on the more heritable traits (e.g. wrist size and middle-finger length, not easily changed).

Twin and adoption studies

Several twin and adoption studies show that the preference for genetic similarity is heritable, that is, people are genetically inclined to prefer similar partners. In one of these studies, Rowe and Osgood (1984) analysed data on delinquency from several hundred adolescent monozygotic (MZ) twin pairs, who share one hundred per cent of their genes, and dizygotic (DZ) twin pairs, who share fifty per cent of their genes. They found that adolescents genetically inclined to delinquency were also genetically inclined to seek out similar others as friends. Dovetailing with these results, Daniels and Plomin (1985) examined friendships in several hundred pairs of siblings from both adoptive and non-adoptive homes, and found that whereas biological siblings (who share genes as well as environments) had friends who resembled each other, adoptive siblings (who share only their environment) had friends who were not at all similar to each other. These results show that shared genes lead to similar friends.

Rushton and Bons (2005) analysed a 130-item questionnaire on personality and social attitudes gathered from several hundred pairs of identical twins, fraternal twins, their spouses and their best friends. They found that: (a) spouses and best friends are about as similar as siblings, a level of similarity not previously recognised; and (b) identical twins choose spouses and best friends more similar to their co-twin than do non-identical twins. The preference for similarity is about thirty per cent heritable. Moreover, once again, matching for similarity was greater on the more heritable items, showing that social assortment is based on the underlying genotype. Similarity was greater on items such as preferring 'business to science' (sixty per cent heritable) than on liking to 'travel the world alone' (twenty-four per cent heritable).

Blood group studies

Yet another way of testing the hypothesis that humans typically choose mates and friends who are genetically similar is to examine blood antigens. In one study, Rushton (1988) analysed seven polymorphic marker systems at ten blood loci across six chromosomes (ABO, Rhesus [Rh], MNSs, Kell, Duffy [Fy], Kidd [Jk] and HLA) in 1,000 cases of disputed paternity, limited to people of North European appearance (judged by photographs). Couples who produced a child together were fifty-two per cent similar, but those that did not were only forty-three per cent similar. Subsequently, Rushton (1989b) used these blood tests with pairs of male best friends of similar background and found the friends were significantly more similar to each other than they were to randomly matched pairs from the same database.
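Before turning to the bereavement work, here is a toy Python illustration (mine, not from the cited studies) of the heritability-matching logic behind the assortment research above: across a set of items, similarity scores should correlate positively with heritability estimates. All numbers below are invented placeholders shaped like the wrist-vs-neck and 'enjoying reading' examples, not data from the studies.

from statistics import correlation  # Python 3.10+

# Invented (item, heritability, spousal similarity) triples.
items = [
    ('middle-finger length', 0.80, 0.45),
    ('wrist circumference',  0.71, 0.40),
    ('neck circumference',   0.48, 0.28),
    ('enjoying reading',     0.41, 0.26),
    ('having many hobbies',  0.20, 0.15),
]

heritabilities = [h for _, h, _ in items]
similarities = [s for _, _, s in items]

# A positive correlation across items is the Genetic Similarity Theory
# prediction; culturalist theories predict no such heritability gradient.
print(correlation(heritabilities, similarities))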
Bereavement studies

Within-family bereavement studies show just how fine-tuned human preferences for genetic similarity can be. One study of 263 child bereavements found that (a) spouses agreed seventy-four per cent of the time on which side of the family a child 'took after' the most, their own or that of their spouse, and (b) the grief intensity reported by mothers, fathers and grandparents was greater for children who resembled their side of the family than it was for children who resembled the other side of the family (Littlefield and Rushton 1986). A study of bereavement in twins found that MZ twins, who share one hundred per cent of their genes, compared to DZ twins, who share fifty per cent of their genes: (a) work harder for their co-twin; (b) show more physical proximity to their co-twin; (c) express more affection to their co-twin; and (d) show greater loss when their co-twin dies (Segal 2000).

Other lines of research

Women prefer the bodily scents of men with genes similar to their own more than they do those of men with nearly identical genes or genes totally dissimilar to their own (Jacob et al. 2002). Each woman's choice was based upon the human leukocyte antigen (HLA) gene sequence - the basis for personal odours and olfactory preferences - inherited from her father but not her mother. Another study found that both men and women rated versions of their own face as the most attractive after they had been computer-morphed into faces of the opposite sex, even though they did not recognise the photos as images of themselves (Penton-Voak et al. 1999). Similarly, people whose faces were morphed with strange faces trusted others most when the others looked like themselves (DeBruine 2002). Familiarity was ruled out by using morphs of celebrities; only self-resemblance mattered.

The gravity of groups

The pull of genetic similarity does not stop at family and friends. Group members move into ethnic neighbourhoods and join together in clubs and societies. Since people of the same ethnic group are genetically more similar to each other than to members of other groups, they favour members of their own group over outsiders. In his groundbreaking book, The Ethnic Phenomenon, van den Berghe (1981) applied kin-selection theory to explain why people everywhere are prone to develop ethnocentric attitudes toward those who differ in dress, dialect and other appearance, and how even relatively open and assimilative ethnic groups 'police' their boundaries against invasion by strangers by using 'badges' as markers of group membership. Van den Berghe hypothesised that these 'badges' would typically be cultural, such as scarification, linguistic accent and clothing style, rather than physical. He agreed that shared traits of high heritability could provide more reliable indicators than cultural, flexible ones, but he thought these heritability indices would likely only be relevant to modern times, when they could be used to discriminate between widely differing groups such as the Boers and Xhosa.

The studies I reviewed above on kin recognition in animals and social assortment in humans show that the preference for similarity is fine-tuned. It takes place within ethnic groups, even families, and it occurs on the more heritable items from sets of homogeneous traits. As such, the process is considerably more variegated, subtle and powerful than van den Berghe (1981) conjectured. (His 1989 position paper went further toward acknowledging the more 'primordial' elements involved.)
The reviewed data confirm Hamilton's (1971) prediction that kin-recognition systems would use the more heritable attributes of others if they were based on mechanisms such as imprinting-on-self (Dawkins's 'armpit effect') and recognition alleles (Dawkins's 'green beard effect'). Detecting degrees of genetic similarity is much more fine-tuned than simply determining whether someone is a Boer or a Xhosa. The question is: how similar to one is the particular Boer (or Xhosa)?

In his 2003 book On Genetic Interests, Frank Salter, a political ethologist at the Max Planck Institute in Munich, extrapolated genetic similarity theory and the logic of taking all shared genes into account to also explain ethnic nepotism. He showed how Hamilton's (1964, 1971, 1975) coefficient of relatedness (r) equated to the FST estimates of genetic variance (on average r ≈ 2FST) that had become available (e.g. Cavalli-Sforza et al. 1994). Since FST provides both a measure of genetic distance between populations and of kinship within them, it followed that in comparison to the total genetic variance around the world, random members of any one population group are related to each other on the order of r ≈ 0.25, or 1/4, about the same as half-siblings. (A general rule would be: if a fellow ethnic looks like you, then on average, he or she is genetically equivalent to a cousin.)

Salter's analysis of Cavalli-Sforza's FST data showed that if the world population were wholly English, then the kinship between any random pair of Englishmen would be zero. But if the world population consisted of both English people and Germans, then two random English people (or Germans) would have a kinship of 0.0044, or that of 1/32 of a cousin. As genetic distances between populations become larger, the kinship coefficient between random co-ethnics within a population increases. Two English people become the equivalent of 3/8 cousin by comparison with people from the Near East; 1/2 cousin by comparison with people from India; half-sibs by comparison with people from China or East Africa; and like full-sibs (or children) compared with people from South Africa. Since people have many more co-ethnics than relatives, the aggregate of genes they share with their fellow ethnics dwarfs those they share with their extended families. Rather than being a mere poor relation of family nepotism, ethnic nepotism is virtually a proxy for it.

In two other books, Salter (2002 and 2004) and his colleagues found that ethnic bonds are central to explaining such diverse phenomena as ethnic mafias, minority middleman networks, heroic freedom fighters, the welfare state, generous foreign aid and charity in all its more unstinting manifestations. One study examined street beggars in Moscow. Some were ethnic Russians, just like the vast majority of the pedestrians. Others were dressed in the distinctive garb of Moldova, a small former Soviet republic ethnically and linguistically kin to Romania. Finally, some of the beggars were darker-skinned Roma (Gypsies). The Russian pedestrians preferred to give to their fellow Russians, with their fellow Eastern Europeans, the Moldavians, second. The Gypsies were viewed so negatively that they had to resort to a wide variety of tactics, ranging from singing and dancing, to importuning tightwads, to sending out groups of young children to beg.
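Salter's conversion from FST to within-group kinship, described above, can be sketched in a few lines of Python. This is my own illustration, not Salter's calculation: the FST value is a placeholder chosen to reproduce the reported English-German kinship of 0.0044, not a figure taken from Cavalli-Sforza et al. (1994).

def kinship_from_fst(fst):
    # Hamilton's approximation: relatedness among random co-ethnics,
    # measured against the pooled gene pool, is roughly 2 * FST.
    return 2 * fst

def cousin_equivalents(r):
    # First cousins have relatedness 0.125, so express r in cousin units.
    return r / 0.125

# A placeholder FST of 0.0022 for English vs. Germans reproduces the
# reported kinship of 0.0044 - about 1/28 of a cousin, in line with the
# text's rounded figure of 1/32.
r = kinship_from_fst(0.0022)
print(r, cousin_equivalents(r))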
In an earlier study, anthropologist Colin J. Irwin (1987) tested formulations of in-group co-operation in inbred populations by calculating coefficients of consanguinity within and between various Eskimo tribes and sub-tribes in the western Hudson's Bay region of Canada. He found that prosocial behaviour, such as wife exchange, and anti-social behaviour, such as the genocidal killing of women and children during warfare, followed lines of genetic distance, albeit mediated by ethnic badging such as dialect and appearance.

Even very young children typically show a clear preference for others of their own ethnic heritage (Aboud 1988). In fact, the process of making racial groupings has been shown to result from a natural tendency to classify people into 'kinds'. Children quickly begin to sort people into 'basic kinds' by sex, age, size and occupation. Experiments show that at an early age children clearly expect race to run in families (Hirschfeld 1996). Very early in life, a child knows which race it belongs to, and which ones it doesn't.

The whisper of the genes

The history of the Jewish people provides a well-documented example of how genetic similarity theory intersects with Anthony D. Smith's (2000 and 2004) ethno-symbolic approach. As shown by Batsheva Bonne-Tamir at Tel Aviv University (e.g. Bonne-Tamir and Adam 1992) and by others (e.g. Thomas et al. 2002), Jewish groups are genetically similar to each other even though they have been scattered around the world for two millennia. Jews from Iraq and Libya share more genes with Jews from Germany, Poland and Russia than either group shares with the non-Jewish populations among whom they have lived for centuries. Although the Ethiopian Jews turn out not to be 'genetically Jewish', many other far-removed Jewish communities share a similar genetic profile despite large geographic distances between the communities and the passage of hundreds of years.

Genetic Similarity Theory predicts that many other seemingly purely cultural divides are, in fact, rooted in the underlying population genetics. Recent DNA sequencing of the ancient Hindu caste system has confirmed that higher castes are more genetically related to Europeans than are lower castes, who are genetically more related to other south Asians (Bamshad et al. 2001). Although outlawed in 1960, the caste system continues to be the main feature of Indian society, with powerful political repercussions.

Genetic studies can thus confirm (or disconfirm) people's ideas about their origins. In the case of Jews and the Indian caste system, traditional views have been confirmed. Israel is a new state, yet one which is built on an ancient tradition of ethnicity and nationhood. Much recent analysis of Israeli society, however, has tended to downplay connections between modern Israel and pre-modern Jewish identity, seeing Israel rather as an unambiguously modern phenomenon (cf. Smith 2000). Some Jews have greeted the genetic 'validation' positively because it affirms the organic nature of the Jewish people. However, it is also recognised as a two-edged sword that could be invoked in claims from certain quarters that a 'Jewish race is working to dominate the world'. Hindu nationalists have expressed similar mixtures of feelings. While pleased to confirm 'Aryan' origins, they fear a backlash over elitism and exclusivity.
In other cases, genetic evidence refutes origin myths, such as the claims that the Chinese gene pool goes back a quarter of a million years to Peking Man, or that Amerindians have always existed on the American continent rather than being only the most ancient of 'immigrants' (Rushton 1995). Genetic distance studies are likely to play an increasing role in debates about ancestral custodial rights over disputed territory.

People can be predicted to adopt ideologies that work in their genetic self-interest. Examples of ideologies that have been shown, on analysis, to increase genetic fitness are religious beliefs that regulate dietary habits, sexual practices, marital customs, infant care and child rearing (Lumsden and Wilson 1981). Amerindian tribes that cooked maize with alkali had higher population densities and more complex social organisations than tribes that did not, partly because alkali releases the most nutritious parts of the cereal, enabling more people to grow to reproductive maturity. The Amerindians did not know the biochemical reasons for the benefits of alkali cooking, but their cultural beliefs had evolved for good reason, enabling them to replicate their genes more effectively than would otherwise have been the case.

Political interests are typically presented in terms of high ethical standards, no matter how transparent these appear to opponents. Consider the competing claims of Palestinians and Israelis, or the Afrikaners and the Bantus. Psychological explanation is made especially difficult since the rival groups construct very different histories of the conflict and all parties tend to see themselves as victims whose story has not been told. Because ethnic aspirations are rarely openly justified in terms of naked self-interest, analyses need to go deeper than surface ideology.

Political issues are especially explosive when survival and reproduction are at stake. Consider the growth of Middle Eastern suicide bombers. Polls conducted among Palestinian adults from the Gaza Strip and the West Bank show that about seventy-five per cent support suicidal attacks, whereas only about twelve per cent are opposed (Margalit 2003). Many families state that they are proud of their kin who become martyrs. Most analyses of the motives of suicide bombings emphasise unique aspects such as the Palestinian or Iraqi political situation, the teachings of radical Islam, or a popular culture saturated with the glorification of martyrs. These political factors play an indispensable role, but from an evolutionary perspective aspiring to universality, people have evolved a 'cognitive module' for altruistic self-sacrifice that benefits their gene pool. In an ultimate rather than proximate sense, suicide bombing can be viewed as a strategy to increase inclusive fitness.

What reasons do suicide bombers themselves give for their action? Many invoke the rhetoric of Islam while others appeal to political and economic grievances. Mahmoud Ahmed Marmash, a twenty-one-year-old bachelor from Tulkarm who blew himself up near Tel Aviv in May 2001, said in a videocassette recorded before he went on his mission (cited in Margalit 2003):

   I want to avenge the blood of the Palestinians, especially the blood of the women, of the elderly, and of the children, and in particular the blood of the baby girl Iman Hejjo, whose death shook me to the core.

Many other national groups have produced suicide warriors. The term 'zealot' originates in a Jewish sect that existed for about seventy years in the first century CE.
According to the classical historian Flavius Josephus (1981), an extreme revolutionary faction among them assassinated Romans and Jewish collaborators with daggers, an act that greatly reduced their own chances of staying alive. A group of about 1,000 Zealots, including women and children, chose to commit suicide at the fortress of Masada rather than surrender to the Romans. Masada today is one of the Jewish people's greatest symbols. Israeli soldiers take an oath there: 'Masada shall not fall again'. Soldier armies - the Japanese kamikaze, or the Iranian basiji - have carried out suicide attacks against enemy combatants. Winston Churchill contemplated the use of suicide bombers against the Germans if they invaded Britain (see Cornwell 2003). Some of the Tamil Tigers of Sri Lanka, who are Hindus, have killed themselves in attacks on politicians and army installations, and they have done so with utter disregard for the lives of civilians who happened to be around.

Genes, of course, typically only 'whisper' their wishes rather than shout. They keep cultures on a long rather than a short leash (to use Lumsden and Wilson's 1981 metaphor). This allows for pragmatism and flexibility in the strategies that groups adopt to serve their aspirations. For example, Zubaida (2004) noted that the ideological weapons Arabs have employed to further their cause against political dominance by the Ottoman Turks (who were fellow Muslims), the Western Great Powers, the United States and now Israel have alternated between Islam and nationalism, with all the continuities and contradictions in between. Zubaida (2004) also noted that Turkish, Egyptian and Iranian Islamisms (and sometimes anti-Islamisms) have often been national, and often nationalistic. Across the Muslim world, Arabs have often seen themselves as the mainstay of Islam, and Islam as the national culture of the Arabs. Nationalism became unpopular when it failed to satisfy Arab aspirations and is now often seen as an import from the West to 'divide and conquer'. Although fundamentalism is typically seen as subversive by Arab regimes, ethnic nationalists often celebrate it as a demonstration of revolutionary power. The Shi'ite Revolution in the non-Arabic but Islamic Republic of Iran, for example, served as an example not only for Islamists, but also for many nationalists and leftists in the Arab world.

The political pull of ethnic identity and genetic similarity also explains voting behaviour. The re-election victory of George W. Bush in the 2004 US presidential election was largely attributed to White votes and to the higher value placed by these voters on 'values' than on the economy. A closer look at the demographics reveals that 'values' may be, at least in part, a proxy for ethnic identity and genetic similarity. The majority of White Americans voted for the candidate who - along with his family - appeared most to look, speak and act like them (Brownstein and Rainey 2004).

Another timely example is the growth of Christian fundamentalism in the United States. Analyses show that it represents a reaction to what is perceived as the moral breakdown of society (Marty and Appleby 1994). Because of trends in the mass media and education system, many religious people believe they now live in a hostile culture where their core values are under siege. The issue on which they are most politically active is opposition to abortion.
One hypothesis to be investigated is that, if estimates of genetic similarity could be obtained, fundamentalists would prove close to each other and to the basic Anglo-Saxon gene pool. If so, it would be informative to know what percentage of the estimated fifty million women who have had legal abortions in the United States since 1973 were of that ethnic background.

Conclusion

Genetic similarity, of course, is only one of many possible influences operating on political alliances. Causation is complex and there is no value in reducing relationships between ethnic groups to a single factor. Fellow ethnics will not always stick together, nor is conflict inevitable between groups any more than it is between genetically distinct individuals. In addition to reproductive success, individuals also work for motives such as economic success. However, as van den Berghe (1981) pointed out, from an evolutionary perspective, the ultimate measure of human success is not production but reproduction. Behavioural outcomes are always mediated by multiple causes. Nonetheless, genetic similarity can be expected to play a clear role in the social behaviour of small groups and even of large ones, both national and international.

The hypothesis presented here is that because fellow ethnics carry copies of the same genes, ethnic consciousness is rooted in the biology of altruism and mutual reciprocity. Thus ethnic nationalism, xenophobia and genocide can become the 'dark side' of altruism. Moreover, shared genes can govern the degree to which an ideology is adopted (e.g. Rushton 1986 and 1989a). Some genes will replicate better in some cultures than in others. Religious, political and class conflicts become heated because they affect genetic fitness. Karl Marx did not take his analysis far enough: ideology may be the servant of economic interest, but genes influence both. Since individuals have a greater concentration of genetic interest (inclusive fitness) in their own ethnic group than they do in other ethnic groups, they can be expected to adopt ideas that promote their group over others. Political ethologist Frank Salter (2003) refers to ideologies as 'fitness portfolios', and psychologist Kevin MacDonald (2001) has described co-ethnics as engaging in 'group evolutionary strategies'. It is because genetic interests are a powerful force in human affairs that ethnic insults so easily lead to violence. Although social scientists and historians have been quick to condemn the extent to which political leaders or would-be leaders have been able to manipulate ethnic identity, the questions they never ask, let alone attempt to answer, are: 'Why is it always so easy?' and 'Why can a relatively uneducated political outsider set off a riot simply by uttering a few well-delivered ethnic epithets?'

Many caveats must be noted to the theoretical approach described here. Thus, Salter (2003) concluded that although (a) ethnic bonds can be adaptive because they unite people in defence of shared interests, and (b) down-sizing ethnicity through multiculturalism might change the competitive advantage of particular groups for dominance but is unlikely to eliminate ethnic identity from our nature as social beings, nonetheless (c) there are many examples of how maladapted modern humans are for defending their ethnic interests, owing to the competing demands of family and immediate kin and the sheer complexity of modern societies, including the impacts of cultural factors (see his Chapter 6).
It would be incorrect to over-generalise findings on genetic similarity and reify primordialism or resurrect ideas of organic nationalism. Rather, the potential is provided for an even more nuanced ethno-symbolic approach to the forces operating both within and between countries, many of which can otherwise seem irrational. Although the modern idea of citizenship has replaced the bond of ethnicity ('people who look and talk like us') with that of values ('people who think and behave like us'), the politics of ethnic identity are increasingly replacing the politics of class as the major threat to the stability of nations. Patriotic feeling is much more than a delusion constructed by elites for their own purpose. The ethno-symbolic approach anchors the psychology of social identity in national identities and in previously existing ethnicities and their 'sacred' traditions and customs (e.g. Smith 2000 and 2004). Ethnic communities have been present in every period and have played an important role in all societies on every continent. The sense of common ethnicity remains a major focus of identification for individuals today. Genetic Similarity Theory helps to explain why.

References

Aboud, Frances. 1988. Children and Prejudice. London: Blackwell.
Ahern, F. M., R. E. Cole, R. C. Johnson and B. Wong. 1981. 'Personality attributes of males and females marrying within vs. across racial/ethnic groups', Behavior Genetics 11: 181-94.
Alberts, Susan C. 1999. 'Paternal kin discrimination in wild baboons', Proceedings of the Royal Society of London, B 266: 1501-6.
APA/CPA. 1997. Ethnopolitical Warfare: Origins, Intervention, and Prevention. A Joint Presidential Initiative of the Presidents-Elect of the American Psychological Association and the Canadian Psychological Association. Washington, DC: American Psychological Association.
Badcock, Christopher. 2000. Evolutionary Psychology: a Critical Introduction. Cambridge: Polity Press.
Bamshad, Michael, Toomas Kivisild, W. Scott Watkins, Mary E. Dixon, Chris E. Ricker, Baskara B. Rao, J. Mastan Naidu, B. V. Ravi Prasad, P. Govinda Reddy, Arani Rasanayagam, Surinder S. Papiha, Richard Villems, Alan J. Redd, Michael F. Hammer, Son V. Nguyen, Marion L. Carroll, Mark A. Batzer and Lynne B. Jorde. 2001. 'Genetic evidence on the origins of Indian caste populations', Genome Research 11(6): 994-1004.
Blaustein, A. R. and R. K. O'Hara. 1981. 'Genetic control for sibling recognition?', Nature 290: 246-8.
Bonne-Tamir, Batsheva and Avinoam Adam (eds.). 1992. New Perspectives on Genetic Markers and Diseases among Jewish People. Oxford: Oxford University Press.
Brownstein, R. and R. Rainey. 2004. 'Bush's huge victory in the fast-growing areas beyond the suburbs alters the political map', Los Angeles Times 22 November: A1, A14-A15.
Buss, David M. 2003. Evolutionary Psychology: the New Science of the Mind. Needham Heights, MA: Allyn & Bacon.
Cavalli-Sforza, Luigi L., Paolo Menozzi and Alberto Piazza. 1994. The History and Geography of Human Genes. Princeton, NJ: Princeton University Press.
Cornwell, John. 2003. Hitler's Scientists: Science, War, and the Devil's Pact. New York: Penguin.
Daniels, Denise and Robert Plomin. 1985. 'Differential experience of siblings in the same family', Developmental Psychology 21: 747-60.
Darwin, Charles. 1859. The Origin of Species. London: Murray.
Darwin, Charles. 1871. The Descent of Man. London: Murray.
Dawkins, Richard. 1976. The Selfish Gene. Oxford: Oxford University Press.
Dawkins, Richard. 1981. 'Selfish genes in race or politics', Nature 289: 528.
DeBruine, Lisa M. 2002. 'Facial resemblance enhances trust', Proceedings of the Royal Society of London, B 269: 1307-12.
Degler, Carl N. 1991. In Search of Human Nature. New York: Oxford University Press.
Fletcher, D. J. C. and C. D. Michener. 1987. Kin Recognition in Animals. New York: Wiley.
Greenberg, L. 1979. 'Genetic component of bee odor in kin recognition', Science 206: 1095-7.
Guibernau, Montserrat and John Hutchinson (eds.). 2004. History and National Destiny: Ethnosymbolism and its Critics. London: Blackwell.
Hamilton, William D. 1964. 'The genetical evolution of social behavior. I and II', Journal of Theoretical Biology 7: 1-52.
Hamilton, William D. 1971. 'Selection of selfish and altruistic behaviour in some extreme models' in J. F. Eisenberg and W. S. Dillon (eds.), Man and Beast: Comparative Social Behavior. Washington, DC: Smithsonian Press, 57-91.
Hamilton, William D. 1975. 'Innate social aptitudes of man: an approach from evolutionary genetics' in R. Fox (ed.), Biosocial Anthropology. London: Malaby Press, 133-55.
Hamilton, William D. 1987. 'Discriminating nepotism: expectable, common, overlooked' in D. J. C. Fletcher and C. D. Michener (eds.), Kin Recognition in Animals. New York: Wiley, 417-37.
Hauber, Mark E. and Paul W. Sherman. 2001. 'Self-referent phenotype matching: theoretical considerations and empirical evidence', Trends in Neurosciences 24(10): 609-16.
Hirschfeld, Lawrence A. 1996. Race in the Making: Cognition, Culture, and the Child's Construction of Human Kinds. Cambridge, MA: MIT Press.
Hutchinson, John. 2000. 'Ethnicity and modern nations', Ethnic and Racial Studies 23(4): 651-69.
Hutchinson, John and Anthony D. Smith (eds.). 1996. Ethnicity. Oxford: Oxford University Press.
Irwin, Colin J. 1987. 'A study in the evolution of ethnocentrism' in V. Reynolds, V. S. E. Falger and I. Vine (eds.), The Sociobiology of Ethnocentrism: Evolutionary Dimensions of Xenophobia, Discrimination, Racism, and Nationalism. London: Croom Helm, 131-56.
Jacob, Suma, Martha K. McClintock, Bethanne Zelano and Carole Ober. 2002. 'Paternally inherited HLA alleles are associated with women's choice of male odor', Nature Genetics 30(2): 175-9.
Josephus, Flavius. 1981. The Jewish War (revised edn by E. M. Smallwood of G. A. Williamson translation). New York: Penguin.
Littlefield, Christine H. and J. Philippe Rushton. 1986. 'When a child dies: the sociobiology of bereavement', Journal of Personality and Social Psychology 51: 797-802.
Lumsden, Charles J. and Edward O. Wilson. 1981. Genes, Mind, and Culture: the Coevolutionary Process. Cambridge, MA: Harvard University Press.
MacDonald, Kevin. 2001. 'An integrative evolutionary perspective on ethnicity', Politics and the Life Sciences 20(1): 67-80.
Margalit, Avishai. 2003. 'The suicide bombers', The New York Review of Books 50(1), 16 January.
Marty, M. and R. S. Appleby. 1994. Fundamentalism Observed: the Fundamentalism Project. Chicago, IL: University of Chicago Press.
Maynard Smith, John. 1964. 'Group selection and kin selection', Nature 201: 1145-7.
Penton-Voak, I. S., D. I. Perrett and J. W. Pierce. 1999. 'Computer graphic studies of the role of facial similarity in judgements of attractiveness', Current Psychology 18: 104-17.
Pinker, Steven. 2002. The Blank Slate: the Modern Denial of Human Nature. New York: Viking.
Rowe, David C. and D. W. Osgood. 1984. 'Heredity and sociological theories of delinquency: a reconsideration', American Sociological Review 49: 526-40.
Rushton, J. Philippe. 1986. 'Gene-culture coevolution and genetic similarity theory: implications for ideology, ethnic nepotism, and geopolitics', Politics and the Life Sciences 4(2): 144-8.
Rushton, J. Philippe. 1988. 'Genetic similarity, mate choice, and fecundity in humans', Ethology and Sociobiology 9(6): 329-33.
Rushton, J. Philippe. 1989a. 'Genetic similarity, human altruism, and group selection', Behavioral and Brain Sciences 12(3): 503-59.
Rushton, J. Philippe. 1989b. 'Genetic similarity in male friendships', Ethology and Sociobiology 10(5): 361-73.
Rushton, J. Philippe. 1995. Race, Evolution, and Behavior. New Brunswick, NJ: Transaction.
Rushton, J. Philippe. 2004. 'Genetic and environmental contributions to prosocial attitudes: a twin study of social responsibility', Proceedings of the Royal Society of London, B 271: 2583-5.
Rushton, J. Philippe and Trudy A. Bons. 2005. 'Mate choice and friendship in twins: evidence for genetic similarity', Psychological Science 16(7): 555-9.
Rushton, J. Philippe and Ian R. Nicholson. 1988. 'Genetic similarity theory, intelligence, and human mate choice', Ethology and Sociobiology 9(1): 45-57.
Rushton, J. Philippe and Robin J. H. Russell. 1985. 'Genetic similarity theory: a reply to Mealey and new evidence', Behavior Genetics 15: 575-82.
Rushton, J. Philippe, Robin J. H. Russell and Pamela A. Wells. 1984. 'Genetic similarity theory: beyond kin selection', Behavior Genetics 14: 179-93.
Rushton, J. Philippe, Christine H. Littlefield and Charles J. Lumsden. 1986. 'Gene-culture coevolution of complex social behavior: human altruism and mate choice', Proceedings of the National Academy of Sciences, USA 83(19): 7340-3.
Russell, Robin J. H. and Pamela A. Wells. 1991. 'Personality similarity and quality of marriage', Personality and Individual Differences 12: 406-12.
Russell, Robin J. H., Pamela A. Wells and J. Philippe Rushton. 1985. 'Evidence for genetic similarity detection in human marriage', Ethology and Sociobiology 6(3): 183-7.
Salter, Frank. 2002. Risky Transactions: Trust, Kinship and Ethnicity. London: Berghahn.
Salter, Frank. 2003. On Genetic Interests: Family, Ethny and Humanity in an Age of Mass Migration. Frankfurt, Germany: Peter Lang.
Salter, Frank (ed.). 2004. Welfare, Ethnicity, and Altruism: New Findings and Evolutionary Theory. New York: Frank Cass.
Segal, Nancy L. 2000. Entwined Lives: Twins and What They Tell Us About Human Behavior. New York: Plume.
Smith, Anthony D. 1998. Nationalism and Modernism: a Critical Survey of Recent Theories of Nations and Nationalism. London: Routledge.
Smith, Anthony D. 2000. The Nation in History: Historiographical Debates about Ethnicity and Nationalism. Hanover, NH: University Press of New England.
Smith, Anthony D. 2004. Chosen Peoples: Sacred Sources of National Identity. Oxford: Oxford University Press.
Tesser, Abraham. 1993. 'The importance of heritability in psychological research: the case of attitudes', Psychological Review 93(1): 129-42.
Thomas, Mark G., Michael E. Weale, Abigail L. Jones, Martin Richards, Alice Smith, Nicola Redhead, Antonio Torroni, Rosaria Scozzari, Fiona Gratix, Ayele Tarakegn, James F. Wilson, Christian Capelli, Neil Bradman and David B. Goldstein. 2002. 'Founding mothers of Jewish communities: geographically separated Jewish groups were independently founded by very few female ancestors', American Journal of Human Genetics 70(6): 1411-20.
van den Berghe, Pierre L. 1981. The Ethnic Phenomenon. New York: Elsevier.
van den Berghe, Pierre L. 1983. 'Human inbreeding avoidance', Behavioral and Brain Sciences 6: 91-123.
van den Berghe, Pierre L. 1989. 'Heritable phenotypes and ethnicity', Behavioral and Brain Sciences 12: 544-55.
van den Berghe, Pierre L. 2002. 'Multicultural democracy: can it work?', Nations and Nationalism 8(4): 433-49.
van der Dennen, Johan M. G. 1987. 'Ethnocentrism and in-group/out-group differentiation: a review and interpretation of the literature' in V. Reynolds, V. S. E. Falger and I. Vine (eds.), The Sociobiology of Ethnocentrism: Evolutionary Dimensions of Xenophobia, Discrimination, Racism, and Nationalism. London: Croom Helm, 1-47.
Wilson, Edward O. 1975. Sociobiology: the New Synthesis. Cambridge, MA: Harvard University Press.
Wilson, Edward O. 1998. Consilience: the Unity of Knowledge. New York: Knopf.
Wrangham, R. and D. Peterson. 1996. Demonic Males: Apes and the Origins of Human Violence. Boston, MA: Houghton Mifflin.
Zubaida, Sami. 2004. 'Islam and nationalism: continuities and contradictions', Nations and Nationalism 10(4): 407-20.

From checker at panix.com Mon Dec 5 02:45:24 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 4 Dec 2005 21:45:24 -0500 (EST)
Subject: [Paleopsych] NS: Why we cannot rely on firearm forensics
Message-ID:

Why we cannot rely on firearm forensics
http://www.newscientist.com/article.ns?id=mg18825274.300&print=true
* 23 November 2005
* Robin Mejia

TYRONE JONES is serving a life sentence, in part because of a microscopic particle that Baltimore police found on his left hand. At his trial for murder in 1998 the crime-lab examiner gave evidence that the particle was residue from a gunshot. He claimed Jones must have held or fired a gun shortly before his arrest. Jones denies this and still protests his innocence. His defence team is appealing the conviction, claiming that the science of gunshot residue (GSR) analysis is not as robust as the prosecution claims.

Now, a New Scientist investigation has found that someone who has never fired a gun could be contaminated by someone who has, and that different criminal investigators use contradictory standards. What's more, particles that are supposedly unique to GSR can be produced in other ways. Forensic scientists often testify that finding certain particle types means the suspect handled or fired a weapon.

Janine Arvizu, an independent lab auditor based in New Mexico, reviewed the Baltimore county police department's procedures relating to GSR. Her report concludes: "The BCPD lab routinely reported that gunshot residue collected from a subject's hands 'most probably' arose from proximity to a discharging firearm, despite the fact that comparable levels of gunshot residue were detected in the laboratory's contamination studies." The BCPD did not return calls requesting comment.

Some specialists argue for a more cautious approach. "None of what we do can establish if anybody discharged a firearm," says Ronald Singer, former president of the American Academy of Forensic Sciences and chief criminalist at the Tarrant county medical examiner's office in Fort Worth, Texas. Peter De Forest of John Jay College of Criminal Justice in New York goes further. "I don't think it's a very valuable technique to begin with. It's great chemistry. It's great microscopy. The question is, how did [the particle] get there?"

GSR analysis is commonly used by forensic scientists around the world. In Baltimore alone, it has been used in almost 1000 cases over the past decade.
It is based on identifying combinations of heavy metals in microscopic particles that are formed when the primer in a cartridge ignites. The primer sets off the main charge, which expels the bullet. There is no standardised procedure to test for GSR, but ASTM International, an organisation that develops standards laboratories can look to for guidance, approved a guide for performing the technique in 2001. This states that particles made only of lead, barium and antimony, or of antimony and barium, are "unique" to gunshot residue. The particles are identified using a scanning electron microscope and their composition analysed using energy-dispersive spectrometry.

But recent studies have shown that a non-shooter can become contaminated without going near a firearm. Lubor Fojtášek and Tomáš Kmječ at the Institute of Criminalistics in Prague, Czech Republic, fired test shots in a closed room and attempted to recover particles 2 metres away from the shooter. They detected "unique" particles up to 8 minutes after a shot was fired, suggesting that someone entering the scene after a shooting could have more particles on them than a shooter who runs away immediately (Forensic Science International, vol 153, p 132).

A separate study in 2000 by Debra Kowal and Steven Dowell at the Los Angeles county coroner's department reported that it was also possible to be contaminated by police vehicles. Of 50 samples from the back seats of patrol cars, they found 45 contained particles "consistent" with GSR and four had "highly specific" GSR particles. What's more, they showed that "highly specific" particles could be transferred from the hands of someone who had fired a gun to someone who had not. This doesn't surprise Arvizu. "If I was going to go out and look for gunshot residue, police stations are the places I'd look," she says.

Scientists using the technique are aware of the potential contamination problem, but how they deal with it varies. In Baltimore, for example, the police department crime lab's protocol calls for at least one lead-barium-antimony particle and a few "consistent" particles to be found to call the sample positive for GSR. The FBI is more cautious. Its protocol states: "Because the possibility of secondary transfer exists, at least three unique particles must be detected...in order to report the subject/object/surface 'as having been in an environment of gunshot primer residue'." So a person could be named as a potential shooter in Baltimore, but given the benefit of the doubt by the FBI.

Even worse, it is possible to pick up a so-called "unique" particle from an entirely different source. Industrial tools and fireworks are both capable of producing particles with a similar composition to GSR. And several studies have suggested that car mechanics are particularly at risk of being falsely accused, because some brake linings contain heavy metals and can form GSR-like particles at the temperatures reached during braking. In one recent study, Bruno Cardinetti and colleagues at the Scientific Investigation Unit of the Carabinieri (the Italian police force) in Rome found that composition alone was not enough to tell true GSR particles from particles formed in brake linings (Forensic Science International, vol 143, p 1).
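To make the protocol gap concrete, here is a small Python sketch (mine, not the labs' actual procedures) encoding the two thresholds as the article describes them; reading "a few" consistent particles as three is an assumption.

def bcpd_positive(unique, consistent):
    # Baltimore protocol as described: at least one lead-barium-antimony
    # ("unique") particle plus a few "consistent" particles; "a few" is
    # taken as three here (assumption).
    return unique >= 1 and consistent >= 3

def fbi_positive(unique):
    # FBI protocol as described: at least three unique particles before
    # reporting "an environment of gunshot primer residue".
    return unique >= 3

# One unique particle plus four consistent ones: positive in Baltimore,
# negative under the FBI rule - the same hands, two different verdicts.
print(bcpd_positive(1, 4), fbi_positive(1))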
At an FBI symposium last June, GSR experts discussed ways to improve and standardise the tests. The bureau would not discuss the meeting, but special agent Ann Todd says the FBI's laboratory is preparing a paper for publication that "will make recommendations to the scientific community regarding accepting, conducting and interpreting GSR exams".

Singer maintains that the technique is useful if used carefully. "I think it's important as part of the investigative phase," he says, though not necessarily to be presented in court. But he adds: "There are people who are going to be a bit more, shall we say, enthusiastic. That's where you're going to run into trouble."

From checker at panix.com Mon Dec 5 02:45:31 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 4 Dec 2005 21:45:31 -0500 (EST)
Subject: [Paleopsych] Independent: Revealed: the chemistry of love
Message-ID:

Revealed: the chemistry of love
http://news.independent.co.uk/world/science_technology/article329619.ece

The good news: they've discovered the love chemical inside us all. The bad news: it only lasts a year.

The very source of love has been found. And is it that smouldering look exchanged across a crowded room? Those limpid eyes into which you feel you could gaze for ever? No. It's NGF - short for nerve growth factor - say the unromantic spoilsport scientists who have made the discovery. And now, the really deflating news: its potent, life-enhancing, brain-scrambling effect doesn't last. It subsides within a year of first falling in love - presumably the same period it takes lovers to notice that the object of their affections can't get the lid on the toothpaste.

"We have demonstrated for the first time that circulating levels of NGF are elevated among subjects in love, suggesting an important role for this molecule in the social chemistry of human beings," says Enzo Emanuele of the University of Pavia in Italy.

Dr Emanuele and his researchers compared 58 men and women, aged 18 to 31, who had recently fallen in love, with people in established relationships and with singles. "Potential participants were required to be truly, deeply and madly in love," said the researchers. Only people whose relationships had begun within the previous six months were studied. The "in love" had to be spending at least four hours a day thinking about their partner.

When the levels of blood chemicals were measured, it was found that both men and women who had recently fallen in love showed very high levels of NGF - 227 units, compared with 123 units recorded in those in long-lasting relationships. The study also found that those who reported the most intense feelings had the highest NGF levels. However, when researchers revisited people from the "in love" group who were still in the same relationship more than a year later, the levels of NGF had declined to the same levels as in the established-relationship and single groups.

Love is a neglected area of research and little work has been done on its mechanisms.
Dr Emanuele and his team believe they have conducted the first investigation into the peripheral levels of neurotrophins in people in love. While the role of NGF in falling in love remains unclear, the researchers suggest that some behavioural or psychological features associated with falling in love could be related to the higher chemical levels. "The raised NGF levels when falling in love could be related to specific emotions typically associated with intense early-stage romantic love, such as emotional dependency and euphoria," the researchers say. "The specificity of NGF increase during early-stage love seems to suggest that it could be involved in the formation of novel bonds, whereas it does not appear to play a major role in their maintenance." Rocketing NGF, however, could be a necessary step on the way to an enduring love because NGF is thought to play an important part in the release of another chemical which plays a pivotal role in social bonding. In a report about to be published in the journal Psychoneuroendocrinology, the research team ends with a justification for more love research that seems quintessentially Italian: "Given the complexity of the sentiment of romantic love, and its capacity to exhilarate, arouse, disturb, and influence so profoundly our behaviour, further investigations on the neurochemistry and neuroendocrinology of this unique emotional state are warranted." From checker at panix.com Mon Dec 5 02:45:36 2005 From: checker at panix.com (Premise Checker) Date: Sun, 4 Dec 2005 21:45:36 -0500 (EST) Subject: [Paleopsych] NYTBR: The Capitalist Manifesto Message-ID: The Capitalist Manifesto http://www.nytimes.com/2005/11/27/books/review/27easterbrook.html THE MORAL CONSEQUENCES OF ECONOMIC GROWTH By Benjamin M. Friedman. 570 pp. Alfred A. Knopf. $35. Review by GREGG EASTERBROOK ECONOMIC growth has gotten a bad name in recent decades - seen in many quarters as a cause of resource depletion, stress and sprawl, and as an excuse for pro-business policies that mainly benefit plutocrats. Some have described growth as a false god: after all, the spending caused by car crashes and lawsuits increases the gross domestic product. One nonprofit organization, Redefining Progress, proposes tossing out growth as the first economic yardstick and substituting a "Genuine Progress Indicator" that, among other things, weighs volunteer work as well as the output of goods and services. By this group's measure, American society peaked in 1976 and has been declining ever since. Others think ending the fascination with economic growth would make Western life less materialistic and more fulfilling. Modern families "work themselves to exhaustion to pay for stuff that sits around not being used," Thomas Naylor, a professor emeritus of economics at Duke University, has written. If economic growth were no longer the goal, there would be less anxiety and more leisurely meals. But would there be more social justice? No, says Benjamin Friedman, a professor of economics at Harvard University, in "The Moral Consequences of Economic Growth." Friedman argues that economic growth is essential to "greater opportunity, tolerance of diversity, social mobility, commitment to fairness and dedication to democracy." During times of expansion, he writes, nations tend to liberalize - increasing rights, reducing restrictions, expanding benefits for the needy. During times of stagnation, they veer toward authoritarianism.
Economic growth not only raises living standards and makes liberal social policies possible, it causes people to be optimistic about the future, which improves human happiness. "It is simply not true that moral considerations argue wholly against economic growth," Friedman contends. Instead, moral considerations argue that large-scale growth must continue at least for several generations, both in the West and the developing world. Each American, the World Wildlife Fund calculates, demands more than four times as much of the earth as the global average for all men and women, most of this demand being resource consumption. Some think such figures mean American resource consumption must go down; to Friedman's thinking, any reduction would only harm the rest of the world by slowing global growth. What the statistic actually tells you, he would say, is that overall global resource consumption must go up, up, up - to bring reasonable equality of living standards to the developing world and to encourage the liberalization and increased human rights that accompany economic expansion. If by the middle of the 21st century everyone on earth were to realize the living standard of present-day Portugal (taking into account expected population expansion), Friedman calculates, global economic output must quadruple. That's a lot of growth. "The Moral Consequences of Economic Growth" is an impressive work: commanding, insistent and meticulously researched. Much of it is devoted to showing that in the last two centuries, periods of growth have in most nations coincided with progress toward fairness, social mobility, openness and other desirable goals, while periods of stagnation have coincided with retreat from progressive goals. These sections sometimes have a history-lesson quality, discoursing on period novels, music and other tangential matters. And sometimes the history lesson gets out of hand, as when the author pauses to inform readers that the Federal Republic of Germany was commonly known as West Germany. More important, Friedman's attempt to argue that there is something close to an inevitable link between economic growth and social advancement is not entirely successful, a troublesome point since such a link is essential to his thesis. For example, Friedman contends that economic growth aided American, French and English social reforms of the second half of the 19th century. Probably, but there was also a recession in the United States beginning in 1893, yet pressure for liberal reforms continued: the suffrage, good-government and social-gospel movements strengthened during that time. It was in the midst of a depression, in 1935, that Social Security, a huge progressive leap, was enacted. Economic growth has often been weak in the United States over the last three decades, yet in this period American society has become significantly more open and tolerant - discrimination appears at an all-time low. On the flip side, the 20's were the heyday of the Klan in the United States, though the "roaring" economy of the decade was growing briskly. None of this disproves Friedman's hypothesis, only clouds its horizon. Surely liberalization works better where there is growth, while growth works better where there is liberalization - as China is learning.
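[Friedman's quadrupling figure is easy to sanity-check with round numbers. The inputs below are my own rough, illustrative circa-2005 figures (PPP), not Friedman's; with slightly different inputs the multiple comes out at four, and the point is only that the order of magnitude holds.]

# Back-of-envelope check on the "output must quadruple" claim.
# All inputs are rough illustrative figures, not Friedman's own.
world_gdp_per_head = 9_000        # dollars, very rough world average
portugal_gdp_per_head = 19_000    # dollars, rough figure for Portugal
population_now = 6.4e9
population_2050 = 9.0e9           # a common mid-range projection

multiple = (portugal_gdp_per_head / world_gdp_per_head) \
           * (population_2050 / population_now)
print(f"world output would have to grow {multiple:.1f}-fold")  # about 3.0x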
But the relationship between the two forces may always be fuzzy; the modern era might have seen movement toward greater personal freedom and social fairness regardless of whether high-output industrial economies replaced low-growth agrarian systems. Repressive forces, from skinheads to Nazis and Maoists, may spring more from evil in the human psyche than from any economic indicator. Friedman's thesis is now being tested in China, home of the world's most impressive economic growth. If he's right, China will rapidly become more open, gentle and democratic. Let's hope he's right. Though "The Moral Consequences of Economic Growth" may not quite succeed in showing an iron law of growth and liberalization, Friedman is surely correct when he contends that economic expansion must remain the world's goal, at least for the next few generations. Growth, he notes, has already placed mankind on a course toward the elimination of destitution. Despite the popular misconception of worsening developing-world misery, the fraction of people in poverty is in steady decline. Thirty years ago 20 percent of the planet lived on $1 or less a day; today, even adjusting for inflation, only 5 percent does, despite a much larger global population. Probably one reason democracy is taking hold is that living standards are rising, putting men and women in a position to demand liberty. And with democracy spreading and rising wages giving ever more people a stake in the global economic system, it could be expected that war would decline. It has. Even taking Iraq into account, a study by the Center for International Development and Conflict Management, at the University of Maryland, found that the extent and intensity of combat in the world is only about half what it was 15 years ago. Friedman concludes his book by turning to psychology, which shows that people's assumptions about whether their lives will improve are at least as important as whether their lives are good in the present. Right now, American living standards and household income are the highest they have ever been; but because middle-class income has been stagnant for more than two decades, while the wealthy hoard society's gains, many Americans have negative expectations. "America's greatest need today is to restore the reality. . . that our people are moving ahead," Friedman writes. How? He recommends lower government spending (freeing money for private investment), repealing upper-income tax cuts (to shrink the federal deficit), higher Social Security retirement ages, choice-based Medicare and big improvements in the educational system (educated workers are more productive, which accelerates growth). Friedman doesn't worry that we will run out of petroleum, trees or living space. What he does worry about is that we will run out of growth. Gregg Easterbrook is a visiting fellow at the Brookings Institution, a contributing editor of The Atlantic Monthly and the author, most recently, of "The Progress Paradox." From checker at panix.com Mon Dec 5 02:45:43 2005 From: checker at panix.com (Premise Checker) Date: Sun, 4 Dec 2005 21:45:43 -0500 (EST) Subject: [Paleopsych] ABC (au): Ancient Germans weren't so fair, 2004.7.16 Message-ID: Ancient Germans weren't so fair, 2004.7.16 http://www.abc.net.au/science/news/stories/s1154815.htm] [I found this when looking for articles on red hair for the meme on The Maureen Dowd Theory of Western Civilization I sent yesterday. 
I'm not expert enough to comment on this, but the Maureen Dowd theory might suggest that light hair and eyes, in the proportions of, say, 1900, could be recent indeed.] Anna Salleh in Brisbane Friday, 16 July 2004 Researchers may be able to make more accurate reconstructions of what ancient humans looked like with the first ever use of ancient DNA to determine hair and skin colour from skeletal remains. The research was presented today at an [4]international ancient DNA conference in Brisbane, Australia, by German anthropologist Dr Diane Schmidt of the [5]University of Göttingen. She said her research may also help to identify modern day murderers and their victims. "Three thousand years ago, nobody was doing painting and there was no photography. We do not know what people looked like," Schmidt told ABC Science Online. She said most images in museums and books were derived from comparisons with living people from the same regions. "For example, when we make a reconstruction of people from Africa we think that they had dark skin or dark hair," she said. "But there's no real scientific information. It's just a guess. It's mostly imagination." She said this had meant, for example, that the reconstruction of Neanderthals had changed over time. "In the 1920s, the Neanderthals were reconstructed as wild people with dark hair and dumb, not really clever," she said. "Today, with the same fossil record, with the same bones and no other information - just a change in ideology - you see reconstructions of people with blue eyes and quite light skin colour, looking intelligent and using tools. "Most of the reconstructions you see in museums are a thing of the imagination of the reconstructor. Our goal is to make this reconstruction less subjective and give them an objective basis with scientific data." Genetic markers for hair colour In research for her recently completed PhD, Schmidt built on research from the fields of dermatology and skin cancer that have found genetic markers for traits such as skin and hair colour in modern humans. In particular, Schmidt relied on the fact that different mutations (known as single nucleotide polymorphisms, or SNPs) in the melanocortin receptor 1 gene are responsible for skin and hair colour. Redhead: DNA analysis showed this skull belonged to someone with red hair (Image: Susanne Hummel) "There is a set of SNPs that tells you that a person was a redhead and a different set of markers tell you they were fair skinned." She extracted DNA from ancient human bones up to 3,000 years old from three different locations in Germany and looked for these SNPs. Her findings suggest that red hair and fair skin were very uncommon among ancient Germans. Out of a total of 26 people analysed, Schmidt found only one person with red hair and fair skin, a man from the Middle Ages. All the other people had more UV-tolerant skin that tans easily. She said she was excited when she "coloured in" the faces that once covered the skulls, and had even developed "a kind of a personal relationship" with one of them. "It's not so anonymous," she said. "I think this is the reason why people in museums can do reconstruction because our ancestors are not so anonymous any more; they have a face you can look into." Unfortunately the genetic markers Schmidt used could not distinguish which of the ancient humans had blond versus black hair, and she could not determine eye colour. But, she said she was confident that this will be possible in a few years.
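[In software terms this kind of inference is just a lookup: genotype a handful of melanocortin 1 receptor variants and map the combination to a pigmentation call. A minimal sketch follows. R151C, R160W and D294H are real red-hair-associated MC1R variants, but the two-allele call rule and everything else here is a simplification of my own, not Schmidt's actual panel or protocol.]

# Toy MC1R genotype-to-phenotype lookup; a simplification, not the
# study's real method.
RED_HAIR_VARIANTS = {"R151C", "R160W", "D294H"}

def pigmentation_call(alleles):
    """alleles: list of MC1R variant names detected in a sample, one entry
    per chromosome copy (so a homozygote appears twice)."""
    hits = sum(a in RED_HAIR_VARIANTS for a in alleles)
    if hits >= 2:
        return "red hair, fair skin"
    if hits == 1:
        return "carrier; probably not red-haired"
    return "UV-tolerant skin, non-red hair"

print(pigmentation_call(["R151C", "R151C"]))  # the one medieval redhead
print(pigmentation_call([]))                  # the other 25 individuals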
Schmidt said that such research could also be used to help build up identikit pictures to help identify skeletons or criminals. The research has been submitted for publication. References 4. http://www.ansoc.uq.edu.au/index.html?page=15259 5. http://www.uni-goettingen.de/?lang=en&PHPSESSID=73500f612e2d0c6d193256491f49401e From shovland at mindspring.com Mon Dec 5 05:13:13 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 4 Dec 2005 21:13:13 -0800 Subject: [Paleopsych] Neurosphere- book of interest Message-ID: Neurosphere: The Convergence of Evolution, Group Mind, and the Internet (Paperback) by Donald P. Dulchinos Editorial Reviews From Publishers Weekly Dulchinos, a manager in the cable television industry and longtime participant in the WELL, one of the first online communities, sees communication technology leading humanity toward global consciousness. Questions of whether the Internet might constitute a "group mind" have been newsgroup fodder for years, supplying a range of online material excerpted here. Dulchinos is also inspired by Teilhard de Chardin, whose concept of "noosphere" has been reworked into "neurosphere" to represent "a mature religious view commensurate with the evolutionary stage at which we find ourselves." Rather than developing a single line of argument, the text presents a collage of metaphysical speculation, punctuated with a touch of whimsy: "We may very well be on the verge of a consistent and simultaneous human experience... the ability to act with a single will. Hitler, among others, exploited this. Consider, on the other hand, that perhaps apparently benign 'personalities' like Madonna and Barney the Dinosaur likewise wield a perverse influence on large populations." Yet Dulchinos maintains the courage of his convictions, hoping to convince others "that each of them, even the most miserable and destitute, is an equally important part of this massively parallel, loosely affiliated, but still cohesive 6-billion-parts-strong Being. All of us together, we are God." From aandrews at hvc.rr.com Mon Dec 5 11:34:28 2005 From: aandrews at hvc.rr.com (Alice Andrews) Date: Mon, 5 Dec 2005 06:34:28 -0500 Subject: [Paleopsych] Re: ginger gene/Neanderthal References: Message-ID: <009f01c5f98f$da389cf0$6401a8c0@callastudios> Hi Frank-- For your Dowd (but not dowdy) meme: http://www.aulis.com/news13.htm all the best! -alice ----- Original Message ----- From: "Premise Checker" To: Sent: Sunday, December 04, 2005 9:45 PM Subject: [Paleopsych] ABC (au): Ancient Germans weren't so fair, 2004.7.16 -------------------------------------------------------------------------------- > _______________________________________________ > paleopsych mailing list > paleopsych at paleopsych.org > http://lists.paleopsych.org/mailman/listinfo/paleopsych > From checker at panix.com Tue Dec 6 23:43:54 2005 From: checker at panix.com (Premise Checker) Date: Tue, 6 Dec 2005 18:43:54 -0500 (EST) Subject: [Paleopsych] Thought for Today Message-ID: The immovable truths that are there [in the Eroica Symphony]--and there are truths in the arts as well as in theology--became truths when Beethoven formulated them. They did not exist before. They cannot perish hereafter. --H.L. Mencken, "Brahms," Baltimore Evening Sun, 1926.8.2 From checker at panix.com Wed Dec 7 01:24:42 2005 From: checker at panix.com (Premise Checker) Date: Tue, 6 Dec 2005 20:24:42 -0500 (EST) Subject: [Paleopsych] ABC: Ancient hair gives up its DNA secrets Message-ID: News in Science - Ancient hair gives up its DNA secrets - 22/06/2004 http://www.abc.net.au/science/news/stories/s1135104.htm] [This and several following are just articles that the one I just sent linked to. They should also be of interest, though I can't comment on them.] Anna Salleh ABC Science Online Tuesday, 22 June 2004 Analysing DNA from ancient strands of hair is a new tool for learning about the past, molecular archaeologists say, including whether hair samples belonged to Sir Isaac Newton. Dr Tom Gilbert of the [4]University of Arizona led an international team that reported its work in the latest issue of the journal [5]Current Biology. The researchers said they had extracted and sequenced mitochondrial DNA from 12 hair samples, 60 to 64,800 years old, from ancient bison, horses and humans. The researchers said their results confirmed that hair samples previously thought to belong to Sir Isaac Newton were not his, a finding that backed previous isotopic analysis. But the focus of their research was to explore the potential of extracting ancient DNA from hair samples. The most common samples used for ancient DNA analyses are taken from bone, teeth and mummified tissue. Until now, when the hair root hadn't been available for analysis, scientists had thought analysing the hair shaft was of relatively little use as it contained so little DNA. But isolated strands of hair are often the only clues to human habitation in ancient times. Now Gilbert's team said it had developed a method to extract and sequence ancient DNA from hair shafts. The researchers said the ancient DNA in hair was much less degraded than DNA from other tissues. They argued this was because it was protected from water by the hair's hydrophobic keratin, the protein polymer that gives hair its structure. The team also found that hair DNA had a low level of contamination and argued that keratin may protect the DNA from contamination with modern DNA sequences, like DNA from human sweat. The scientists also said that analysing hair DNA, and potentially DNA from other keratin-containing samples like ancient feathers and scales, would minimise the destruction of valuable archaeological samples caused by sampling teeth or bones. Hairy development "It's a nice development," said Dr Tom Loy, an Australian expert in ancient DNA from the [6]University of Queensland.
He said that molecular archaeologists had generally ignored extracting DNA from hair. "[But] on the basis of their article it looks as if it's quite, quite feasible," he told ABC Science Online. He said the method may be useful in shedding light on the origin of strands of ancient hair discovered a decade ago at the Pendejo Cave site in New Mexico. "It would be very important to find out whose hair it was," said Loy, who said previous attempts had been unsuccessful. He was enthusiastic about the idea of being able to extract ancient DNA from feathers. "Often times feathers are found in caves and in some cases as residues on artefacts," he said. But Loy was sceptical about using the method to extract ancient DNA from scales and was not convinced by the argument that keratin protected ancient DNA from contamination. "People still don't fully understand how things get contaminated," he said. References 4. http://www.arizona.edu/ 5. http://www.current-biology.com/ 6. http://www.uq.edu.au/ From checker at panix.com Wed Dec 7 01:24:48 2005 From: checker at panix.com (Premise Checker) Date: Tue, 6 Dec 2005 20:24:48 -0500 (EST) Subject: [Paleopsych] ABC: Ancient DNA may be misleading scientists Message-ID: News in Science - Ancient DNA may be misleading scientists - 18/02/2003 http://www.abc.net.au/science/news/stories/s786146.htm] Tuesday, 18 February 2003 Ancient DNA in skeletons has a tendency to show damage in a particular region, resulting in misleading genetic data and mistaken conclusions about the origin of the skeleton, British scientists said. A group of researchers at the [4]Henry Wellcome Ancient Biomolecules Centre of the University of Oxford, in Britain, made the finding while studying Viking specimens. They found that about half of the specimens had DNA that suggested they were of Middle Eastern origin. But more detailed analysis revealed that many of the genetic sequences in the double helix molecule, which carries the genetic information of every individual, were damaged at a key base that separates European sequences from Middle Eastern genetic types - damage which made the skeletons appear to have originated in the Levant. The results are published in the February 2003 issue of the [5]American Journal of Human Genetics. Damage events appear to be concentrated in specific 'hotspots', indicating that a high proportion of DNA molecules can be modified at the same point. These hotspots appear to be in positions that also differ between different human groups. In other words, the DNA damage discovered affects the same genetic positions as evolutionary change. "Now that this phenomenon has been recognised, it is possible to survey the ancient sequences for damage more accurately, and determine the correct original genetic type - opening the way for more reliable future studies," said Professor Alan Cooper, director of the centre. Cooper hopes the finding may have implications for future research. "It also appears that we can use damage caused after death to examine how DNA damage occurs during life - a completely unanticipated, and somewhat ironic result," he said. "Potentially this allows us to get uniquely separate views of the two major evolutionary processes, mutation and selection." Danny Kingsley - ABC Science Online References 4. http://abc.zoo.ox.ac.uk/ 5.
http://www.journals.uchicago.edu/AJHG/home.html From checker at panix.com Wed Dec 7 01:24:52 2005 From: checker at panix.com (Premise Checker) Date: Tue, 6 Dec 2005 20:24:52 -0500 (EST) Subject: [Paleopsych] ABC (au): A faster evolutionary clock? Message-ID: A faster evolutionary clock? http://www.abc.net.au/science/news/stories/s510398.htm] [Analogy: when carbon 14 dating was first employed, it put Stonehenge later than the Egyptian pyramids, though archeologists knew in their hearts that this couldn't be true. When the Unchecked Premise - that the cosmic rays which create carbon 14 come in at a steady rate - was Checked, new tables were calibrated by cross-dating from overlapping sets of tree rings. The result was that Stonehenge turned out to be earlier than the pyramids after all. [So maybe these out-of-Africa theories will get considerably revised and theories of raciation before speciation into homo sapiens, like Carleton Coon's, will get revisited. Stay tuned.] Friday, 22 March 2002 A discovery by scientists studying ancient DNA from Antarctic penguins may change our understanding of how fast the tree of life grew. New Zealand scientist Dr David M. Lambert and colleagues report in this week's [4]Science on a new method of measuring the rate of DNA evolution. They believe their method of using numerous samples of ancient DNA is much more accurate than the current method of "calibrating" the "molecular clock". The team studied over 20 colonies of Adélie penguins whose home is the ice-free areas of Antarctica. "This is the best source of ancient DNA found yet," said Dr Lambert, of the Institute of Molecular BioSciences at Massey University in Palmerston North. By taking blood samples, Dr Lambert and colleagues were able to analyse a particular segment of genetic material in the mitochondria of the penguins and find two different groups whose DNA differed from each other by 8%. The team then set out to find when the two lineages diverged. Conveniently, right beneath the very feet of the living penguins lay the bones of their long gone ancestors - dating back to 7,000 years. The researchers analysed equivalent DNA segments from carbon-dated ancestral penguin bones of nearly 100 different ages ranging from 88 years to around 7,000 years old. By plotting the degree of change in the DNA over time, they estimated a rate of evolution equivalent to 96% per million years. This meant the two groups of penguins diverged 60,000 years ago, in the middle of the last ice age. "This rate is 2 to 7 times faster than previous estimates for this particular segment of mitochondrial DNA," said Dr Lambert. "According to the standard rate of evolution, the penguins diverged 300,000 years ago, which is more than two ice ages ago." The conventional method of calibrating the molecular clock involves measuring the percentage difference between the DNA of two living creatures and comparing it to DNA from a fossil counterpart of one particular age. "This only gives you one data point - a datum - not a distribution of points," said Dr Lambert. "It is not statistically reliable, whereas in our method there is greater confidence in the numbers arrived at." "We believe we've got a more accurate way of measuring the rate of evolution," he said. The findings may or may not have implications for other species. "Maybe the Adélie penguins have evolved particularly fast," speculates Dr Lambert. "We won't know until we apply the method to other species."
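[The arithmetic behind the headline is the standard molecular-clock relation: time since divergence is sequence divergence divided by rate. A toy version with the article's rounded figures; taking the quoted rate at face value as the rate at which two lineages drift apart, it reproduces the order of magnitude and the factor-of-five contrast, not the paper's exact estimates, which rest on statistical corrections the article omits.]

# Molecular-clock arithmetic with the article's rounded numbers.
divergence = 0.08         # the two penguin lineages differ by 8%
rate_new = 0.96           # 96% per million years (this study)
rate_old = rate_new / 5   # "2 to 7 times faster than previous estimates"

t_new = divergence / rate_new   # million years
t_old = divergence / rate_old
print(f"fast clock: ~{t_new * 1e6:,.0f} years")  # ~83,000 (article: 60,000)
print(f"slow clock: ~{t_old * 1e6:,.0f} years")  # ~417,000 (article: 300,000)
# Same divergence, faster clock, proportionally younger date.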
Dr Lambert and team now intend to test kiwis, Antarctic fish, the Tuatara (a NZ reptile), and even humans. Penguin colony: Cape Adare in Antarctica is the largest colony of Adélie penguins (Pic: J. Macdonald) However, it won't necessarily be easy since the conditions required for such an approach are quite particular. The penguins in Antarctica were a perfect opportunity because they provided a living population at the same location as dead ancestors, the location was undisturbed by human influence, and the environment was optimal for preserving DNA. "Antarctica is not only cold, but it's drier than a desert," said Dr Lambert. "It's not surprising it was the best source of ancient DNA." "It'll be harder to do it for the other species but we've learnt a lot, and we're going to give it our best shot," he said. If the new faster rate of evolution proves correct for other organisms this will change our understanding of when different organisms evolved, how fast the tree of life grew and even how different animals responded to environmental change. Anna Salleh - ABC Science Online From checker at panix.com Wed Dec 7 01:24:57 2005 From: checker at panix.com (Premise Checker) Date: Tue, 6 Dec 2005 20:24:57 -0500 (EST) Subject: [Paleopsych] ABC (au): "Out of Africa" in Asia Message-ID: "Out of Africa" in Asia http://www.abc.net.au/science/news/stories/s293857.htm [Now this is earlier than the piece about changing the rate of mutations. It may be stale but perhaps worth revisiting. I append two other articles.] Friday, 11 May 2001 The origins of man debate continues The hotly-debated notion that modern humans arose from Africa and replaced all other populations of early humans across the globe has been bolstered by new research. A genetic study led by Yuehai Ke from Fudan University in China, of more than 12,000 men from 163 populations in East Asia strongly suggests the so-called "Out of Africa" theory of modern human origins is correct, according to a report in today's [18]Science. The "Out of Africa" model states that anatomically-modern humans originated in Africa about 100,000 years ago and then spread out to other parts of the world where they completely replaced local archaic populations. Among the evidence for this notion are recent DNA tests which ruled out the contribution of primitive humans or Neanderthals to modern Europeans. But others argue that the distribution and morphology of ancient human fossils found in China and other regions of East Asia support a competing theory - that modern humans evolved independently in many regions of the world. Now Yuehai Ke and his team from China, Indonesia, the United States and Britain, tested the Y chromosomes of 12,127 East Asian men for the presence of three specific mutations - types of genetic markers. The three mutations are derived from a single earlier mutation in African humans. The team found every individual they tested carried one of the three later mutations and no ancient non-African DNA was found. They therefore rule out even a "minimal contribution" from the primitive East Asian humans to their anatomically-modern counterparts. Dr Colin Groves, from the anthropology department at the Australian National University, described the new data from such a large sample of men as "absolutely decisive". "I'm a supporter [of the Out of Africa model] but I can't for the life of me think how any multi-regional model could fit this," he told ABC Science Online.
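[The power of a sample of 12,127 is worth spelling out. If archaic East Asians had contributed even a small fraction of surviving Y chromosomes, the chance of finding none at all in that many men would be vanishingly small. A one-line check; the 0.1 per cent contribution figure is mine, purely for illustration:]

# Chance of seeing zero non-African Y lineages in 12,127 men if such
# lineages actually survived at a given (hypothetical) frequency.
freq = 0.001       # suppose 0.1% of Y chromosomes were archaic survivals
n = 12_127
p_zero = (1 - freq) ** n
print(f"P(none seen in {n} men) = {p_zero:.1e}")   # about 5.4e-06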
The new data analysing male genes was "telling exactly the same story" as previously-reported data analysing genes in cell structures called mitochondria, passed from one generation to the next via females, he added. Dr Groves' ANU colleague and well-known opponent of the Out of Africa model, Dr Alan Thorne, was not available to comment on the new research. Further controversy on human origins Tuesday, 16 January 2001 Mungo man - analysis of DNA from this fossil announced last week reignited a controversy over the origins of modern humans. New research supports the theory that the ancestors of modern humans came from many different regions of the world, not just a single area -- but critics remain far from convinced. The study, published in the current issue of [18]Science by University of Michigan anthropologist Milford H. Wolpoff and colleagues, is the second study in a week to fuel the debate on the origin of the human species. Australian researchers set off a storm last week when they announced that their analysis of mitochondrial DNA from 'Mungo Man' also supported the so-called 'regional continuity theory'. Their study is due to be published this month in the Proceedings of the National Academy of Sciences. The study presented in this week's Science comes to the same conclusion following a comparison of early modern and archaic fossil skulls from around the world. "Ancient humans shared genes and behaviours across wide regions of the world, and were not rendered extinct by one 'lucky group' that later evolved into us," says Wolpoff. "The fossils clearly show that more than one ancient group survived and thrived." The researchers analysed the similarities and differences between fossil skulls from Australia and Central Europe, and peripheral regions far from Africa, where according to the dominant "Out of Africa" theory -- also known as the "Eve" or "Replacement" theory -- modern humans evolved. "Basically we wanted to see if this comparison could disprove the theory of multiple ancestry for the early European and Australian moderns," said Wolpoff. The researchers said they found that the most recent European and Australian skulls shared characteristics with the ancient African and Near Eastern population and with the older fossils from within their own regions. They also found there were many more similarities than could be explained by chance alone -- a finding which amounted to "a smoking gun" for the regional continuity theory. The findings are the latest evidence in the continuing scientific controversy about the origin of modern humans (Homo sapiens). Most scientists believe that all living humans can trace their ancestry exclusively to a small group of ancient humans, probably Africans, living around 100,000 years ago. If this theory was true it would mean that all other early human groups, whose fossils date from this time back to almost two million years ago, must have become extinct, possibly wiped out in a prehistoric genetic holocaust. Other scientists, including Wolpoff and Australian National University anthropologist Dr Alan Thorne, maintain that there is little evidence that a small group originating in a single geographic region replaced the entire population of early humans. "In asking the question a different way, and directly addressing the fossils, this study provides compelling evidence that replacement is the wrong explanation," says Wolpoff. "Instead, the findings support the theory of multi-regional evolution. 
Modern humans are the present manifestation of an older worldwide species with populations connected by gene flow and the exchange of ideas." Palaeoanthropologist Associate Professor Peter Brown of the University of New England disputes the findings. "I'm amazed that Science has published this article. If it had been submitted to me by a third year student I would have failed them," he told ABC Science Online. Professor Brown said that Wolpoff and colleagues had chosen an Australian fossil that was unrepresentative of the skulls of that time. "It's pathologically different. It has a skull as thick as a bike helmet," he said. "They've just chosen a fossil that suits their theory". He said that the authors had also ignored literature that was contrary to their theory. Dr Alan Thorne, however, insists that the evidence is on his and Wolpoff's side. "What we've found is mitochondrial DNA in an Australian fossil that is much more primitive than anything that's been found in Africa," he said. "And there is no archaeological or physical evidence to support the idea that Aboriginal Australians originated from Africa." Anna Salleh - ABC Science Online News in Science - Another blow for Out of Africa? - 23/02/2001 http://www.abc.net.au/science/news/stories/s250390.htm Another blow for Out of Africa? Friday, 23 February 2001 Nanjing man Another Australian study - this time of Chinese fossils - has weighed into the controversy over the origins of modern humans, supporting the theory they evolved in many different regions of the world. Dr Jian-xin Zhao and Professor Ken Collerson from the [18]University of Queensland (UQ) have dated ancient human fossils in China as being at least 620,000 years old - much older than previously thought. The researchers from the Earth Sciences Department say the findings support the theory that Asian populations evolved directly from Asian Homo Erectus, rather than evolving from populations out of Africa. A major argument against this regional continuity theory, Dr Zhao told ABC Science Online, is that the age of Homo Erectus fossils found in Asia did not allow enough time for Homo Sapiens to evolve. "This new date gives plenty of time for Homo Erectus to evolve into Homo Sapiens," he says. The researchers measured the decay of uranium into thorium in samples of calcite flowstone directly above recently discovered Homo Erectus fossils called Nanjing Man, in the Tangshan Cave 250 kilometres northwest of Shanghai. In the past this method has been used to date teeth and bones; however, the researchers say applying it to calcite flowstone samples has provided much more accurate dates and challenged the reliability of using fossil teeth for the purposes of dating. "Age estimates derived from teeth or bones depend very much on how and when uranium was taken up during the fossilisation process, and are often younger than the true ages," Professor Collerson says.
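[The flowstone dates rest on uranium-series arithmetic. In the simplest closed-system case - no thorium at deposition, uranium isotopes in equilibrium - the thorium-230 to uranium-238 activity ratio grows as 1 - exp(-lambda*t), so the age falls straight out of a logarithm. A stripped-down sketch; real U-Th dating corrects for detrital thorium and excess uranium-234, which this ignores, and 620,000 years is close to the practical limit of the method, which is part of why the purity of the flowstone mattered.]

import math

# Simplified U-Th age: activity(230Th)/activity(238U) = 1 - exp(-lam * t),
# assuming a closed system, no initial 230Th and 234U/238U equilibrium.
TH230_HALF_LIFE = 75_690.0                 # years, approximate
lam = math.log(2) / TH230_HALF_LIFE

def uth_age(activity_ratio):
    """Age in years from a measured 230Th/238U activity ratio (< 1)."""
    return -math.log(1.0 - activity_ratio) / lam

# A ratio within about 0.3% of equilibrium corresponds to ~620,000 years:
print(f"{uth_age(0.9966):,.0f} years")     # about 620,000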
"In contrast, the University of Queensland dates were derived from dense and pure crystalline flowstone that was closed to uranium and thorium mobility and are therefore more reliable." The findings, developed in collaboration with Dr Kai Hu and Dr Han-kui Xu from Nanjing, were published recently in the international journal Geology. Anna Salleh - ABC Science Online From checker at panix.com Wed Dec 7 01:25:02 2005 From: checker at panix.com (Premise Checker) Date: Tue, 6 Dec 2005 20:25:02 -0500 (EST) Subject: [Paleopsych] CHE: Duping the Brain Into Healing the Body Message-ID: Duping the Brain Into Healing the Body The Chronicle of Higher Education, 5.12.2 http://chronicle.com/weekly/v52/i15/15a01201.htm [I read somewhere that placebos work on dogs, a surprising result, since dogs are immune from the sort of verbal propaganda humans are subject to. One way it could work is this: a dog is given a medicine that has actual medicinal effects. But most medicines don't directly go to whatever part of the body is causing the difficulty. Rather the medicine triggers off a chain of brain and nerve events. If this has happened a good many times, the grooves down the nerve chain (so to speak: I like something more medically-correct) repeatedly the nerve chains deepen. After a while, a smaller dose or even no dose could trigger off the chain and thus work on the dog. I recall a Russian guy named Pavlov who did something like this. [Another thing: I read in _Science_ years and years ago that a major anomaly had been discovered, namely that the placebo effect tends to be proportional to the medicinal effect of the actual medicine, whereas one would think the two would be random with respect to each other. I failed to follow up on the controversy. Can anyone cue me in?] Researchers analyze the mysterious placebo effect By LILA GUTERMAN Washington The placebo effect -- it's all in your head. When you swallow sugar pills instead of powerful medicine and your symptoms disappear, it's all thanks to the power of your mind. How does the brain perform this parlor trick? In the past, scientists suspected that any apparent health benefits from placebos had little more basis in biology than did sleight of hand. In studies of new drugs, patients might tell their doctors they feel better because they think that is what their doctor wants to hear. Or perhaps they would have recovered without any treatment, real or sham. But researchers now know that the placebo effect is real and grounded in the physiology of the brain. Using techniques to peer inside the skull, they have begun to find regions of the brain that respond to placebos, and they have even watched a single nerve cell react to a sham medicine. Those studies show that placebos affect the brain in much the same way that actual treatments do, researchers reported here in November at the annual meeting of the Society for Neuroscience. In other words, the power to treat several troublesome disorders may be wrapped up in the three-pound spongy lump of tissue protected by the skull. The research points to the power of positive thinking -- even at the unconscious level. When the brain expects relief, it can manufacture some on its own. "The things you can change with a positive outlook are profound," says Tor D. Wager, an assistant professor of psychology at Columbia University. "They are deeper physiologically than we have previously appreciated." 
None of the researchers who study the mechanism of the placebo effect suggest that doctors should prescribe dummy pills instead of real medicine. But they say that the study of the placebo effect could change the way scientists perform clinical trials of new treatments and could even alter how we understand and treat pain, Parkinson's disease, and depression. By studying placebos, says Christian S. Stohler, dean of the school of dentistry at the University of Maryland at Baltimore, "you crack into disease mechanisms that might be very important for improving the lives of many pain patients." Fooling the Patient Researchers gained their first glimpse of the causes of the placebo effect in the late 1970s, when scientists discovered that under certain conditions they could cancel the effect. In a study of pain relievers, a drug called naloxone prevented patients on placebo pills from experiencing the usual benefit. Since naloxone blocks the action of painkillers called opioids, researchers figured that placebos must stimulate the brain to produce its own opioids. In the 1990s, another set of experiments provided more evidence that the placebo effect was a real physiological phenomenon. Fabrizio Benedetti, a professor of neuroscience at the University of Turin, and others studied the effect without using a placebo. Dr. Benedetti judged that a placebo's effect comes from the patient's psychosocial context: talking to a doctor, observing the treatment, and expecting improved health. So he took away that context by giving study participants real drugs, but on the sly. Patients were told that they would receive an active drug, a placebo, or nothing through intravenous needles, and consented to get any of the different treatments without knowing when any treatment would be supplied. The scientists compared the results when a doctor overtly gave the patient the drug and when a computer supplied the drug without the patient's knowledge. Bedside manner, it turned out, made a difference: Patients required far more painkiller if they unknowingly received the medicine from a computer. When the doctor gives a drug in full view, Dr. Benedetti said at the neuroscience conference, "there is an additive effect of the drug and of the placebo, the psychosocial component." He suggests that his experimental setup could be extended to become part of the testing procedure for new drugs. Clinical trials could then compare covert and overt administration, rather than comparing the active drug to a placebo. That way, none of the volunteers would go through the trouble of participating without receiving the real experimental treatment, and researchers could still demonstrate that the drug was effective by showing that it reduced symptoms when given covertly. Peering at the Brain With the recent advent of modern brain-scanning techniques, scientists gained the ability to look directly at the regions of the brain involved in the placebo effect. In 2002 researchers in Finland and Sweden published in Science the first brain images of the effect, using a technique called positron emission tomography, better known as PET. The researchers pressed a hot surface onto the hands of nine male volunteers, and then a doctor gave them injections of either a painkiller or a placebo. When the researchers performed PET scans on the men, both the drug and the dummy induced high blood flow -- indicating increased brain activity -- in an area of the brain called the rostral anterior cingulate cortex. 
That area plays a key role in the painkilling effects of opioid drugs. Then in 2004, also in Science, Mr. Wager reported using functional magnetic resonance imaging, or fMRI, to show that a placebo that relieved pain also decreased activity in the brain's pain-sensing areas. Different people felt varying amounts of pain relief from the placebo. The amount of pain reduction a volunteer experienced went hand in hand with the amount of change in activity in the brain. "Part of the effect of a drug," Mr. Wager said at the conference, "is it changes the way you think about drugs." Jon-Kar Zubieta, an associate professor of psychiatry and radiology at the University of Michigan at Ann Arbor, and several colleagues, including Dr. Stohler of the University of Maryland, peered deeper into the brain's workings by finding out where the brain produces opioids in response to placebo treatment. They used PET scans along with a stain that marks opioid activity in the brain. When the researchers gave male volunteers a painful injection of saline solution into their jaw muscles, the scans showed an increase of opioids in the brain. Most of the regions where the brain produced painkillers coincided with the ones that Mr. Wager identified as important. "Expectation releases substances, molecules, in your brain, that ultimately change your experience," says Dr. Stohler. "Our brain is on drugs. It's on our own drugs." The placebo effect helps not only people in pain but also patients with diseases. In fact, scientists got their most detailed look at the placebo effect by studying how single neurons responded to sham drugs given to Parkinson's patients. Parkinson's disease is a motor disorder caused by loss of brain cells that produce dopamine. Some patients experience temporary relief of symptoms from a placebo, and a previous study showed that the relief occurred because the brain produced dopamine in response. Patients who have Parkinson's disease sometimes receive surgery to implant electrodes deep within the brain. The electrodes can stimulate a neuron or record its activity. Dr. Benedetti, of the University of Turin, and his colleagues enrolled 11 patients who underwent surgery for this type of treatment. They gave the patients a placebo injection, telling them it was a powerful drug that should improve their motor control. The researchers then compared the activity of a single neuron before and after injection of the placebo. In the six patients who responded to the placebo -- who demonstrated less arm rigidity and said they felt better -- the rate of firing of the neuron went down. (Nerve cells "fire," or generate electrical impulses, in order to send signals to neighboring neurons.) The neurons' firing rate did not change for people who experienced no placebo effect. Another disorder that shows clinical improvement with placebos is depression. Depressed patients' moods often lift when they take a placebo, although the effect does not last, and they normally need to seek real treatment, according to Helen S. Mayberg, a professor of neurology and of psychiatry and behavioral sciences at Emory University. Dr. Mayberg became immersed in placebo research a few years ago, when she did a PET study of the brain's response to an antidepressant and to a placebo. In her study of 15 depressed men, four who had taken Prozac and four who had received a placebo experienced a remission of their symptoms. At the end of six weeks, after allowing the drug sufficient time to take effect, Dr. Mayberg took PET scans. 
For patients whose symptoms improved, the regions where the brain activity increased after a patient took a placebo formed a subset of the regions that increased after a patient took the true drug. "Drug is placebo plus," she said at the conference. In patients whose symptoms did not improve, whether they were on Prozac or on the placebo, the brain activity did not increase in those regions. She had published the results of that study in 2002, but at the conference she reported a new analysis of her data. In the study, she had also collected brain scans one week after patients had begun receiving their treatments, even though the drug hadn't yet taken its full effect. Still, people whose symptoms later improved, whether they took the placebo or Prozac, again had increased brain activity in similar areas. One week into treatment, she says, the men's state of mind could be interpreted as a "heightened state of expectation" since they were anticipating clinical improvements. Nonresponders did not show those patterns, so such expectation could be key to whether a depressed patient will recover. Raising Expectations Dr. Mayberg would like to find ways to help those who do not respond to antidepressant drugs, and she surmises that expectation could make the difference. Such patients, she says, perhaps should imagine themselves getting well. "What is expectation?" she asks. "How do you cultivate it?" Those are questions that all of the scientists involved in this research would like to answer. Patients with chronic pain, says Dr. Zubieta of Michigan, perhaps have lost the ability to produce the brain's natural painkillers. "If you are able to recruit mechanisms that help you cope with stress or pain, that's a good thing," he says. "The question is, how do things like this, or meditation, or biofeedback, work? We don't know." Dr. Stohler of Maryland agrees: "Getting a person to boost their own machinery to improve health -- that's something that medicine needs to know." It may be especially urgent for patients with dementia, according to Dr. Benedetti. At the conference, he reported preliminary results that patients with Alzheimer's disease may not experience placebo effects at all. He found that Alzheimer's patients felt no difference between overt and hidden administration of painkillers. To Dr. Benedetti, that suggests that the psychological components of treatments -- the expectation of health improvements, and the circuits that such expectations create in the brain -- are absent. Perhaps, he said at the conference, doctors need to take that loss into account when prescribing any drug for Alzheimer's patients. Those patients may need higher doses of many drugs, such as painkillers, if their brain has stopped aiding the drug's action. The mind, it seems, may play a critical role in treating diseases. And its services come free of charge, with no co-payment or deductible. From checker at panix.com Wed Dec 7 01:36:08 2005 From: checker at panix.com (Premise Checker) Date: Tue, 6 Dec 2005 20:36:08 -0500 (EST) Subject: [Paleopsych] NYT: Mapmakers and Mythmakers Message-ID: Mapmakers and Mythmakers http://www.nytimes.com/2005/12/01/business/01maps.html [John Ralston Saul, in his great book Voltaire's Bastards, showed that far from the Enlightenment dream of knowledge for all, knowledge is held secretly, as something to be traded. Lots of big-wheel bureaucrats play it "close to the chest" and are secretive, even when it is patently unnecessary.
So being a Voltaire bastard is far from rare outside the Soviet Union and, as the story shows, continuing in Russia. [The Moscow police back in the bad old days kept CIA maps of their city on their walls, since those publicly available were nearly useless. This was an open secret: at least I heard about it. I also heard that the Soviets could not feed their army, but the Cold Warriors would not report this fact, nor would even the New York Times, which was a critic of a large part of the Cold War. It's amazing what doesn't get reported here, but anyone can now turn to foreign sources on the Web. Not so many as to matter vote-wise, though. And these foreign sources have their own biases. [Pilate's question remains. And he raised it two thousand years before postmodernism! [A good article!] By ANDREW E. KRAMER MOSCOW, Nov. 30 - Bruce Morrow worked for three years on the shores of Lake Samotlor, a tiny dot of water in a maze of oil wells and roads covering more than a thousand square miles of icy tundra in Siberia. From the maps the Russians gave Mr. Morrow, he could never really know where he was, a misery for him as an oil engineer at a joint venture between BP and Russian investors. The latitude and longitude had been blotted out from his maps and the grid diverged from true north. "It was like a game," Mr. Morrow said of trying to make sense of the officially doctored maps, holdovers from the cold war era provided by secretive men who worked in a special department of his company. Unofficially, anyone with Internet access can take a good look at the Samotlor field by zooming down through free satellite-imaging programs like Google Earth, to the coordinates 61 degrees 7 minutes north latitude and 76 degrees 45 minutes east longitude. Mr. Morrow's plight illustrates how some practices that once governed large regions of the former Soviet Union may still lurk in the hallways where bureaucrats from the Communist past cling to power. Not only do they carry over a history of secrecy, but they also serve to continue a tradition of keeping foreigners at bay while employing plenty of people made dependent on Moscow. The misleading maps also reflect the Kremlin's tightening grip on Russian oil, one of the world's critical supplies, and one that is to become even more important in the future with plans for direct shipments to the United States by 2010 from ports in the Far East and the Arctic. The secrecy rule over maps is enforced by the Federal Security Service, or F.S.B., a successor to the old K.G.B. It was written at a time the Russians were suspicious of virtually all foreign businesses and fearful of a missile strike on their Siberian wells. Those days are gone. But as the Russian government reasserts its control over strategic industries - particularly oil - it is not letting up on the rule. The doctored maps belong to a deep-rooted Russian tradition of deceiving outsiders, going back to the days of Potemkin villages in the 18th century and perhaps earlier. During the cold war it was called maskirovka, Soviet military parlance for deception, disinformation and deceit. For decades, government bureaucrats created false statistics and misleading place names. For instance, Baikonur, the Russian space center, was named for a village hundreds of miles away. Accurate maps of old Moscow's warren of back alleys appeared only after the breakup of the Soviet Union. Even now, Mr. Morrow and his colleagues can use only Russian digital map files that encrypt and hide the coordinates of his location. 
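[As a toy model of what those licensed cartographers do, think of an affine transform: skew the grid a few degrees off true north and shift the origin by a secret offset, then publish only transformed coordinates. Anyone holding the key parameters can invert the transform exactly; anyone else cannot tie the map back to latitude and longitude. A sketch in Python; the angle, the offsets and everything else are invented for illustration and bear no relation to the actual F.S.B. scheme.]

import math

# Toy "doctored map" transform: rotate the grid off true north and add a
# secret origin shift. The angle and offsets are the secret key; every
# value here is invented.
ANGLE = math.radians(4.0)        # grid skewed 4 degrees from true north
DX, DY = 137.25, -42.8           # secret origin shift

def obfuscate(x, y):
    return (x * math.cos(ANGLE) - y * math.sin(ANGLE) + DX,
            x * math.sin(ANGLE) + y * math.cos(ANGLE) + DY)

def deobfuscate(xr, yr):
    x, y = xr - DX, yr - DY
    return (x * math.cos(ANGLE) + y * math.sin(ANGLE),
            -x * math.sin(ANGLE) + y * math.cos(ANGLE))

# Lake Samotlor in decimal degrees: 61 deg 7 min N, 76 deg 45 min E.
lat, lon = 61 + 7 / 60, 76 + 45 / 60       # 61.1167 N, 76.75 E
x2, y2 = deobfuscate(*obfuscate(lat, lon))
print(round(x2, 4), round(y2, 4))          # 61.1167 76.75, recovered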
Officially, only Russians with security clearances are permitted to see oil field maps with real coordinates at scales greater than 1:2,500. "It was totally futile," Mr. Morrow said of the false coordinates on his F.S.B. maps, created through an encrypting system. "None of us was particularly keen on pushing it. There were rumors if you do that, you end up in the slammer." A spokeswoman for the F.S.B. confirmed that it controls maps around sites deemed important for national security, including oil fields. Asked whether the easy availability of accurate maps on the Internet made such continued secrecy obsolete, she said the agency was interested only in national security and would not elaborate on its practices. Foreign business executives, though, say there is a secret behind the secret maps, and it has little to do with national security. The rules are not only a way to maintain control over a strategic industry, but also form a subtle trade barrier and are a convenient way to increase Russian employment. After all, TNK-BP, the 50-50 joint venture where Mr. Morrow works, pays scores of cartographers to encode and decode the maps, said Frank Rieber, a former engineer there. The rules cover all oil companies, but are particularly pressing for TNK-BP. They provide a livelihood to hundreds of F.S.B.-licensed cartographers. Oil companies either outsource the work of stripping and restoring coordinates to independent institutes, or employ Russians with security clearances to do the work, as TNK-BP does. The map orientations are shifted from true north - the top of the map could be pointing slightly east, for example - and the grid does not correspond to larger maps. "It makes us pull our hair out," Mr. Rieber said. Yevgenia M. Albats, author of a 1994 book on the K.G.B., "The State Within a State," said the spy agency's interest in oil field mapping is just another way of asserting its influence on society and business here, though one increasingly made obsolete by the Internet. "The F.S.B. knows about Google Earth as well as anybody," she said. "This doesn't have anything to do with national security. It's about control of the cash flow." The agency is guarding the wells as much from foreign business executives as from foreign missiles these days, she said. The laws about oil field secrets are used to persuade TNK-BP to replace foreign managers with Russians, more susceptible to pressure from the authorities, Ms. Albats said. "Russians are easier to manipulate," she continued. "They don't want to end up in Khodorkovsky's shoes," she said, referring to the former chief executive of the Yukos oil company, Mikhail B. Khodorkovsky, now in a Siberian penal colony serving an eight-year sentence. He was convicted of fraud and tax evasion after falling out with the Kremlin over taxes, oil-export routes and politics. The F.S.B. has also pursued scientists who cooperate with foreign companies in other industries. Last winter it charged a physicist, Oskar A. Kaibyshev, with exporting dual-use metal alloy technology to a South Korean company. Mr. Kaibyshev objected in vain that the technology had already been discussed in foreign journals. The case is pending. On Oct. 26, F.S.B. agents arrested three scientists at a Moscow aerospace company and accused them of passing secrets to the Chinese. Another physicist, Valentin V.
Danilov, was convicted of selling technology for manned space flights to the same Chinese company last year, though he also protested that the information was available from published sources. At the same time, the Kremlin is using oil to recapture status lost with the collapse of the Soviet Union, which explains the close attention paid to the industry by the security services. Foreign Minister Sergey V. Lavrov told a Parliament committee in October that energy exports were Russia's most powerful diplomatic tool in relations with other nations, according to a report in the newspaper Nezavisimaya Gazeta. BP bought into the Tyumen oil company, or TNK, in 2003. Friction over the use of oil field maps existed from early on, geologists at the company said, but intensified this year. The issue has risen to high levels in the government, with a faction that embraces foreign investment protesting that the F.S.B. is hobbling the work of Western engineers who come to help this country drill for oil, providing technology and expertise in the process. In October, Andrei V. Sharonov, a deputy economic and trade minister, said F.S.B. pressure on the oil venture over the classification of maps had disrupted production in western Siberia, an article in Vedomosti reported. It quoted Mr. Sharonov as saying that the agency was pressing TNK-BP to replace Western managers with Russians. A spokeswoman for Mr. Sharonov declined to comment. An F.S.B. spokeswoman denied any ulterior motives in policing oil field maps. Engineers call the practice a nuisance, but say it has not disrupted production. The licensed cartographers are skilled in accurately translating between real and false coordinates, and so far, they do not know of any major mistakes, they say. In a telephone interview from his home in Santa Barbara, Calif., Mr. Morrow, who worked as an engineer for TNK-BP from 2002 until May, said he left partly because he became frustrated with the police controls. He guided a reporter to Lake Samotlor on Google Earth. The lake lies just north of Nizhnevartovsk, a city on the Ob River, as it loops in silvery ribbons through a background of dark green Siberian wilderness. In the middle of the lake is an island, like a bull's eye. "That was the folly of it," Mr. Morrow said. "You could get this information anywhere. The bureaucracy got in the way of common sense. But that didn't make it any less illegal, or any less inconvenient." From shovland at mindspring.com Wed Dec 7 05:07:12 2005 From: shovland at mindspring.com (Steve Hovland) Date: Tue, 6 Dec 2005 21:07:12 -0800 Subject: [Paleopsych] Francis Crick and panspermia Message-ID: Life on a Meteor Ride [Image caption: Artist's depiction of the Chicxulub impact crater. The total number of objects a kilometer in diameter or larger, a size that could cause global catastrophe upon Earth impact, is now estimated to range between 900 and 1,230. Credit: NASA] The British molecular biologist Francis Harry Crick died on Wednesday at the age of 88. Crick changed our understanding of life when, in 1953, he and James Watson announced that DNA came packaged in an elegant double helix structure. Crick reportedly claimed they had found 'the secret of life,' and many scientists agree. The double-helix structure explained how genetic material replicated through nitrogenous base pair bonds. Some see this as the most important development in biology in the 20th century, and Watson and Crick were awarded the Nobel Prize in Medicine for their discovery in 1962.
Crick was not content to rest on his laurels after winning one of the top prizes in science, however. He continued to study the mysteries of life, such as the nature of consciousness, or the possibility that RNA preceded the development of DNA. In 1973, he and the chemist Leslie Orgel published a paper in the journal Icarus suggesting that life may have arrived on Earth through a process called 'Directed Panspermia.' The Panspermia hypothesis suggests that the seeds of life are common in the universe and can be spread between worlds. This idea originated with the Greek philosopher Anaxagoras, and was later promoted by the Swedish physicist Svante Arrhenius and the British astronomer Fred Hoyle. Versions of this hypothesis have survived to the present day, with the discovery of proposed 'fossil structures' in the martian meteorite ALH84001. In a related project conducted by members of NASA's Astrobiology Institute, scientists have created primitive organic cell-like structures. They did it in their laboratory by duplicating the harsh conditions of cold interstellar space! Did comets carry such protocells to Earth? 'Directed Panspermia' suggests that life may be distributed by an advanced extraterrestrial civilization. Crick and Orgel argued that DNA encapsulated within small grains could be fired in all directions by such a civilization in order to spread life within the universe. Their abstract in the 1973 Icarus paper reads: "It now seems unlikely that extraterrestrial living organisms could have reached the earth either as spores driven by the radiation pressure from another star or as living organisms imbedded in a meteorite. As an alternative to these nineteenth-century mechanisms, we have considered Directed Panspermia, the theory that organisms were deliberately transmitted to the earth by intelligent beings on another planet. We conclude that it is possible that life reached the earth in this way, but that the scientific evidence is inadequate at the present time to say anything about the probability. We draw attention to the kinds of evidence that might throw additional light on the topic." [Image caption: The Miller-Urey experiment generated electric sparks -- meant to model lightning -- in a mixture of gases thought to resemble Earth's early atmosphere. Credit: AccessExcellence.org] Crick and Orgel further expanded on this idea in their 1981 book, 'Life Itself.' They believed there was little chance that microorganisms could be transported between planets and across interstellar distances by random accident. But a technological civilization could direct panspermia by stocking a spacecraft with a genetic starter kit. They suggested that a large sample of different microorganisms with minimal nutritional needs could survive the long journey between worlds. Many scientists are critical of the Panspermia hypothesis, because it does not try to answer the question of how life first originated. Instead, it passes the responsibility on to another place and another time, offering at best a partial solution to the question. Crick and Orgel suggested that Directed Panspermia might help resolve some mysteries about life's biochemistry. For instance, it could be the reason why the biological systems of Earth are dependent on molybdenum, when the chemically similar metals chromium and nickel are far more abundant. They suggested that the seeds for life on Earth could have originated from a location far richer in molybdenum.
Other scientists have noted, however, that in seawater molybdenum is more abundant than either chromium or nickel. Coming full circle to his groundbreaking discovery of DNA's structure, Crick wondered why, if life began in the great "primeval soup" suggested by the Miller/Urey experiment, there wouldn't be a multitude of genetic materials among the different life forms. Instead, all life on Earth shares the same basic DNA structure. Crick and Orgel wrote in their book 'Life Itself,' "an honest man, armed with all the knowledge available to us now, could only state that in some sense, the origin of life appears at the moment to be almost a miracle, so many are the conditions which would have had to have been satisfied to get it going." From checker at panix.com Thu Dec 8 02:20:50 2005 From: checker at panix.com (Premise Checker) Date: Wed, 7 Dec 2005 21:20:50 -0500 (EST) Subject: [Paleopsych] Newsweek: Fighting Anorexia: No One to Blame Message-ID: Fighting Anorexia: No One to Blame http://www.msnbc.msn.com/id/10219756/site/newsweek/ [Interview and an article on "pro-ana" groups appended.] It's fascinating how the causes of and blame for this disease get moved around, more so than with most other events and processes that have multiple causes. For me, anorexia is the best current example of a "socially constructed" disease. I do not deny that it is also a medical condition, but we must not think that the brain cannot play an active role. Well, we know that. What right-wingers do not want to admit is that the verbiage we take in shapes these diseases. Their medical model is germs to disease, or bottom-up causation. The wilder pomos say it's strictly mind to disease or, more strictly, verbiage to disease. [But it's plain that anorexia did not exist, in anything remotely like its current prevalence, until a few decades ago. To blame it on girls aping fashion models is a verbiage account, but that's English major metaphorism, and the author of the main article knows that. She does not hide the huge hereditary component, but this only means that, *at the present*, the heritability is high. The larger historical problem is why the sudden increase. ["Socially constructed" connotes English major metaphorism in the minds of right wingers, but it is almost certain, in the case of anorexia, that changes in *society* have far outweighed genetic changes or environmental changes like contaminants. "Constructed," though, implies a constructor or a Social Planner. I'd like a better term. [Multiple personality disorder is an earlier example of a socially constructed disease. Its entire existence spanned a few decades in the last century. Marianne Noble's _The Masochistic Pleasures of Sentimental Literature_ convinced me that masochism was socially constructed. She's not merely an English major but an English Professor (American U.)! I met her at a party for Sarah's choir and got the book. It was one of the first pomo books I had read and found it rough going, though today I've picked up enough of the jargon to sail through it much more quickly. [I've decided to alter the meme I'm preparing on what it would take for me to abandon my three most cherished hypotheses. The first two, non-creation and co-evolution, will remain, but I'm going to expand the third from the inability to precisely nail down our basic concepts to postmodernism, which includes that. It will be hard enough for me to describe what *I* mean by that term, and harder still to specify what it would take for me to abandon it.
All three, as I work out my thoughts, are part of the broad movement from Western (mechanistic) to Darwinian (stochastic) Civilization. [As you wait impatiently for my meme, tell me what it would take for you to abandon your three most cherished hypotheses.] -------------------- The age of their youngest patients has slipped to 9 years old, and doctors have begun to research the roots of this disease. Anorexia is probably hard-wired, the new thinking goes, and the best treatment is a family affair. By Peg Tyre Newsweek Dec. 5, 2005 issue - Emily Krudys can pinpoint the moment her life fell apart. It was a fall afternoon in the Virginia suburbs, and she was watching her daughter Katherine perform in the school play. Katherine had always been a happy girl, a slim beauty with a megawatt smile, but recently, her mother noticed, she'd been losing weight. "She's battling a virus," Emily kept on telling herself, but there, in the darkened auditorium, she could no longer deny the truth. Under the floodlights, Katherine looked frail, hollow-eyed and gaunt. At that moment, Emily had to admit to herself that her daughter had a serious eating disorder. Katherine was 10 years old. Who could help their daughter get better? It was a question Emily and her husband, Mark, would ask themselves repeatedly over the next five weeks, growing increasingly frantic as Katherine's weight slid from 48 to 45 pounds. In the weeks after the school play, Katherine put herself on a brutal starvation diet, and no one - not the school psychologist, the private therapist, the family pediatrician or the high-powered internist - could stop her. Emily and Mark tried everything. They were firm. Then they begged their daughter to eat. Then they bribed her. We'll buy you a pony, they told her. But nothing worked. At dinnertime, Katherine ate portions that could be measured in tablespoons. "When I demanded that she eat some food - any food - she'd just shut down," Emily recalls. By Christmas, the girl was so weak she could barely leave the couch. A few days after New Year's, Emily bundled her eldest child into the car and rushed her to the emergency room, where she was immediately put on an IV. Home again the following week, Katherine resumed her death march. It took one more hospitalization for the Krudyses to finally make the decision they now believe saved their daughter's life. Last February, they enrolled her in a residential clinic halfway across the country in Omaha, Neb. - one of the few facilities nationwide that specialize in young children with eating disorders. Emily still blames herself for not acting sooner. "It was right in front of me," she says, "but I just didn't realize that children could get an eating disorder this young." Most parents would forgive Emily Krudys for not believing her own eyes. Anorexia nervosa, a mental illness defined by an obsession with food and acute anxiety over gaining weight, has long been thought to strike teens and young women on the verge of growing up - not kids performing in the fourth-grade production of "The Pig's Picnic." But recently researchers, clinicians and mental-health specialists say they're seeing the age of their youngest anorexia patients decline to 9 from 13. Administrators at Arizona's Remuda Ranch, a residential treatment program for anorexics, received so many calls from parents of young children that last year, they launched a program for kids 13 years old and under; so far, they've treated 69 of them.
Six months ago the eating-disorder program at Penn State began to treat the youngest ones, too - 20 of them so far, some as young as 8. Elementary schools in Boston, Manhattan and Los Angeles are holding seminars for parents to help them identify eating disorders in their kids, and the parents, who have watched Mary-Kate Olsen morph from a child star into a rail-thin young woman, are all too ready to listen. At a National Institute of Mental Health conference last spring, anorexia's youngest victims were a small part of the official agenda - but they were the only thing anyone talked about in the hallways, says David S. Rosen, a clinical faculty member at the University of Michigan and an eating-disorder specialist. Seven years ago "the idea of seeing a 9- or 10-year-old anorexic would have been shocking and prompted frantic calls to my colleagues. Now we're seeing kids this age all the time," Rosen says. There's no single explanation for the declining age of onset, although greater awareness on the part of parents certainly plays a role. Whatever the reason, these littlest patients, combined with new scientific research on the causes of anorexia, are pushing the clinical community - and families, and victims - to come up with new ways of thinking about and treating this devastating disease. Not many years ago, the conventional wisdom held that adolescent girls "got" anorexia from the culture they lived in. Intense young women, mostly from white, wealthy families, were overwhelmed by pressure to be perfect from their suffocating parents, their demanding schools, their exacting coaches. And so they chose extreme dieting as a way to control their lives, to act out their frustration at never being perfect enough. In the past decade, though, psychiatrists have begun to see surprising diversity among their anorexic patients. Not only are anorexia's victims younger, they're also more likely to be black, Hispanic or Asian, more likely to be boys, more likely to be middle-aged. All of which caused doctors to question their core assumption: if anorexia isn't a disease of type-A girls from privileged backgrounds, then what is it? Although no one can yet say for certain, new science is offering tantalizing clues. Doctors now compare anorexia to alcoholism and depression, potentially fatal diseases that may be set off by environmental factors such as stress or trauma, but have their roots in a complex combination of genes and brain chemistry. In other words, many kids are affected by pressure-cooker school environments and a culture of thinness promoted by magazines and music videos, but most of them don't secretly scrape their dinner into the garbage. The environment "pulls the trigger," says Cynthia Bulik, director of the eating-disorder program at the University of North Carolina at Chapel Hill. But it's a child's latent vulnerabilities that "load the gun." Parents do play a role, but most often it's a genetic one. In the last 10 years, studies of anorexics have shown that the disease often runs in families. In a 2000 study published in The American Journal of Psychiatry, researchers at Virginia Commonwealth University studied 2,163 female twins and found that 77 of them suffered from symptoms of anorexia. By comparing the number of identical twins who had anorexia with the significantly smaller number of fraternal twins who had it, scientists concluded that more than 50 percent of the risk for developing the disorder could be attributed to an individual's genetic makeup.
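[The twin logic here is worth spelling out. Under the classical twin model, heritability can be roughly estimated with Falconer's formula, h^2 = 2 x (rMZ - rDZ): twice the gap between identical-twin and fraternal-twin resemblance. A minimal Python sketch - the two input figures below are invented placeholders, not the numbers from the Virginia Commonwealth study, which used a more elaborate model:

    def falconer_h2(r_mz, r_dz):
        # Falconer's formula: heritability is roughly twice the difference
        # between identical (MZ) and fraternal (DZ) twin correlations.
        return 2.0 * (r_mz - r_dz)

    # Hypothetical twin correlations, for illustration only:
    print(falconer_h2(r_mz=0.55, r_dz=0.28))  # -> 0.54, i.e. ~54% of risk

A gap of this size between the two kinds of twins is what licenses statements like "more than 50 percent of the risk... could be attributed to an individual's genetic makeup."]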
A few small studies have even isolated a specific area on the human genome where some of the mutations that may influence anorexia exist, and now a five-year, $10 million NIMH study is underway to further pinpoint the locations of those genes. Amy Nelson, 14, a ninth grader from a Chicago suburb, thinks that genes played a role in her disease. Last year Amy's weight dropped from 105 to a skeletal 77 pounds, and her parents enrolled her in the day program at the Alexian Brothers Behavioral Health Hospital outside Chicago. Over the summer, as Amy was getting better, her father found the diary of his younger sister, who died at 18 of "unknown causes." In it, the teenager had calculated that she could lose 13 pounds in less than a month by restricting herself to less than 600 calories a day. No salt, no butter, no sugar, "not too many bananas," she wrote in 1980. "Depression can run in families," says Amy, "and an eating disorder is like depression. It's something wrong with your brain." These days, Amy is healthier and, though she doesn't weigh herself, thinks she's around 100. She has a part in the school play and is more casual about what she eats, even to the point of enjoying ice cream with friends. Scientists are tracking important differences in the brain chemistry of anorexics. Using brain scans, researchers at the University of Pittsburgh, led by professor of psychiatry Dr. Walter Kaye, discovered that the level of serotonin activity in the brains of anorexics is abnormally high. Although normal levels of serotonin are believed to be associated with feelings of well-being, these pumped-up levels of hormones may be linked to feelings of anxiety and obsessional thinking, classic traits of anorexia. Kaye hypothesizes that anorexics use starvation as a mode of self-medication. How? Starvation prevents tryptophan, an essential amino acid that produces serotonin, from getting into the brain. By eating less, anorexics reduce the serotonin activity in their brains, says Kaye, "creating a sense of calm," even as they are about to die of malnutrition. Almost everyone knows someone who has trouble with food: extremely picky eating, obsessive dieting, body-image problems, even voluntary vomiting are well known. But in the spectrum of eating disorders, anorexia, which affects about 2.5 million Americans, stands apart. For one thing, anorexics are often delusional. They can be weak with hunger while they describe physical sensations of overfullness that make it physically uncomfortable for them to swallow. They hear admonishing voices in their heads when they do manage to choke down a few morsels. They exercise compulsively, and even when they can count their ribs, their image in the mirror tells them to lose more. When 12-year-old Erin Phillips, who lives outside Baltimore, was in her downward spiral, she stopped eating butter, then started eating with chopsticks, then refused solid food altogether, says her mother, Joann. Within two months, Erin's weight had slipped from 70 to 50 pounds. "Every day, I'd watch her melt away," Joann says. Before it struck her daughter, Joann had been dismissive about the disease. "I used to think the person should just eat something and get over it. But when you see it up close, you can't believe your eyes. They just can't." (Her confusion is natural: the term anorexia comes from a Greek word meaning "loss of appetite.") Anorexia is a killer - it has the highest mortality rate of any mental illness, including depression. About half of anorexics get better.
About 10 percent of them die. The rest remain chronically ill - exhausting, then bankrupting, their parents, retreating from jobs and school, alienating friends as they struggle to manage the symptoms of their condition. Hannah Hartney of Tulsa, Okla., was first hospitalized with anorexia when she was 10. After eight weeks, she was returned to her watchful parents. For the last few years, she was able to maintain a normal weight but now, at 16, she's been battling her old demons again. "She's not out of the woods," says her mother, Kathryn. While adults can drift along in a state of semi-starvation for years, the health risks for children under the age of 13 are dire. In their preteen years, kids should be gaining weight. During that critical period, their bones are thickening and lengthening, their hearts are getting stronger in order to pump blood to their growing bodies and their brains are adding mass, laying down new neurological pathways and pruning others - part of the explosion of mental and emotional development that occurs in those years. When children with eating disorders stop consuming sufficient calories, their bodies begin to conserve energy: heart function slows, blood pressure drops; they have trouble staying warm. Whatever estrogen or testosterone they have in their bodies drops. The stress hormone cortisol becomes elevated, preventing their bones from hardening. Their hair becomes brittle and falls out in patches. Their bodies begin to consume muscle tissue. The brain, which depends at least in part on dietary fat to grow, begins to atrophy. Unlike adult anorexics, children with eating disorders can develop these debilitating symptoms within months. Lori Cornwell says her son's descent was horrifyingly fast. In the summer of 2004, 9-year-old Matthew Cornwell of Quincy, Ill., weighed a healthy 49 pounds. Always a picky eater, he began restricting his food intake until all he would eat was a carrot smeared with a tablespoon of peanut butter. Within three months, he was down to 39 pounds. When the Cornwells and their doctor finally located a clinic that would accept a 10-year-old boy, Lori tucked his limp body under blankets in the back seat of her car and drove all night across the country. Matthew was barely conscious when he arrived at the Children's Hospital in Omaha. "I knew that I had to get there before he slipped away," she says. With stakes this high, how do you treat a malnourished third grader who is so ill she insists five Cheerios make a meal? First, say a growing number of doctors and patients, you have to let parents back into the treatment process. For more than a hundred years, parents have been regarded as an anorexic's biggest problem, and in 1978, in her book "The Golden Cage," psychoanalyst Hilde Bruch suggested that narcissistic, cold and unloving parents (or, alternatively, hypercritical, overambitious and overinvolved ones) actually caused the disease by discouraging their children's natural maturation to adulthood. Thirty years ago standard treatment involved helping the starving and often delusional adolescents or young women to separate psychologically - and sometimes physically - from their toxic parents. "We used to talk about performing a parental-ectomy," says Dr. Ellen Rome, head of adolescent medicine at the Cleveland Clinic. Too often these days, parents aren't so much banished from the treatment process as sidelined, watching powerlessly as doctors take what can be extreme measures to make their children well.
In hospitals, severely malnourished anorexics are treated with IV drips and nasogastric tubes. In long-term residential treatment centers, an anorexic's food intake is weighed and measured, bite by bite. In individual therapy, an anorexic tries to uncover the roots of her obsession and her resistance to treatment. Most doctors use a combination of these approaches to help their patients get better. Although parents are no longer overtly blamed for their child's condition, says Marlene Schwartz, codirector of the Yale eating-disorder clinic, doctors and therapists "give parents the impression that eating disorders are something the parents did that the doctors are now going to fix." Worse, the state-of-the-art protocols don't work for many young children. A prolonged stay in a hospital or treatment center can be traumatic. Talk therapy can help some kids, but many others are too young for it to be effective. Back at home, family mealtimes become a nightmare. Parents, advised not to badger their child about food, say nothing - and then they watch, helpless and heartbroken, as their child pushes the food away. In the last three years, some prominent hospitals and clinics around the country have begun adopting a new treatment model in which families help anorexics get better. The most popular of the home-based models, the Maudsley approach, was developed in the 1980s at the Maudsley Hospital in London. Two doctors there noticed that when severely malnourished, treatment-resistant anorexics were put in the hospital and fed by nurses, they gradually gained weight and began to participate in their own recovery. They decided that given the right support, family members could get anorexics to eat in the same way the nurses did. These days, family-centered therapy works like this: A team of doctors, therapists and nutritionists meets with parents and the child. The team explains that while the causes of anorexia are unclear, it is a severe, life-threatening disease like cancer or diabetes. Food, the family is told, is the medicine that will help the child get better. Like oncologists prescribing chemotherapy, the team provides parents with a schedule of calories, lipids, carbohydrates and fiber that the patient must eat every day and instructs them on how to monitor the child's intake. It coaches siblings and other family members on how to become a sympathetic support team. After a few practice meals in the hospital or doctor's office, the whole family is sent home for a meal. "I told my daughter, 'You're going to hate this'," says Mitzi Miles, whose daughter Kaleigh began struggling with anorexia at 10. "She said, 'I could never hate you, Mom.' And I said, 'We'll see'." The first dinner at the Miles home outside Harrisburg, Pa., was a battle - but Mitzi, convinced by Kaleigh's doctor she was doing the right thing, didn't back down. After 45 minutes of yelling and crying, Kaleigh began to eat. Over the next 20 weeks, Kaleigh attended weekly therapy sessions, and Mitzi got support from the medical team, which instructed her to allow Kaleigh to make more food choices on her own. Eleven months later, Kaleigh is able to maintain a normal weight. Mitzi no longer measures out food portions or keeps a written log of her daughter's daily food intake. Critics point out that the Maudsley approach won't work well for adults who won't submit to other people's making their food choices. And they charge that in some children, parental oversight can do more harm than good.
Young anorexics and their parents are already locked in a battle for control, says Dr. Alexander Lucas, an eating-disorder specialist and professor emeritus at the Mayo Clinic in Minnesota. The Maudsley approach, he says, "may backfire" by making meals into a battleground. "The focus on weight gain," he says, "has to be between the physician and the child." Even proponents say that family-centered treatment isn't right for everyone: families where there is violence, sexual abuse, alcoholism or drug addiction aren't good candidates. But several studies both in clinics at the Maudsley Hospital and at the University of Chicago show promising results: five years after treatment, more than 70 percent of patients recover using the family-centered method, compared with 50 percent who recover by themselves or using the old approaches. Currently, a large-scale NIH study of the Maudsley approach is underway. Mental-health specialists say the success of the family-centered approach is finally putting the old stigmas to rest. "An 8-year-old with anorexia isn't in a flight from maturity," says Dr. Julie O'Toole, medical director of the Kartini Clinic in Portland, Ore., a family-friendly eating-disorder clinic. "These young patients are fully in childhood." Most young anorexics, O'Toole says, have wonderful, thoughtful, terribly worried parents. These days, when a desperately sick child enters the Kartini Clinic, O'Toole tries to set parents straight. "I tell them it's a brain disorder. Children don't choose to have it and parents don't cause it." Then she gives the parents a little pep talk. She reminds them that mothers were once blamed for causing schizophrenia and autism until that so-called science was debunked. And that the same will soon be true for anorexia. At the conclusion of O'Toole's speech, she says, parents often weep. Ironically, family dinners are one of the best ways to prevent a vulnerable child from becoming anorexic. Too often, dinner is eaten in the back seat of an SUV on the way to soccer practice. Parents who eat regular, balanced meals with their children model good eating practices. Family dinners also help parents spot any changes in their child's eating habits. Dieting, says Dr. Craig Johnson, director of the eating-disorder program at Laureate Psychiatric Hospital in Tulsa, triggers complex neurobiological reactions. If you have anorexia in the family and your 11-year-old tells you she's about to go on a diet and is thinking about joining the track team, says Johnson, "you want to be very careful about how you approach her request." For some kids, innocent-seeming behavior carries enormous risks. Children predisposed to eating disorders are uniquely sensitive to media messages about dieting and health. And their interpretation can be starkly literal. When Ignatius Lau of Portland, Ore., was 11 years old, he decided that 140 pounds was too much for his 5-foot-2 frame. He had heard that oils and carbohydrates were fattening, so he became obsessed with food labels, cutting out all fats and almost all carbs. He lost 32 pounds in six months and ended up in a local hospital. "I told myself I was eating healthier," Ignatius says. He recovered, but for the next three years suffered frequent relapses. "I'd lose weight again and it would trigger some of my old behaviors, like reading food labels," he says. These days he knows what healthy feels like. Ignatius, now 17, is 5 feet 11, 180 pounds, and plays basketball. Back in Richmond, Va., Emily Krudys says her family has changed. 
For two months Katherine stayed at the Omaha Children's Hospital, and slowly gained weight. Emily stayed nearby - attending the weekly therapy sessions designed to help integrate her into Katherine's treatment. After Katherine returned home, Emily home-schooled her while she regained her strength. This fall, Katherine entered sixth grade. She's got the pony, and she's become an avid horsewoman, sometimes riding five or six times a week. She's still slight, but she's gaining weight normally by eating three meals and three or four snacks a day. But the anxiety still lingers. When Katherine says she's hungry, Emily has been known to drop everything and whip up a three-course meal. The other day she was startled to see her daughter spreading sour cream on her potato. "I thought, 'My God, that's how regular kids eat all the time'," she recalls. Then she realized that her daughter was well on the way to becoming one of those kids. With Karen Springen, Ellise Pierce, Joan Raymond and Dirk Johnson Live Talk Transcript: Fighting Anorexia - Newsweek Society - MSNBC.com http://www.msnbc.msn.com/id/10216848/site/newsweek/ NEWSWEEK general editor Peg Tyre joined us for a Live Talk on this week's anorexia cover story on Thursday, Dec. 1. Anorexia, which affects 2.5 million Americans, isn't simply an eating disorder - it's a mental illness with a higher mortality rate than even depression. Patients who starve and deny themselves essential nutrients can cause long-term damage to their bodies. The disease's youngest victims, who are getting younger and younger, are also its most vulnerable. NEWSWEEK's Peg Tyre reports that the face of anorexia is no longer just the "type-A girls from privileged backgrounds" who confront pressures from parents, schools or coaches. Instead, they are more likely to be minorities, boys or middle-aged. There's also a genetic link to this disease, much like alcoholism and depression. As for treatment, researchers are saying parents need to be part of the process, instead of being viewed as contributing to the disease. Tyre, a NEWSWEEK general editor, will answer your questions on anorexia during a Live Talk on Thursday, Dec. 1, at noon ET. Peg Tyre: Hi All, Peg Tyre here. I'm the author of No One To Blame - Newsweek's cover story on anorexia. I'll try and answer your questions in the next hour. P. _______________________ Brooklyn, NY: When an individual gets anorexia, is it a disease that just comes up all of a sudden or is it a disease that they have had for years but had not turned up until something triggers it? Peg Tyre: What I learned is that many people seem to have a latent vulnerability to the disease that is triggered by environmental factors. In terms of symptoms, many victims I talked to reported that it seemed to "come out of the blue." Others said it had been building for a long time. _______________________ Midland, GA: I was anorexic, in and out of hospitals and doctors' offices for numerous years. Though it was not easy, I have now learned how not to obsess about food and weight. Actually, I am now trying valiantly to gain a few pounds! I have an 18-month-old daughter. What I would like to know is if there are any behaviors that we as parents need to avoid in raising her as a healthy happy girl. And how I can start teaching her how to love herself and have a healthy body image. Peg Tyre: Congratulations! It sounds like you have done what many anorexics long to do - put it in their past! And congrats, too, on starting a family.
We all worry about our children and their eating. It's such a primal concern. But for you, it will be a bit more fraught. Anorexia, as you probably know, tends to run in families. So you're going to have to keep a sharp eye on her. But, and here's the hard part, you are going to have to find a way to be normal (at least in front of her) about food. If I were you, I'd find a good therapist who you can discuss this with - you'll have so many questions as your daughter grows and goes through different phases. Good luck! _______________________ Indialantic, FL: Hi, do you have any sense of how funding for this terrible disease compares to other disorders such as AIDS? Peg Tyre: I was astonished at how few research dollars are actually being spent on eating disorders. I think because of the heavy stigma that is placed on families, most families of anorexics tend to lay low and suffer in silence instead of coming out and trying to raise money. These families often think (and are sometimes told) it is something they caused! _______________________ Oklahoma City, OK: What percentage of teens are affected with anorexia? Peg Tyre: Good question. The answer is that there are no good numbers for eating disorders. There is no central reporting on eating disorders and very little follow up over time. That said, the rate of anorexia is and always has been low - less than 1%. For eating disorders in general, the rates are much higher. _______________________ St. George's, Grenada: Is it likely for a person suffering from anorexia to die? Peg Tyre: Anorexia can be a fatal disease for many people. Some studies say 10% of them die, some studies say 20%, some say 5% every decade. Mostly they die of suicide or starvation. _______________________ Honolulu, HI: Which treatment centers in the US use the Maudsley approach? Peg Tyre: There are very good programs at the Univ. of Chicago, at the Comprehensive Eating Disorder Program at the Lucile Packard Children's Hospital in Palo Alto, Calif., at Columbia University in NY and at Mount Sinai also, in NY. _______________________ Austin, TX: In covering this story, did you encounter any information about the insurance industry and its willingness to cover the expense of treatment for eating disorders? In my experience, which was years ago, there was almost no coverage. Just wondering if the new information about biological connections has helped with this. Peg Tyre: Many families shared their struggles with their insurance companies, which by and large don't recognize this and pay for treatment in the way they might. _______________________ Summerville, SC: What advice can you give to parents of an anorexic who is no longer a teenager and refuses to go to doctors' appointments or therapy? My daughter went through treatment at 14 and went into recovery in about 6 months. After a relatively healthy 3 years, she is struggling and dipping in and out of relapse. It is just so hard when she is making her own decisions now, and isn't open to my parental advice. Peg Tyre: I'm sorry. That sounds like a very difficult situation. My advice to you would be to get in touch with Cynthia Bulik, a professor at UNC in Chapel Hill, and ask her for advice. She is an ED specialist. _______________________ Charlotte, NC: How come it's nobody's fault if a kid is anorexic, but parents, society, and supersized sandwiches and biggie fries are responsible for childhood obesity? These are symptoms of the same thing, a whacked out relationship with food.
Obesity occurs in families, too, and starts before 10 years old. The people with "eating disorders" as described in this article are just the skinny victims. Clearly the implication is that there is blame to go around for fat. Peg Tyre: You raise some interesting points. I'm not sure, though, about connecting anorexia to obesity in this way. If you had a kid who ate without stopping until they died - who heard voices telling them to eat more - who refused to move so that they wouldn't burn a calorie - that might be the flip side of anorexia. Obesity is a different animal than what we are talking about with anorexia. _______________________ Columbia, PA: I eat a meal a day and have for years and always thought I may have anorexia, but I'm not hungry, that is why I eat one meal. Is this anorexia, and can it be involuntary? Peg Tyre: I think most anorexics would tell you that it is involuntary. It is not something they are doing. I don't know you or your medical history and I'm not a doctor. I also can't see you so I don't know if your bones are showing. But if you are worried about it, ask your physician. Describe your eating patterns. He or she should be able to tell you quickly enough. _______________________ Houston, TX: Did you find anyone investigating anorexia possibly being linked to PANDAS (pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections)? Some groups have been investigating sudden and dramatic onset. Peg Tyre: Glad you brought this up. This is a really interesting area of research that I simply didn't have space for. There are doctors who have made links between kids getting strep or a bacterial infection and then coming down with anorexia. They have also tied PANDAS, as it is called, to the onset of obsessive-compulsive disorder. It is really outside the box to suggest that a bacterial infection (or perhaps its treatment) may be causing these profound behavioral and neurological changes. But I think it is an exciting avenue of inquiry. It is about time doctors started to take a fresh look at it! _______________________ Pittsburgh, PA: I am in my 30s and suffered severe anorexia. I was treated at Remuda Ranch. Although this is an outstanding article, it is important to note that the family situations described by Hilde Bruch in The Golden Cage, e.g. controlling, narcissistic parents, ARE still relevant for some patients. In the opinion of my doctors and therapists, incl those at RR, my ED was caused in large part by my family situation. All of the points made in the article, e.g. genetic susceptibility, are valid. However, I would caution that in some patients a family based treatment approach, e.g. the Maudsley method, is not suitable. My father hit me with a belt when I would not eat. Clearly the method of parental control of meals that is described would have been completely inappropriate in my case, and undoubtedly in others also. Thank you for the good information in the article. Peg Tyre: I'm very sorry you had to endure what you did. It is heartbreaking to hear about it. You make a good point - and one I tried to emphasize in the article - the family-based method is clearly not right for every family - especially for those with a history of addiction or violence. However, it does offer some new hope for an old intractable problem. Good luck to you!
_______________________ Minneapolis, MN: Since anorexia is a brain disorder, likened to depression and alcoholism for its genetic predetermination to some degree, has there been any research on the use of anti-depressants, mood-stabilizers, and/or anti-psychotics as a way to aid the symptomology of this disorder? Peg Tyre: I haven't found any good long-term studies that suggest that anti-depressants or other psycho-active drugs are helpful. That said, I know clinicians often prescribe anti-depressants/anti-anxiety drugs to anorexics. Often, anorexics suffer from depression or anxiety and I guess some doctors are trying to treat both. _______________________ Ft. Worth, TX: After suffering from and overcoming anorexia, I still face severe anxiety and depression. Is this just because of my genetic makeup? What can I do to combat these issues and feel accepted by my family? (I am currently 19 yrs old and attend a university.) Peg Tyre: There are good studies that show that anorexics often suffer from depression and anxiety as well. Both of those conditions are treatable with the right drugs and a good therapist. Find a good doctor. (University health services should be able to refer you.) _______________________ Indialantic, FL: A followup question, please. Who are the people to contact to get involved in a serious fundraising effort, including corporations that may want to consider sponsorship? Peg Tyre: I think the National Eating Disorder Association is probably your best bet. _______________________ Rochester, NY: Let's say there are two girls. One has been anorexic for 15+ years. The other girl for six years. Both try to get better but always fall back into their old habits. Would the 1st girl be considered chronic and the other one not ready? Or would they both be considered chronic? There is no definition of what constitutes chronic anorexia, so if you could answer my question, it would be great. Peg Tyre: I think they would both be considered chronic. _______________________ Greensboro NC: Why are there only a handful of clinics worldwide to help those with this disease? And why are they so expensive? Peg Tyre: There are more than a handful but you are right, most of them are very, very expensive. It is a difficult disease to treat - many parents end up re-mortgaging the house to get their kids in treatment. I'm surprised more families don't lobby for better coverage from their insurance carriers. _______________________ Peg Tyre: I want to put in a plug here for the ongoing NIMH study which is looking at the role genetics plays in anorexia. If you have anorexia, and you think it might run in your family, and you want to be part of an important study that will cost you nothing and may help future generations - you can go to www.angenetics.org or phone 412-647-9794 to get more information about it. _______________________ Oklahoma City, OK: This is a 9th grade class called Basic Life Skills. We have been studying eating disorders. We wrote questions to ask, here is one: Do anorexics still feel hungry, or do they become immune to the pain of not eating? Peg Tyre: Good question: some say that they are not hungry. In fact, they feel full - one said "like I just ate two Thanksgiving dinners" almost all the time. Others feel hungry but ignore it until their body stops sending them the "I'm hungry" message all the time. _______________________ Philadelphia, PA: I have a friend of the family whose 19-year-old daughter has an eating disorder due to anxiety and compulsive behavior.
The mother's problem is finding adequate care for her age group and then fighting with the insurance companies to pay for an extensive period of time in a facility. Right now she upgraded her insurance to $1200 a month to pay for another 30 days of treatment. This has hit the family hard financially because they have a co-pay on top of this. They plan on taking a second mortgage to pay to keep their daughter well. Is there any help for these kids and families? Peg Tyre: This is such a big problem. Why don't you get in touch with NEDA? They might have resources to help you. There is also a small foundation I've heard of called the Freed Foundation, which may have some $$. _______________________ Silver Spring, MD: The 'Pro-Ana' movement appears to be flourishing amongst various Internet communities, often with about 4-6 new 'members' per day. Given that children are fairly technologically advanced, is there any research on what impact this peer support network is having on treatment? Peg Tyre: I don't know if I'd call it a movement. I guess you are talking about those websites where very sick, delusional anorexics write defiantly about wanting to be thin. What most people don't realize is that for most people, anorexia isn't a lifestyle choice. It is a mental illness, and like alcoholism, it is often characterized by denial. And yes, for many young people (and older as well) denial feeds denial. What most people fail to remember, though, is that these "pro-ana" types are just in the throes of a terrible debilitating disease. _______________________ Rockville, MD: Your article states that "Not only are anorexia's victims younger, they're also more likely to be black, Hispanic or Asian, more likely to be boys, more likely to be middle-aged." What documentation or statistical information do you have to back up this statement? Peg Tyre: There aren't a lot of good surveys on this - but I spoke to about two dozen clinicians around the country. What I found is that their patient base has really changed - younger, less white, sometimes older as well. _______________________ Greenfield IN: I was anorexic and bulimic when I was in middle and high school. I bottomed out at 59 pounds. I got therapy and seemed to be doing better. But as I gained the weight back it almost killed me and I would eat and then feel so guilty that I would force myself to throw up. I still have the urge to throw up every time I eat something. I have stopped eating when I can get away with it and if I can't I want to throw up afterwards. Sometimes I still go to the bathroom, turn on the water and throw up. I don't know what to do and I don't want to tell my boyfriend for fear he will be upset and worry. I can't do that to him. Peg Tyre: Thanks for writing. As you know, eating disorders can be a chronic problem and it sounds like you're still doing battle with yours. You must feel very isolated and alone. Why don't you get back in touch with that therapist - or get in touch with one of the clinics or experts I mentioned earlier. They might be able to help you. Good luck. _______________________ Worcester, MA: I know that these articles on anorexia are focusing on biological predispositions to it. It somewhat bothered me how strong the point was, mostly in the Berrien article, that parents didn't cause the anorexia. It bothers me because it seems that parents could read this and feel relieved of any responsibility and not examine their own behaviors.
I am currently trying to recover from anorexia, which I'm pretty sure surfaced when I was more an adult than a child. And I do believe that I was predisposed to it. However, I have come to see how having a mentally ill sibling and his outbursts toward me and my parents' responses to both him and me really ignited this. It's not meant to blame or take responsibility off of me, but it helps me see that it's not such a shock my anorexia surfaced. So basically, my question to you is: shouldn't parents not just focus on "fixing" the child and seeing the child as the problem, but also examine whether the child is an indication of a larger family problem? Like I said, I'm just afraid these articles will foster misunderstanding and further the stigma that the child is somehow "defective" all on his/her own. Peg Tyre: You raise a really good point here. Eating disorders are often a result of genetic vulnerabilities but there are often environmental triggers that set them off. And families can pull those triggers (heck, they MAKE the triggers). Saying that there may be a genetic component doesn't let families off the hook. The point I want to make is that scientists don't believe this is something that you are choosing to have. And they don't believe that this is something most parents gave you on purpose. And like any serious mental or physical disease, your entire family can play some role in helping you get better. _______________________ Vernal, UT: I am asking several questions. I am devastated. I just started suspecting something was going on with my daughter's eating habits. I went to the grocery store and saw the Newsweek cover yesterday. I bought the magazine. My husband and I read it last night. Now we are sure there is something going on. I am frozen in fear about confronting her and knowing where to go from here. Peg Tyre: Please get in touch with some of the experts and facilities quoted in the story and in this live chat. My thoughts are with you. _______________________ Washington, DC: Is the current treatment environment beginning to adapt to the changing trends mentioned in this article? As a 26-year-old female with anorexia, there appears to be a lack of specialized treatment programs that serve individuals outside of the 'common onset' age/gender group. Do you know of any programs that are specifically serving younger or older individuals, or males with this disorder? Peg Tyre: The children's hospital in Omaha treats younger kids; so does the ED program at Penn State. Remuda Ranch, a residential program, treats younger kids now, too. _______________________ Effingham, IL: How much has the "Hollywood" influence had on the younger children afflicted with anorexia? Since such stars as Lindsay Lohan parade the fact they are thinner, does that say to the younger fans that they should do the same? Peg Tyre: These are not good role models for our children. Do they give them unhealthy ideas about the body? Yes. Do they give kids unhealthy ideas about eating? Yes. Do they cause eating disorders? In some cases yes, but in other cases, the causes are more deep-seated. _______________________ Evanston, IL: Hi Peg. First, congratulations on a fabulous, well-researched and -written story. I'm not given to crying (being a guy), but at several points tears welled up in my eyes. My question is - and I was appalled to read this - why do you and the experts think this is such an intractable condition, with, at 10%, the highest mortality rate of any mental disorder? Peg Tyre: Thanks very much.
I think anorexics are difficult to treat because the disease affects their brain chemistry and, ultimately, their ability to think logically about themselves. (Starvation does that.) Death rate? For all the boasting on the pro-ana websites about it being a lifestyle choice, it is really a miserable life filled with isolation and loneliness and frustration. My heart goes out to the sufferers. _______________________ Alexandria, VA: I nearly died of anorexia in 1995. I recovered only after being sent to Remuda Ranch in Arizona. Now, ten years later, I still fight the disease every single day. I am five months pregnant and wonder if there is any help for recovered anorexics who are pregnant? Gaining weight for the baby has been a constant battle. What are your suggestions? Peg Tyre: There are good support groups online - also a good therapist might help. _______________________ Kansas City, MO: I'm 19 and have struggled with anorexia for eight years. I was first hospitalized when I was 12 and have altogether been in inpatient treatment five times. I got home from treatment two months ago, which I left against medical advice. Now I feel as though I am doing well with food. I eat three meals a day (that are about half of what my dietitian's meal plan for me calls for) and drink two Ensure Plus. Everyone around me is saying that I need to be back inpatient based solely on my weight right now, though, and I'm desperately confused. Is being 5'4" and 90 pounds really that much in need of help? Peg Tyre: I'm not qualified to say what you should weigh, but your doctor is. Part of your disease is not being able to make good judgments about how much food you should eat and what you should weigh. Find professionals you trust. Then trust them. _______________________ New York, NY: Are there any programs you know of that treat boys? Peg Tyre: The children's hospital in Omaha may be able to help you. _______________________ Silver Spring, MD: Please, please recommend Lock and Le Grange's book Help Your Teenager Beat an Eating Disorder - it's a great resource for information on family-based treatment. Laura Collins' book Eating With Your Anorexic and her website www.eatingwithyouranorexic.com are also wonderful resources. Thanks so much for this important article. My 14-year-old daughter recovered using family-based treatment and it is such a joy to see her happy and healthy again. Peg Tyre: Right - if you are interested in the Maudsley Method, please check out Laura Collins' book Eating With Your Anorexic. It is terrific, brave, heartwarming and very helpful in understanding what families of anorexics go through. She also has a website for family support, www.eatingwithyouranorexic.com. AP: Pro-anorexia movement has cult-like appeal Experts alarmed by Web sites that promote self-starvation http://www.msnbc.msn.com/id/8045047/ Updated: 1:38 p.m. ET May 31, 2005 CHICAGO - They call her "Ana." She is a role model to some, a goddess to others - the subject of drawings, prayers and even a creed. She tells them what to eat and mocks them when they don't lose weight. And yet, while she is a very real presence in the lives of many of her followers, she exists only in their minds. Ana is short for anorexia, and - to the alarm of experts - many who suffer from the potentially fatal eating disorder are part of an underground movement that promotes self-starvation and, in some cases, has an almost cult-like appeal.
Followers include young women and teens who wear red Ana bracelets and offer one another encouraging words of "thinspiration" on Web pages and blogs. They share tips for shedding pounds and faithfully report their "cw" and "gw" - current weight and goal weight, which often falls into the double digits. They also post pictures of celebrity role models, including teen stars Lindsay Lohan and Mary-Kate Olsen, who last year set aside the acting career and merchandising empire she shares with her twin sister to seek help for her own eating disorder. "Put on your Ana bracelet and raise your skinny fist in solidarity!" one "pro-Ana" blogger wrote shortly after Olsen entered treatment. The movement has flourished on the Web and eating disorder experts say that, despite attempts to limit Ana's online presence, it has now grown to include followers - many of them young - in many parts of the world. No one knows just how many of the estimated 8 million to 11 million Americans afflicted with eating disorders have been influenced by the pro-Ana movement. But experts fear its reach is fairly wide. A preliminary survey of teens who've been diagnosed with eating disorders at the Lucile Packard Children's Hospital at Stanford University, for instance, found that 40 percent had visited Web sites that promote eating disorders. "The more they feel like we - 'the others' - are trying to shut them down, the more united they stand," says Alison Tarlow, a licensed psychologist and supervisor of clinical training at the Renfrew Center in Coconut Creek, Fla., a residential facility that focuses on eating disorders. Experts say the Ana movement also plays on the tendency people with eating disorders have toward "all or nothing thinking." "When they do something, they tend to pursue it to the fullest extent. In that respect, Ana may almost become a religion for them," says Carmen Mikhail, director of the eating disorders clinic at Texas Children's Hospital in Houston. She and others point to the "Ana creed," a litany of beliefs about control and starvation, that appears on many Web sites and blogs. At least one site encourages followers to make a vow to Ana and sign it in blood. People with eating disorders who've been involved in the movement confirm its cult-like feel. "People pray to Ana to make them skinny," says Sara, a 17-year-old in Columbus, Ohio, who was an avid organizer of Ana followers until she recently entered treatment for her eating disorder. She spoke on the condition that her last name not be used. 'Helping girls kill themselves' Among other things, Sara was the self-proclaimed president of Beta Sigma Kappa, dubbed the official Ana sorority and "the most talked about, nearly illegal group" on a popular blog hosting service that Sara still uses to communicate with friends. She also had an online Ana "boot camp" and told girls what they could and couldn't eat. "I guess I was attention-starved," she now says of her motivation. "I really liked being the girl that everyone looked up to and the one they saw as their 'thinspiration.' "But then I realized I was helping girls kill themselves." For others, Ana is a person - a voice that directs their every move when it comes to food and exercise. "She's someone who's perfect. It's different for everyone - but for me, she's someone who looks totally opposite to the way I do," says Kasey Brixius, a 19-year-old college student from Hot Springs, S.D.
To Brixius - athletic with brown hair and brown eyes - Ana is a wispy, blue-eyed blonde. "I know I could never be that," she says, "but she keeps telling me that if I work hard enough, I CAN be that." Treatment often fails Dr. Mae Sokol often treats young patients in her Omaha, Neb., practice who personify their eating disorder beyond just Ana. To them, bulimia is "Mia." And an eating disorder often becomes "Ed." "A lot of times they're lonely and they don't have a lot of friends. So Ana or Mia become their friend. Or Ed becomes their boyfriend," says Sokol, who is director of the eating disorders program run by Children's Hospital and Creighton University. In the end, treatment can include writing "goodbye" letters to Ana, Mia and Ed in order to gain control over them. But it often takes a long time to get to that point - and experts agree that, until someone with an eating disorder wants to help themselves, treatment often fails. Tarlow, at the Renfrew Center, says it's also easy for patients to fall back into the online world of Ana after they leave treatment. "Unfortunately," she says, "with all people who are in recovery, it's so much about who you surround yourself with." Some patients, including Brixius, the 19-year-old South Dakotan, have had trouble finding counselors who truly understand their struggle with Ana. "I'd tell them about Ana and how she's a real person to me. And they'd just look at me like I'm nuts," Brixius says of the counselors she's seen at college and in her hometown. "They wouldn't address her ever again, so it got very frustrating. "Half the time I'm, like, 'You know what? I give up.'" Other days, she's more hopeful. "I gotta snap out of this eventually if I want to have kids and get a job. One day, I'll get to that point," she says, pausing. "But I'll always obsess about food." From checker at panix.com Thu Dec 8 02:20:55 2005 From: checker at panix.com (Premise Checker) Date: Wed, 7 Dec 2005 21:20:55 -0500 (EST) Subject: [Paleopsych] NYT: Snared in the Web of a Wikipedia Liar Message-ID: Snared in the Web of a Wikipedia Liar http://www.nytimes.com/2005/12/04/weekinreview/04seelye.html [An excellent summary of the issues. What the article didn't say is that votes can be taken on articles on issues that others would prefer not see the light of day, such as Jewish ethnocentrism. I should think it unlikely that Jews could have avoided assimilating themselves out of existence without being ethnocentric, and indeed there have been books by Jews urging their co-religionists to have more children. The suitability of the article was discussed at length on a Wikipedia forum and was nixed, on the grounds that the topic should be handled in a general article on ethnocentrism. [Regarding Mr. Seigenthaler's alleged role in the Kennedy assassination, I would never take this to be an established fact and may have come to doubt the rest of the article as well. But, looking at it just now, the Kennedy reference having been excised, I see no reason to doubt its facts. [What's really great is that I can get a good summary of reigning theories. I failed to find an article that answered a question I often ask, namely why there are emotions, but I just glanced at some entries. Nor was I successful in getting a rundown of the various theories of elites. Many articles there, and one of them may do the trick. But there are other cases where Wikipedia had just what I wanted. [On the other hand, standard reference sources have their biases, too.
I praise Jimbo for his work!] Rewriting History By KATHARINE Q. SEELYE ACCORDING to Wikipedia, the online encyclopedia, John Seigenthaler Sr. is 78 years old and the former editor of The Tennessean in Nashville. But is that information, or anything else in Mr. Seigenthaler's biography, true? The question arises because Mr. Seigenthaler recently read about himself on Wikipedia and was shocked to learn that he "was thought to have been directly involved in the Kennedy assassinations of both John and his brother Bobby." "Nothing was ever proven," the biography added. Mr. Seigenthaler discovered that the false information had been on the site for several months and that an unknown number of people had read it, and possibly posted it on or linked it to other sites. If any assassination was going on, Mr. Seigenthaler (who is 78 and did edit The Tennessean) wrote last week in an op-ed article in USA Today, it was of his character. The case triggered extensive debate on the Internet over the value and reliability of Wikipedia, and more broadly, over the nature of online information. Wikipedia is a kind of collective brain, a repository of knowledge, maintained on servers in various countries and built by anyone in the world with a computer and an Internet connection who wants to share knowledge about a subject. Literally hundreds of thousands of people have written Wikipedia entries. Mistakes are expected to be caught and corrected by later contributors and users. The whole nonprofit enterprise began in January 2001, the brainchild of Jimmy Wales, 39, a former futures and options trader who lives in St. Petersburg, Fla. He said he had hoped to advance the promise of the Internet as a place for sharing information. It has, by most measures, been a spectacular success. Wikipedia is now the biggest encyclopedia in the history of the world. As of Friday, it was receiving 2.5 billion page views a month, and offering at least 1,000 articles in 82 languages. The number of articles, already close to two million, is growing by 7 percent a month. And Mr. Wales said that traffic doubles every four months. Still, the question of Wikipedia, as of so much of what you find online, is: Can you trust it? And beyond reliability, there is the question of accountability. Mr. Seigenthaler, after discovering that he had been defamed, found that his "biographer" was anonymous. He learned that the writer was a customer of BellSouth Internet, but that federal privacy laws shield the identity of Internet customers, even if they disseminate defamatory material. And the laws protect online corporations from libel suits. He could have filed a lawsuit against BellSouth, he wrote, but only a subpoena would compel BellSouth to reveal the name. In the end, Mr. Seigenthaler decided against going to court, instead alerting the public, through his article, "that Wikipedia is a flawed and irresponsible research tool." Mr. Wales said in an interview that he was troubled by the Seigenthaler episode, and noted that Wikipedia was essentially in the same boat. "We have constant problems where we have people who are trying to repeatedly abuse our sites," he said. Still, he said, he was trying to make Wikipedia less vulnerable to tampering. He said he was starting a review mechanism by which readers and experts could rate the value of various articles. The reviews, which he said he expected to start in January, would show the site's strengths and weaknesses and perhaps reveal patterns to help them address the problems. 
In addition, he said, Wikipedia may start blocking unregistered users from creating new pages, though they would still be able to edit them. The real problem, he said, was the volume of new material coming in; it is so overwhelming that screeners cannot keep up with it. All of this struck close to home for librarians and researchers. On an electronic mailing list for them, J. Stephen Bolhafner, a news researcher at The St. Louis Post-Dispatch, wrote, "The best defense of the Wikipedia, frankly, is to point out how much bad information is available from supposedly reliable sources." Jessica Baumgart, a news researcher at Harvard University, wrote that there were librarians voluntarily working behind the scenes to check information on Wikipedia. "But, honestly," she added, "in some ways, we're just as fallible as everyone else in some areas because our own knowledge is limited and we can't possibly fact-check everything." In an interview, she said that her rule of thumb was to double-check everything and to consider Wikipedia as only one source. "Instead of figuring out how to 'fix' Wikipedia - something that cannot be done to our satisfaction," wrote Derek Willis, a research database manager at The Washington Post, who was speaking for himself and not The Post, "we should focus our energies on educating the Wikipedia users among our colleagues." Some cyberexperts said Wikipedia already had a good system of checks and balances. Lawrence Lessig, a law professor at Stanford and an expert in the laws of cyberspace, said that contrary to popular belief, true defamation was easily pursued through the courts because almost everything on the Internet was traceable and subpoenas were not that hard to obtain. (For real anonymity, he advised, use a pay phone.) "People will be defamed," he said. "But that's the way free speech is. Think about the gossip world. It spreads. There's no way to correct it, period. Wikipedia is not immune from that kind of maliciousness, but it is, relative to other features of life, more easily corrected." Indeed, Esther Dyson, editor of Release 1.0 and a longtime Internet analyst, said Wikipedia may, in that sense, be better than real life. "The Internet has done a lot more for truth by making things easier to discuss," she said. "Transparency and sunlight are better than a single point of view that can't be questioned." For Mr. Seigenthaler, whose biography on Wikipedia has since been corrected, the lesson is simple: "We live in a universe of new media with phenomenal opportunities for worldwide communications and research, but populated by volunteer vandals with poison-pen intellects." From checker at panix.com Thu Dec 8 02:20:59 2005 From: checker at panix.com (Premise Checker) Date: Wed, 7 Dec 2005 21:20:59 -0500 (EST) Subject: [Paleopsych] Stay Free: Mark Crispin Miller on conspiracies, media, and mad scientists Message-ID: Mark Crispin Miller on conspiracies, media, and mad scientists http://www.stayfreemagazine.org/archives/19/mcm.html [I had included this at the end of a posting on the theft of the 2004 election. But this is so thought-provoking that I'm sending it out separately. The author, being a "leftist," of course does not see the statist propaganda that underlies public education, propaganda that is so relentless and continuous that it is not even noticed as such. [Jacques Ellul's _Propaganda: The Forming of Men's Attitudes_ (1962, English translation, 1965) should be reread, for it argued the necessity that propaganda be relentless. 
He was speaking more specifically of the propaganda for "The American Way." We all know the ironic Depression-era photograph of men and women in bread lines underneath a huge poster with a happy couple and a car that read "There is no way like the American Way." But why beat a drum for what is obviously beneficial (which it wasn't during the Depression), and why did that propaganda continue through the Eisenhower years? And why is there propaganda to "celebrate diversity," whose stated benefits include only ethnic cooking and folk dancers jumping up and down, when people go to ethnic restaurants on their own initiative without any prompting whatsoever?] Interview by Carrie McLaren | [8]Issue #19 After years of dropping Mark Crispin Miller's name in Stay Free!, I figured it was time to interview him. Miller is, after all, one of the sharpest thinkers around. His writings on television predicted the cult of irony-or whatever you call it when actual Presidential candidates mock themselves on Saturday Night Live, when sitcoms ridicule sitcoms, and when advertisements attack advertising. More recently, he has authored The Bush Dyslexicon, aided by his humble and ever-devoted assistant (me). Miller works at New York University in the Department of Media Ecology. Though he bristles at being called an academic, Miller is exactly the sort of person who should be leading classrooms. He's an excellent speaker, with a genius for taking cultural products-be they Jell-O commercials or George W. Bush press conferences-and teasing out hidden meanings. (He's also funny, articulate, and knows how to swear.) I talked to Mark at his home in November, between NPR appearances and babysitting duty. He is currently writing about the Marlboro Man for American Icons, a Yale University Press series that he also happens to be editing. His book Mad Scientists: Paranoid Delusion and the Craft of Propaganda (W. W. Norton) is due out in 2004.-CM STAY FREE: Let's start with a simple one: Why are conspiracy theories so popular? MCM: People are fascinated by the fundamental evil that seems to explain everything. Lately, this is why we've had the anomaly of, say, Rupert Murdoch's Twentieth Century Fox releasing films that feature media moguls as villains out to rule the world-villains much like Rupert Murdoch. Who's a bigger conspirator than he is? And yet he's given us The X-Files. Another example: Time Warner released Oliver Stone's JFK, that crackpot-classic statement of the case that American history was hijacked by a great cabal of devious manipulators. It just so happens that Stone himself, with Time Warner behind him, was instrumental in suppressing two rival projects on the Kennedy assassination. These are trivial examples of a genuine danger, which is that those most convinced that there is an evil world conspiracy tend to be the most evil world conspirators. STAY FREE: Because they know what's inside their own heads? MCM: Yes and no. The evil that they imagine is inside their heads-but they can't be said to know it, at least not consciously. What we're discussing is the tendency to paranoid projection. Out of your own deep hostility you envision a conspiracy so deep and hostile that you're justified in using any tactics to shatter it. If you look at those who have propagated the most noxious doctrines of the twentieth century, you will find that they've been motivated by the fierce conviction that they have been the targets of a grand conspiracy against them.
Hitler believed he was fighting back, righteously, against "the Jewish world conspiracy." [See pp. 30-31] Lenin and Stalin both believed they were fighting back against the capitalist powers-a view that had some basis in reality, of course, but that those Bolsheviks embraced to an insane degree. (In 1941, for example, Stalin actually believed that England posed a greater danger to the Soviet Union than the Nazis did.) We see the same sort of paranoid projection among many of the leading lights of our Cold War-the first U.S. Secretary of Defense, James Forrestal, who was in fact clinically insane; the CIA's James Angleton; Richard Nixon; J. Edgar Hoover; Frank Wisner, who was in charge of the CIA's propaganda operations worldwide. Forrestal and Wisner both committed suicide because they were convinced the Communists were after them. Now, there was a grain of truth to this since the Soviet Union did exist and it was a hostile power. But it wasn't on the rise, and it wasn't trying to take over the world, and it certainly wasn't trying to destroy James Forrestal personally. We have to understand that there was just as much insanity in our own government as there was with the Nazis and the Bolsheviks. This paranoid dynamic did not vanish when the Cold War ended. The U.S. is now dominated, once again, by rightists who believe themselves besieged. And the same conviction motivates Osama bin Laden and his followers. They see themselves as the victims of an expansionist Judeo-Christianity. STAY FREE: Al Qaeda is itself a conspiracy. MCM: Yes. We have to realize that the wildest notions of a deliberate plot are themselves tinged with the same dangerous energy that drives such plots. What we need today, therefore, is not just more alarmism, but a rational appraisal of the terrorist danger, a clear recognition of our own contribution to that danger, and a realistic examination of the weak spots in our system. Unfortunately, George W. Bush is motivated by an adolescent version of the same fantasy that drives the terrorists. He divides the whole world into Good and Evil, and has no doubt that God is on his side-just like bin Laden. So how can Bush guide the nation through this danger, when he himself sounds dangerous? How can he oversee the necessary national self-examination, when he's incapable of looking critically within? In this sense the media merely echoes him. Amid all the media's fulminations against al Qaeda, there has been no sober accounting of how the FBI and CIA screwed up. Those bureaucracies have done a lousy job, but that fact hasn't been investigated because too many of us are very comfortably locked into this hypnotic narrative of ourselves as the good victims and the enemy as purely evil. STAY FREE: There's so much contradictory information out there. Tommy Thompson was on 60 Minutes the other night saying that we were prepared for biological warfare, that there was nothing to worry about. Yet The New York Times and The Wall Street Journal have quoted experts saying the exact opposite. Do you think this kind of confusion contributes to conspiratorial thinking? I see some conspiratorial thinking as a normal function of getting along in the world. When, on September 11th, the plane in Pennsylvania went down, there was lots of speculation that the U.S. military shot it down. MCM: Which I tend to think is true, by the way. I've heard from some folks in the military that that plane was shot down. STAY FREE: But we have no real way of knowing, no expertise. 
MCM: Yes, conspiratorial thinking is a normal response to a world in which information is either missing or untrustworthy. I think that quite a few Americans subscribe to some pretty wild notions of what's going on. There's nothing new in this, of course. There's always been a certain demented plurality that's bought just about any explanation that comes along. That explains the centuries-old mythology of anti-Semitism. There will always be people who believe that kind of thing. To a certain extent, religion itself makes people susceptible to such theorizing. STAY FREE: How so? MCM: Because it tends to propagate that Manichean picture of the universe as split between the good people and "the evil-doers." Christianity has spread this vision-even though it's considered a heresy to believe that evil is an active force in God's universe. According to orthodox Christianity, evil is not a positive force but the absence of God. STAY FREE: A lot of religious people believe what they want to believe, anyway. Christianity is negotiable. MCM: Absolutely. But when it comes to the paranoid world view, all ethical and moral tenets are negotiable, just as all facts are easily disposable. Here we need to make a distinction. On the one hand, there have been, and there are, conspiracies. Since the Cold War, our government has been addicted to secrecy and dangerously fixated on covert action all around the world. So it would be a mistake to dismiss all conspiracy theory. At the same time, you can't accept everything-that's just as naïve and dangerous as dismissing everything. Vincent Bugliosi, who wrote The Betrayal of America, is finishing up a book on the conspiracy theories of the Kennedy assassination. He has meticulously gone through the case and has decided that the Warren Report is right. Now, Bugliosi is no knee-jerk debunker. He recognizes that a big conspiracy landed George W. Bush in the White House. STAY FREE: So I take it you don't buy the conspiracy theories about JFK? MCM: I think there's something pathological about the obsession with JFK's death. Some students of the case have raised legitimate questions, certainly, but people like Stone are really less concerned about the facts than with constructing an idealized myth. STAY FREE: Critics of the war in Afghanistan have called for more covert action as an alternative to bombing. That's an unusual thing for the left to be advocating, isn't it? MCM: It is. On the one hand, any nation would appear to be within its rights to try to track down and kill these mass murderers. I would personally prefer to see the whole thing done legally, but that may not be realistic. So, if it would work as a covert program without harm to any innocents, I wouldn't be against it. But that presumes a level of right-mindedness and competence that I don't see in our government right now. I don't think that we can trust Bush/Cheney to carry out such dirty business. Because they have a paranoid world-view-just like the terrorists-they must abuse their mandate to "do what it takes" to keep us safe. By now they have bombed more innocents than perished in the World Trade Center, and they're also busily trashing many of our rights. The "intelligence community" itself, far from being chastened by their failure, has used the great disaster to empower itself. That bureaucracy has asked for still more money, but that request is wholly disingenuous. They didn't blow it because they didn't have enough money-they blew it because they're inept!
They coasted along for years in a cozy symbiosis with the Soviet Union. The two superpowers needed one another to justify all this military and intelligence spending, and it made them complacent. Also, they succumbed to the fatal tendency to emphasize technological intelligence while de-emphasizing human intelligence. STAY FREE: Yeah, the Green Berets sent to Afghanistan are equipped with all sorts of crazy equipment. They each wear gigantic puffy suits with pockets fit to carry a GPS, various hi-tech gizmos, and arms. MCM: That's just terrific. Meanwhile, the terrorists used boxcutters! STAY FREE: Did you see that the U.S. Army has asked Hollywood to come up with possible terrorist scenarios to help prepare the military for attack? MCM: Yeah, it sent a chill right through me. If that's what they're reduced to doing to protect us from the scourge of terrorism, they're completely clueless. They might as well be hiring psychics-which, for all we know, they are! STAY FREE: The Bush administration also asked Al Jazeera, the Arab TV station, to censor its programming. MCM: Right. And, you know, every oppressive move we make, from trying to muzzle that network to dropping bombs all over Afghanistan, is like a gift to the terrorists. Al Jazeera is the only independent TV network in the Arab world. It has managed to piss off just about every powerful interest in the Middle East, which is a sign of genuine independence. In 1998, the network applied for membership in the Arab Press Union, and the application was rejected because Al Jazeera refused to abide by the stricture that it would do everything it could to champion "Arab brotherhood." STAY FREE: What do you think our government should have done instead of bombing? MCM: I rather wish they had responded with a little more imagination. Doing nothing was not an option. But bombing the hell out of Afghanistan was not the only alternative-and it was a very big mistake, however much it may have gratified a lot of anxious TV viewers in this country. By bombing, the U.S. quickly squandered its advantage in the propaganda war. We had attracted quite a lot of sympathy worldwide, but that lessened markedly once we killed Afghan civilians by the hundreds, then the thousands. Americans have tended not to want to know about those foreign victims. But elsewhere in the world, where 9/11 doesn't resonate as much, the spectacle of all those people killed by us can only build more sympathy for our opponents. That is, the bombing only helps the terrorists in the long run. And so has our government's decision to define the 9/11 crimes as acts of war. That definition has served only to exalt the perpetrators, who should be treated as mass murderers, not as soldiers. But the strongest argument against our policy is this-that it is exactly what the terrorists were hoping for. Eager to accelerate the global split between the faithful and the infidels, they wanted to provoke us into a response that might inflame the faithful to take arms against us. I think we can agree that, if they wanted it, we should have done something else. STAY FREE: You've written that, before the Gulf War, Bush the elder's administration made the Iraqi army sound a lot more threatening than it really was. Bush referred to Iraq's scanty, dwindling troops as the "elite Republican guard." Do you think that kind of exaggeration could happen with this war?
MCM: No, because the great given in this case is that we are rousing ourselves from our stupor and dealing an almighty and completely righteous blow against those who have hurt us. Now we have to seem invincible, whereas ten years ago, they wanted to make us very scared that those Iraqi troops might beat us. By terrorizing us ahead of time, the Pentagon and White House made our rapid, easy victory seem like a holy miracle. STAY FREE: Let's get back to conspiracy theories. Do people ever call you a conspiracy theorist? MCM: Readers have accused me of paranoia. People who attacked me for The Bush Dyslexicon seized on the fact that my next book is subtitled Paranoid Delusion and the Craft of Propaganda, and they said, "He's writing about himself!" But I don't get that kind of thing often because most people see that there's a lot of propaganda out there. I don't write as if people are sitting around with sly smiles plotting evil-they're just doing their jobs. The word propaganda has an interesting history, you know. It was coined by the Vatican. It comes from propagare, which means grafting a shoot onto a plant to make it grow. It's an apt derivation, because propaganda only works when there is fertile ground for it. History's first great propagandist was St. Paul, who saw himself as bringing the word of God to people who needed to hear it. The word wasn't pejorative until the First World War, when the Allies used it to refer to what the Germans did, while casting their own output as "education," or "information." There was a promising period after the war when it got out that our government had done a lot of lying. The word propaganda came to connote domestic propaganda, and there were a number of progressive efforts to analyze and debunk it. But with the start of World War II, propaganda analysis disappeared. Since we were fighting Nazi propaganda with our own, it wasn't fruitful to be criticizing propaganda. STAY FREE: I read that the word "propaganda" fell out of fashion among academics around that time, so social scientists started referring to their work as "communications." It was no longer politically safe to study how to improve propaganda. MCM: Experts in propaganda started doing "communications" studies after the war. Since then, "communication" has been the most common euphemism used for "propaganda," as in "political communication." There's also "psychological warfare" and, of course, "spin." The Cold War was when "propaganda" became firmly linked to Communism. "Communist propaganda" was like "tax-and-spend Democrats" or "elite Republican guard." The two elements were inseparable. If the Communists said it, it was considered propaganda; and if it was propaganda, there were Communists behind it. Only now that the Cold War is over is it possible to talk about U.S. propaganda without running the risk of people looking at you funny. The word does still tend to be used more readily in reference to liberals or Democrats. The right was always quick to charge Bill Clinton-that leftist!-with doing propaganda. In fact, his right-wing enemies, whose propaganda skills were awesome, would routinely fault him for his "propaganda." You never heard anybody say Ronald Reagan was a master propagandist, though. He was "the Great Communicator." STAY FREE: Talk a bit about how conspiracy is used to delegitimize someone who's doing critical analysis. I've heard you on TV saying, "I don't mean to sound like a conspiracy theorist, but . . . " People even do this in regular conversation.
A friend of mine was telling me about going to Bush's inauguration in D.C. He was stunned that none of the protests were covered by the media but prefaced his comments by saying, "I don't want to sound like a conspiracy theorist, but [the press completely ignored the protests]." It's almost as if people feel the need to apologize if they don't follow some party line. MCM: I wouldn't say that, because there are people who are conspiracy theorists. And I think the emphasis there should not be on the conspiracy but on the theory. A theorist is a speculator. It's always much easier to construct a convincing conspiracy theory if you don't bother looking at reality. The web is filled with stuff like this. So, if you want to cover yourself, you should say something like: "I don't subscribe to every crackpot notion that comes along, but in this case there's something funny going on-and here's the evidence." It really is a rhetorical necessity. Especially when you're on TV. STAY FREE: Maybe it's more of a necessity, too, when you're talking about propaganda. MCM: I'll tell you something: it's necessary when you're talking about real conspiracies. You know who benefited big time from the cavalier dismissal of certain conspiracies? The Nazis. The Nazis were expert at countering true reports of their atrocities by recalling the outrageous lies the Allies had told about the Germans back in World War I. The Allies had spread insane rumors about Germans bayoneting Belgian babies, and crucifying Canadian soldiers on barn doors, and on and on. So, when it first got out that the Nazis were carrying out this horrible scheme, their flacks would roll their eyes and say, "Oh yeah-just like the atrocity stories we heard in WWI, right?" STAY FREE: I once attended a lecture on Channel One [an advertising-funded, in-school "news" program], where a professor dissected several broadcasts. He talked about how Channel One stories always emphasize "oneness" and individuality. Collective efforts or activism are framed in the negative sense, while business and governmental sources are portrayed positively and authoritatively. Now, someone listening to this lecture might say, "That's just your reading into it. You sound conspiratorial." So where do you think this sort of media analysis or literary analysis and conspiracy-mongering intersect? MCM: That's a very good question. For years I've encountered the same problem as a professor. You've got to make the point that any critical interpretation has to abide by the rules of evidence-it must be based on a credible argument. If you think I'm "reading into it," tell me where my reading's weak. Otherwise, grant that, since the evidence that I adduce supports my point, I might be onto something. Where it gets complicated with propaganda is around the question of intention, because an intention doesn't have to be entirely conscious. The people who make ads, for example, are embedded in a larger system; they've internalized its imperatives. So they may not be conscious intellectually of certain moves they make. If you said to somebody at Channel One, "You're hostile to the collective and you insult the individual," he'd say, reasonably, "What are you talking about? I'm just doing the news." So you have to explain what ideology is. I'm acutely sensitive to this whole problem.
When I teach advertising, for example, I proceed by using as many examples as possible, to show that there is a trend, whatever any individual art director or photographer might insist about his or her own deliberate aims. Take liquor advertising, which appeals to the infant within every alcoholic by associating drink with mother's milk. This is clearly a deliberate strategy because we see it in ad after ad-some babe holding a glass of some brew right at nipple level. She's invariably small-breasted so that the actual mammary does not upstage the all-important product. If that's an accident, it's a pretty amazing accident. Now, does this mean that the ad people sit down and study the pathology of alcoholics, or is it something they've discovered through trial and error? My point is that it ultimately makes no difference. We see it over and over-and if I can show you that, according to experts, this visual association speaks to a desire in alcoholics, a regressive impulse, then you have to admit I have a point. Of course, there are going to be people who'll accuse you of "reading into it" no matter what you say because they don't want to hear the argument. This is where we come up against the fundamental importance of anti-intellectualism on the right. They hate any kind of explanation. They feel affronted by the very act of thinking. I ran into this when I promoted The Bush Dyslexicon on talk shows-which I could do before 9/11. Bush's partisans would fault me just for scrutinizing what he'd said. STAY FREE: I recently read Richard Hofstadter's famous essay about political paranoia. He argued that conspiracy is not specific to any culture or country. Would you agree with that, or do you think there is something about America that makes it particularly hospitable to conspiracy theories? MCM: Well, there's a lot of argument about this. There's a whole school of thought that holds that England's Civil War brought about a great explosion of paranoid partisanship. Bernard Bailyn's book The Ideological Origins of the American Revolution includes a chapter on the peculiar paranoid orientation of the American revolutionaries. But I think paranoia is universal. It's an eternal, regressive impulse, and it poses a special danger to democracy. STAY FREE: Why, specifically, is it dangerous to democracy? MCM: Because democracies have always been undone by paranoia. You cannot have a functioning democracy where everyone is ruled by mutual distrust. A democratic polity requires a certain degree of rationality, a tolerance of others, and a willingness to listen to opposing views without assuming people are out to kill you. There's a guy named Eli Sagan who wrote a book on the destructive effect of paranoia on Athenian democracy. And I think that the American experiment may also fail; America has always come closest to betraying its founding principles at moments of widespread xenophobic paranoia. In wartime, people want to sink to their knees and feel protected. They give up thinking for themselves-an impulse fatal to democracy but quite appropriate for fascism and Stalinism. The question now is whether paranoia can remain confined to that thirty-or-so percent of the electorate who are permanently crazy. That's what Nixon himself said, by the way-that "one third of the American electorate is nuts." About a third of the German people voted for the Nazis. I think there's something to that. It's sort of a magic number.
STAY FREE: Come to think of it, public opinion polls repeatedly show that 70% of the public are skeptical of advertising claims. I guess that means about 30% believe anything. MCM: Wow. I wonder if that lack of skepticism toward advertising correlates in any way with this collective paranoia. That would be interesting to know. STAY FREE: Well, during the Gulf War, a market research firm conducted a study that found that the more hawkish people were, the more likely they were to be rampant consumers. Warmongers, in other words, consumed more than peaceniks. Why do you think these two reactions might be correlated? MCM: One could argue that this mild, collective paranoia often finds expression in promiscuous consumption. Eli Sagan talks about the "paranoidia of greed" as well as the "paranoidia of domination." Both arise out of suspicion of the enemy. You either try to take over all his territory forcibly, or you try to buy everything up and wall yourself within the fortress of your property. STAY FREE: Those two reactions also practically dominate American culture. When people from other countries think of America, they think of us being materialistic and violent. We buy stuff and kill people. Do you think there's any positive form of paranoia? Any advantage to it? MCM: No, I don't, because paranoids have a fatal tendency to look for the enemy in the wrong place. James Angleton of the CIA was so very destructive because he was paranoid. I mean, he should have been in a hospital-and I'm not being facetious. Just like James Forrestal, our first defense secretary. These people were unable to protect themselves, much less serve their country. I think paranoia is only useful if you're in combat and need to be constantly ready to kill. Whether it's left-wing or right-wing paranoia, the drive is ultimately suicidal. STAY FREE: Our government is weak compared to the corporations that run our country. What role do you see for corporations in the anti-terrorist effort? MCM: Well, corporations do largely run the country, and yet we can't trust them with our security. The private sector wants to cut costs, so you don't trust them with your life. Our welfare is not uppermost in their minds; our money is. So what role can the corporations play? STAY FREE: They can make the puffy suits! MCM: The puffy suits and whatever else the Pentagon claims to need. Those players have a vested interest in eternal war. STAY FREE: Did you read that article about Wal-Mart? After September 11, sales shot up for televisions, guns, and canned goods. MCM: Paranoia can be very good for business. STAY FREE: Have you ever watched one of those television news shows that interpret current events in terms of Christian eschatology? They analyze everyday events as signs of the Second Coming. MCM: No. I bet they're really excited now, though. I wonder what our president thinks of that big Happy Ending, since he's a born-again. You know, Reagan thought it was the end times. STAY FREE: But those are minority beliefs, even among born-again Christians. MCM: It depends on what you mean by "minority." Why are books by Tim LaHaye selling millions? He's a far-right fundamentalist, co-author of a series of novels all about the end times-the Rapture and so on. And Pat Robertson's best-seller, The New World Order, sounds the same apocalyptic note. STAY FREE: He's crazy. He can't really believe all that stuff. MCM: No, he's crazy and therefore he can believe that stuff.
His nurse told him years ago that he was showing symptoms of paranoid schizophrenia. STAY FREE: I recently read a chapter from Empire of Conspiracy-an intelligent book about conspiracy theories. But it struck me that the author considered Vance Packard, who wrote The Hidden Persuaders, a conspiracy theorist. Packard's book was straightforward journalism. He interviewed advertising psychologists and simply reported their claims. There was very little that was speculative about it. MCM: The author should have written about Subliminal Seduction and the other books by Wilson Bryan Key. STAY FREE: Exactly! That nonsense about subliminal advertising was a perfect example of paranoid conspiracy. Yet he picked on Vance Packard, who conducted his research as any good journalist would. MCM: Again, we must distinguish between idle, lunatic conspiracy theorizing, and well-informed historical discussion. There have been quite a few conspiracies in U.S. history-and if you don't know that, you're either ignorant or in denial. Since 1947, for example, we have conspiratorially fomented counter-revolutions and repression the world over. That's not conspiracy theory. That's fact-which is precisely why it meets the charge of speculation. How better to discredit someone than to say she's chasing phantoms-or that she has an axe to grind? When James Loewen's book Lies Across America was reviewed in The New York Times, for example, the reviewer said it revealed an ideological bias because it mentions the bombing of civilians in Vietnam. Loewen wrote back a killer letter to the editor pointing out that he had learned about those bombings from The New York Times. Simply to mention such inconvenient facts is to be dismissed as a wild-eyed leftist. When someone tells me I'm conspiracy-mongering I usually reply, "It isn't a conspiracy, it's just business as usual." STAY FREE: That's like what Noam Chomsky says about his work: "This is not conspiracy theory, it is institutional analysis." Institutions do what is necessary to assure the survival of the institution. It's built into the process. MCM: That's true. There's a problem with Chomsky's position, though-and I say this with all due respect because I really love Chomsky. When talking about U.S. press coverage, Chomsky will say that reporters have internalized the bias of the system. He says this, but the claim is belied by the moralistic tone of Chomsky's critique-he charges journalists with telling "lies" and lying "knowingly." There is an important contradiction here. Either journalists believe they're reporting truthfully, which is what Chomsky suggests when he talks about internalizing institutional bias. Or they're lying-and that, I think, is what Chomsky actually believes because his prose is most energetic when he's calling people liars. One of the purposes of my next book, Mad Scientists, will be to suggest that all the best-known and most edifying works on propaganda are slightly flawed by their assumption that the propagandist is a wholly rational, detached, and calculating player. Most critics-not just Chomsky, but Jacques Ellul and Hannah Arendt, among others-tend to project their own rationality onto the propagandist. But you can't study the Nazis or the Bolsheviks or the Republicans without noticing the crucial strain of mad sincerity that runs throughout their work, even at its most cynical. STAY FREE: You have written that even worse than the possibility that a conspiracy exists may be the possibility that no conspiracy is needed.
What do you mean by that? MCM: The fantasy of one big, bad cabal out there is terrifying but also comforting. Not only does it help make sense of a bewildering reality, but it also suggests a fairly neat solution. If we could just find all the members of the network and kill them, everything would be okay. It's more frightening to me that there are no knowing authors. No one is at the top handling the controls. Rather, the system is on auto-pilot, with cadres just going about their business, vaguely assuming that they're doing good and telling truths-when in fact they are carrying out what could objectively be considered evil. What do you do, then? Who is there to kill? How do you expose the perpetrators? Whom do you bring before the bar of justice-and who believes in "justice"? And yet I do think that a lot of participants in this enterprise know they're doing wrong. One reason people who work for the tobacco companies make so much money, for example, is to still the voice of conscience and make them feel like they're doing something valuable. But the voice is very deeply buried. Ultimately, though, it is the machine itself that's in command, acting through those workers. They let themselves become the media's own media-the instruments whereby the system does its thing. I finally learned this when I studied the Gulf War, or rather, the TV spectacle that we all watched in early 1991. There was a moment on the war's first night when Ron Dellums was just about to speak against the war. He was on the Capitol steps, ready to be interviewed on ABC-and then he disappeared. They cut to something else. I was certain that someone, somewhere, had ordered them to pull the plug because the congressman was threatening to spoil the party. But it wasn't that at all. We looked into it and found the guy who'd made that decision, which was a split-second thing based on the gut instinct that Dellums' comments would make bad TV. So that was that-a quick, unconscious act of censorship, effected not by any big conspiracy but by one eager employee. No doubt many of his colleagues would have done the same. And that, I think, is scarier than any interference from on high. From checker at panix.com Thu Dec 8 02:21:04 2005 From: checker at panix.com (Premise Checker) Date: Wed, 7 Dec 2005 21:21:04 -0500 (EST) Subject: [Paleopsych] NYTBR: Something We Ate? Message-ID: Something We Ate? http://www.nytimes.com/2005/12/04/books/review/04stern.html [Click the URL to view the graphic.] [Given that we are omnivores, it's hard to think that the effects of different diets, caloric intake constant, would matter all that much. Even still, I by and large am a grazer and eat the diet recommended in _The Paleolithic Prescription_. However, we are believing animals, and specific diets can have strong placebo-like effects. The record for all of them for long-term weight reduction is pretty low. Obesity is a mystery, and calling it a public health MENACE is most likely to be a full employment act for health bureaucrats.] Review by JANE AND MICHAEL STERN AGREED: Good health depends on a good diet. But which good diet? Experts and pretenders have offered countless schemes for salubrity, from the cabbage regime propounded by Cato the Elder to the chopped-meat-and-water plan of the 19th-century physician John Salisbury (whose name lives on via the Salisbury steak). Formal theorizing began in the second century, when Galen codified nutrition as a matter of correctly balanced humors.
By the first millennium, the Byzantine Dietary Calendar advised sipping aromatic wine in January to avert the dangers of sweet phlegm; in 19th-century America, the phony physician Sylvester Graham and, later, the cereal guru John Harvey Kellogg inspired corn-flake crusades based on the proposition that constipation causes death. Our own bookshelves hold such off-the-wall 20th-century treatises as "Man Alive: You're Half Dead! (How to Eat Your Way to Glowing Health and Stay There)" and a pamphlet titled "Woman 80 Never Tired Eats and Sleeps Well," which turned upside down and around is labeled "What Causes Gas on the Stomach?" To eat is basic instinct; how to do it correctly worries humans more than sex. So "Terrors of the Table" is a perfect title for this story of nutritional doctrine's tyranny up to modern times when, in Walter Gratzer's words, fear of cholesterol has "supplanted the Devil as the roaring lion who walketh about, seeking whom he may devour." Gratzer, a biophysicist at King's College, London, who previously put a human face on science in "Eurekas and Euphorias: The Oxford Book of Scientific Anecdotes," reels out a historical pageant of science and pseudoscience teeming with remarkable characters who have advanced (and retarded) knowledge about what makes humans thrive. The faddists on soapboxes are especially amusing, including vegetarians who denounce eating meat as ungodly and an anti-vegetarian cleric who answers that God attached white tails to rabbits to make them easier targets. Gratzer asserts that fashion, not science, rules contemporary diet advice, and he enjoys eviscerating the "gruesome" Duke rice diet, the "probably dangerous" Scarsdale diet and the "grossly unbalanced" Atkins diet. "The history of nutritional science is full of fascination and drama," he writes, a point borne out by various accounts of forced hunger during World War II. A Nazi program to euthanize children deemed unworthy of living was carried out in hospital buildings called hungerhäuser, where a diet of potatoes, turnips and cabbage was designed to cause death in three months. In 1940, when the Germans decided to eradicate the Jewish population of Warsaw by starvation, Dr. Israel Milejkowski and a group of ghetto physicians conducted research on the effects of malnutrition, figuring that some good might come of their suffering. "It was . . . the first study of the kind ever made," Gratzer notes. Some of the papers were smuggled to a non-Jewish professor, who buried them until after liberation. Learning exactly what happens when people starve was crucial in the progress of nutritional science because it focused on sickness caused not by pathogens but by what was missing from the diet. Since Galen, disease had been blamed on something bad invading the body and putting it out of balance. The paradigm shift occurred after it became unavoidably clear that the lack of essential nutrients could also be at fault. Even well into the 19th century, when it was already known that citrus fruits and vegetables prevented scurvy, conventional wisdom asserted they were effective because they contained an antidote to bad air and unwholesome food. "The notion that they contained a constituent essential for health," Gratzer writes, "lay beyond the reach of man's imagination." Nowhere was the stubborn resistance to this idea more apparent than in the insufferably slow recognition of what caused pellagra.
Known as a disease of squalor and poverty, it was widespread during and after the Civil War in the southern United States, where the mortality rate among those suffering from it was 40 percent. Some blamed insect bites; others were convinced it was a contagious disease brought into the country by Italian immigrants. When the epidemiologist Joseph Goldberger went south in 1915 and noted that in asylums holding pellagra sufferers none of the staff members were affected, he concluded that it could not be infectious. On the other hand, the employees ate well while inmates were fed fatback and cornbread. To see if inadequate nutrition was the culprit, Goldberger served balanced meals to children in two orphanages where, after only a few weeks, pellagra disappeared. The logical conclusion - that pellagra resulted from a deficient diet (specifically, lack of nicotinic acid) - was obscured by the prevalence of eugenics, whose proponents contended that the institutions where Goldberger conducted his studies held inferior people who were especially susceptible to disease. "Willful obduracy," Gratzer calls the resistance, going on to describe Goldberger's outrageous strategy to put the infection theory to rest: "filth parties." At a pellagra hospital in Spartanburg, S.C., Goldberger and seven others "injected themselves with blood from severely affected victims . . . rubbed secretions from their mucous sores into their nose and mouth, and after three days swallowed pellets consisting of the urine, feces and skin scabs from several diseased subjects." None contracted pellagra. But despite these irrefutable findings, little was done initially to improve the diets of the poor. The disease finally began to disappear in the 1930's, thanks in part to federal soup kitchens and the introduction of enriched flour. Goldberger's audacity, and the pig-headedness of those who refused to believe him, are vivid evidence of Gratzer's promise that the history of nutritional dogma "encompasses every virtue, defect and foible of human nature." Jane and Michael Stern are the authors of the restaurant guide "Roadfood" and the cookbooks "Square Meals" and "Blue Plate Specials and Blue Ribbon Chefs." From checker at panix.com Thu Dec 8 02:21:12 2005 From: checker at panix.com (Premise Checker) Date: Wed, 7 Dec 2005 21:21:12 -0500 (EST) Subject: [Paleopsych] NYTBR: Merchandise of Venice Message-ID: Merchandise of Venice http://www.nytimes.com/2005/12/04/books/review/04schillinger.html [First chapter appended.] [The first chapter is better than the review, for it invites comparison with Colin Campbell's _The Romantic Ethic and the Spirit of Modern Consumerism_ (Oxford: Basil Blackwell, 1987). The book extended Max Weber's _The Protestant Ethic and the Spirit of Capitalism_ by going beyond the Protestant theology of predestination that Weber invoked to later developments in Protestantism that morphed into Sentimentalism and Romanticism. These later developments fostered the idea of the new and hence (though as unintended as money-making was to Luther and Calvin) of buying and buying and buying in later eighteenth-century England and America. [Keep this in mind as you read the review and the first chapter and try to avoid conflating shopping in the Renaissance with shopping in England and America from the latter eighteenth century through today. [So "rampant consumerism" is not something foisted onto us by wicked capitalists during just the past twenty years.
I had somehow thought the meme "Fashion wears out clothes faster than women do" went back to Shakespeare. Googling <"wears out clothes"> and <"faster than women do"> turns up nothing. So let this be a meme of mine!] SHOPPING IN THE RENAISSANCE Consumer Cultures in Italy, 1400-1600. By Evelyn Welch. Illustrated. 403 pp. Yale University Press. $45. By LIESL SCHILLINGER CAN'T afford to pay your Visa bill this month? Why not mail in a pair of socks? If two are hard to fit in an envelope, one might do. After all, there's rich precedent. In "Shopping in the Renaissance," her meticulously researched and elegantly illustrated book about spending habits in 15th- and 16th-century Italy, Evelyn Welch, a professor at Queen Mary University of London, explains that Bolognese debtors commonly used household items as collateral: "old hoes, hammers, cooking pots, a brass cup, a pair of scissors, or in one case, a single white stocking." Explain to your creditors that money is the root of all evil and see if fear for their souls prompts leniency. Long before the gold standard was dreamed up, before the invention of credit cards and before shopping had come to be recognized as a vital form of therapy, Italian shoppers had considerable difficulty grasping the notion of conspicuous consumption. In the minds of moralists, Welch explains, "Any exchange of merchandise for money was potentially tainted." In the 16th century, the humanist Paolo Cortesi moaned that "gluttony and lust are fostered by perfumers, vendors of delicacies, poultry-sellers, money-vendors and cooks and savory foods," while the Venetian writer Tomaso Garzoni bewailed the "detestable occupations" of "eating, drinking and enjoying oneself" shown by day-trippers who wandered the piazzas, "looking at glassware, mirrors and rattles," gossiping at barber shops and, worse, reading the news. Such indulgence smacked to Renaissance Italians of what Professor Harold Hill called "fritterin' " as he stirred the inhabitants of River City to rise up against idle youth. In a similar vein, the Sienese preacher San Bernardino lambasted shop owners for contributing to the delinquency of minors. "You know well to whom you sell pine-nut biscuits, candies, marzipans and sugar cake," he scolded. "Your conscience cannot rest easy unless you have no sense of guilt in turning boys bad." Nonetheless, sometime after the Black Death winnowed the population in 1348, ushering in a period of plenty, new generations of Italians acquired a taste for the material pleasures of this earth, which ensuing spates of disease, famine and jeremiad did little to curb. But the learning curve was slow. While bountiful harvests were considered a good thing, and poor harvests were rued - as can be seen in illuminations by the Florentine corn-chandler Domenico Lenzi, which picture angels rejoicing above scenes of abundance and devils with bat wings flapping above meager crops - to profit from the sale of staples was a no-no. The butcher, the baker and the candlestick maker who bartered their wares and services for tablecloths and cooking pots avoided criticism, but lowly retailers - rivenditrice - who sold produce they had not grown themselves were compelled to carry banners or tablets bearing the shameful letter "R" to indicate their stigmatized trade. 
The Florentine poet Antonio Pucci derided peasant women who hawked vegetables, eggs and poultry in the Mercato Vecchio, declaring, "I speak of them with harsh words, / Those who fight throughout the day over two dried chestnuts / Calling each other whores / And they are always filling their baskets with fruit to their advantage." Decent women did not rove city streets, bickering with strangers about the price of garlic. They were expected to "either remain indoors or to move through the city with deliberate purpose." The question arises - who was buying the nuts and chickens if respectable ladies weren't? The answer was personal shoppers (although at the time they were known as servants, spenditore and courtiers), usually men, who were entrusted with purchases great and small by the bourgeois or ducal houses that employed them. They might go a-marketing for onions and haunches of veal, or they might be sent on quests for luxury goods. And the purse strings for all but sundry purchases were in the hands of the man of the house - unless the woman had ample resources of her own, both monetary and intellectual. In that case, she could be more demanding and capricious than J. Lo before a concert. In a shopping list the teenage Marchioness of Mantua, Isabella d'Este, wrote out for a courtier named Zigliolo in 1491, she imperiously instructed, "These are the kind of things that I wish to have - engraved amethysts, rosaries of black, amber and gold, blue cloth for a camora, black cloth for a mantle, such as shall be without a rival in the world." Apparently, Zigliolo correctly anticipated her tastes, but a few years later, when a Ferrarese courtier provided the wrong sort of gloves from Spain, she complained that "he has sent us 12 dozen of the saddest gloves that had he searched all of Spain in order to find such poor quality I don't believe he could have found as many. . . . We would be ashamed to give them to people whom we love and they would never wear them. Can you please send them back." The marchioness was exercising her hyperdeveloped shopping muscle for a nation of women who mostly couldn't. Yet. From time to time, thrill-seeking nobles went out on the town to conduct their own treasure hunts, but such journeys were fraught with peril. In 1491, when Beatrice d'Este and her cousin, Isabella of Aragon, visited the markets of Milan wearing woolen headdresses, they were mocked by local women for their fashion sense. Beatrice's husband, Ludovico Maria Sforza, wrote to his sister-in-law in Mantua: "Since it is not the custom for women to go about with such cloths on their heads here, it seems that some of the women in the street began to make villainous remarks, upon which my wife fired up and began to curse them in return, in such a manner that they expected to come to blows." Even 500 years ago, shopping was not always pretty. But making purchases was tricky, even for people who had figured out the dress code, because Italian coins varied from city to city and political leaders minted their own vanity coins, much as today's celebrities brew their own signature perfumes. Political figures frequently banned the use of their opponents' coins. All in all, it was wiser to throw your socks on the counter and start haggling.
When Isabella d'Este went to buy antiquities from the Medici collection, she offered Mantuan cloth in payment, and a large part of her 30,000-ducat dowry consisted not of gold pieces but of jewels, silverware and elaborate gowns - all of which could be pawned and pledged, whether to raise armies, buy art or pay for luxurious holiday trips. Her hope chest doubled as a bank vault, "enabling her, like any other wealthy Italian, to turn material wealth into ready cash." All this "expensive clothing, jewels and plate," Welch explains, "could be mortgaged over and over again, allowing men and women with possessions to spend in ways that far exceeded their immediate means." If they went too far, however, and couldn't redeem their goods in time, they might see their valuables auctioned on the piazza or risk other forms of public humiliation: being barred from the Rialto in Venice or forced to wear the debtor's crown of shame, the green beret, in Rome. It may be a pity we can't live in the style of Renaissance Italians anymore, swapping our clothes and casserole dishes for priceless antiquities, but it's no small consolation that we can incur debt the modern way, by charging it, and shop on the Rialto, even if we can't afford it. Liesl Schillinger, a New York-based arts writer, is a regular contributor to the Book Review. First chapter of 'Shopping in the Renaissance' http://www.nytimes.com/2005/12/04/books/chapters/1204-1st-welch.html By EVELYN WELCH In February 2001 the British artist Michael Landy took over an empty department store in central London. Before a fascinated and occasionally distraught audience of friends, fellow-artists and strangers drawn in from the streets, he and his assistants placed all his personal possessions on a conveyor belt. Looping round the complex, mechanised route, Landy's furniture, record collection, clothing and even his car were first inventoried and then systematically dismembered, divided and shredded. The work attracted considerable press attention and provoked a powerful public response. Landy's emphasis on destruction was seen as a challenge to the champions of consumerism and as a strong commentary on the seductions of acquisition and ownership. The setting, the bare interior of a store stripped of its lighting, counters and displays, was central to the work's meaning (Figure 1). As shoppers moved on from the performance into the still-functioning department stores and shops nearby, they were invited to reflect on the ultimate purposelessness of their purchases. Commenting after the event, Landy described his surprise when a number of onlookers equated his actions with those of a holy figure or a saint. Yet the disposal or dispersal of possessions has been a fundamental part of religious asceticism since early Christianity. But unlike the powerful image of Saint Francis of Assisi giving away his cloak to a beggar before stripping off all his clothes in order to refuse his father's wealth, Landy had no intention of forming a new religious order (Figure 2). Landy's attack on human attachment to material possessions was a secular act of artistic performance, a counterpart to contemporary celebrations of affluence and prosperity. As such he was, and is, part of a growing debate. 
Today, shopping, the process of going out to special sites to exchange earnings for consumable objects, is seen as both a force for good (consumer spending is saving Western domestic economies) and as a danger to society (consumer spending is destroying the environment and local diversity). Given its current importance, such behaviour has been closely scrutinised by anthropologists and sociologists who have often argued that the purchase of mass-produced items is a defining characteristic of modernity. In their turn, economists have looked for rational patterns of consumer spending, while an equally weighty literature has grown up to evaluate the emotive and psychological impulses that lie behind modern consumerism, culminating in a focus on the 'shopaholic' or kleptomaniac, usually a woman who, for complex reasons, is unable to control her desire to either buy or steal from stores. Following in this wake, historians and art historians are using concepts such as the emergence of a public sphere and the agency of the consumer to map out a new narrative linking this changing social behaviour to the development of new architectural spaces. Some have found the origins for contemporary shopping practices in the American malls of the 1930s or in the opening of the first department stores, such as Whiteley's in London in 1863 or the Bon Marché in Paris in 1869 (Figure 3). These purpose-built buildings, with their fixed prices and large body of salaried personnel, radically changed the nature of shopping. Buying became a leisure activity as well as a chore, one that women were increasingly able to enjoy. But while some have insisted that this was a distinctive feature of the late nineteenth and twentieth centuries, others have pushed back the transformation to the coffee-houses of eighteenth-century London, the mercers' shops of eighteenth-century Paris, or to the market halls and commercial chambers of seventeenth-century Amsterdam (Figure 4). As new social rituals developed, such as reading the paper, listening to public concerts or discussing scientific innovations, so too did a demand for new products such as coffee, tea, chocolate, porcelain and printed chintzes. Here bow-shaped glass shop windows, with their displays of exotic, imported goods, are thought to have tempted buyers, sparking off a capitalist revolution and eventually liberating women from the home. In the search for the first modern shopping trip, these eighteenth- and nineteenth-century developments are often set against the backdrop of an undifferentiated late medieval past. The story of temporal progression requires more distant periods to be perceived as lacking in sophistication. The pre-industrial world is presented as having had a relatively limited access to a smaller range of regionally produced goods and a minimum of disposable income. Most of a family's earnings would have been spent on food. Little was left over for non-essentials, and most goods were produced within the home itself. These assumptions have meant that while many studies have looked for a growing mass-market for consumer goods in the eighteenth century, Renaissance scholarship has focused on elite patronage or international trade. Recently, however, there has been a tendency to argue that the supposed consumer boom of the enlightenment period started much earlier and that this revolution took place, not in London or Paris, but in fifteenth-century Italy.
In 1993, for example, the economic historian Richard Goldthwaite argued that, 'the material culture of the Renaissance generated the very first stirring of the consumerism that was to reach a veritable revolutionary stage in the eighteenth century and eventually to culminate in the extravagant throw-away, fashion-ridden, commodity-culture of our own times'. But the question arises whether Italian Renaissance consumerism was really the embryo of contemporary expenditure, a defining moment in the transition from the medieval to the modern. Does the detail from the 1470 Ferrarese frescoes of Palazzo Schifanoia depicting elegant shops with their customers represent a new form of activity or an ongoing tradition (Figure 5)? Is it in any way, however marginal, indicative of, or evidence for, a new form of consumer behaviour? While there will be much in this book that seems familiar, such as the pleasure that teenage girls took in trips to the market, there is a great deal that is very different. Indeed, far from pinpointing the start of 'ourselves' in fifteenth- and sixteenth-century Florence, the experience of the Italian Renaissance challenges rather than reinforces a sense of linear transfer from past to present. In particular, it threatens some basic assumptions concerning the connections between architecture and consumer behaviour. In the English language the links could not be closer. A standard dictionary defines shopping as, 'the action of visiting a shop or shops for the purpose of inspecting or buying goods'. A shopper is, 'one who frequents a shop or shops for the purpose of inspecting or buying goods'. But this correlation has no parallel in other European languages where there is little, if any, verbal connection between 'the shop' and the activity, 'shopping'. This is an important distinction because the impact of this assumed association between the architecture of commerce and modernity goes far beyond semantics. Early twentieth-century sociologists and economists who defined concepts of consumption relied on models of social development that considered shopping in stores as a far more sophisticated form of exchange than gift-trade or administered-trade. The latter were only phases that societies went through before finally emerging as fully developed (and hence more effective and efficient) market economies. This was not simply a theory. It was put into practice in countries such as Italy, which only became a nation in the 1860s. From that point onwards, defining an Italian city as a modern urban society involved constructing new commercial and social spaces, particularly those modelled on the seemingly more advanced English and French examples. The so-called 'Liberty' or Art Nouveau style was adopted for some shop fronts while glass and iron proved popular for new shopping areas (Figure 6). When in 1864, for example, the city of Florence began demolishing its walls, gates and medieval market centre, it was to mark the town's transformation into the first capital of the new nation (Figures 7 and 8). Florence was not to stop, as one protagonist put it, 'in the lazy contemplation of our past glories but fight gallantly on the road to progress'. In 1865, it was even suggested that the entire market areas of the city centre should be transformed into a glass gallery on the model of the English Great Exhibition Hall before it was agreed to tear it down and rebuild the densely packed centre in a more piecemeal fashion.
Likewise, in 1864, the city of Milan marked its entry into the Italian nation with major urban renewal plans. This included a galleried arcade, whose construction contract was awarded to the British-based 'City of Milan Improvement Company Limited'. As Vittorio Emanuele II, the first King of united Italy, laid the foundation stones of the Galleria, the new glass and iron mall was presented as a symbol of the new country's future prosperity and a rejection of its backwards past (Figure 9). But these nineteenth-century debates reveal a more complex and contradictory set of attitudes than a simple embrace of British engineering. Photographers using advanced technologies for the period captured the emptied spaces of the old Florentine market while graphic artists produced postcard images of what was to be destroyed. Londoners who had visited the city wrote to The Times to decry the destruction of the old town centre and city walls. A sense of the need to preserve an attractive 'local' culture for the tourist market vied with the political desire to be accepted as the equal of the economically advanced countries of Europe and the United States. The issues raised by the Milanese Galleria and the destruction of Florence's old market centre have resonances that go far beyond the Italian peninsula and the nineteenth century. The competing values of preservation and nostalgia versus modernity and progress continue to have serious consequences today. Planners eager to impose change have tended to describe developing countries as having 'medieval' types of exchange. Open markets in Africa and Asia, systems of barter and supposedly informal networks of credit, have been presented as either backwards, or, conversely, as more romantic and natural than contemporary North American and British supermarkets and shopping malls. As in nineteenth-century Florence, seemingly unregulated and potentially unhygienic markets have been driven from city centres in places such as Hong Kong and Singapore by officials hoping to exclude elements perceived as old-fashioned from their growing economies. In contrast, highly developed urban areas such as New York and London have re-introduced 'farmers' markets'. These evoke traditional street fairs in order to reassure customers that produce sold from stalls and served up in brown bags is somehow more genuine than shrink-wrapped goods removed from a refrigerated cabinet.
Shopping in the Renaissance
Given this context, it is difficult to step back and assess how men and women actually went out to shop in the past without falling into a narrative of either progress or decline. This is particularly acute for the Renaissance. During the period between 1400 and 1600, the daily business of buying and selling was an act of embedded social behaviour, not a special moment for considered reflection. While international merchants' manuals do survive both in manuscript and in print, the ordinary consumer's ability to assess value, select goods, bargain, obtain credit and finally to pay, was learnt primarily through observation, practice and experience rather than through any form of written instruction. This means that any study of Renaissance buying practices, where exchanges were transitory and verbal, has to rely on scattered and often problematic evidence. The images, literary sources, criminal records, statutes, auction and price lists, family accounts and diaries used in this book all had their own original purposes and formats.
Their meanings were rarely fixed and the same item might be perceived in different ways in different times and places. For example, a poem such as Antonio Pucci's fourteenth-century description of the Mercato Vecchio in Florence might carry one meaning for its audience when heard during a time of famine and yet another when read in a period of prosperity. But despite its slippery nature, it is still important to set such 'soft' evidence against the seemingly more stable facts and figures that make up grain prices and daily wage rates. This book takes, therefore, the approach of a cultural historian in an attempt to gain an insight into the experience of the Renaissance marketplace. While some of the material goes over the immediate boundaries of the title, the book focuses primarily on central and northern Italy between 1400 and 1600. This is, in part, because of the wealth of documentation available for this period and region. Venice, an entrepôt whose retailers served both an international and local clientele, was exceptional in its commercial sophistication and specialisation. But the entire northern and central Italian peninsula, with its multiplicity of large and medium-sized towns and distribution networks of ports, canals and roads that reached far into the countryside, was much more urbanised than the rest of Europe. Unlike England, where the inhabitants of villages and hamlets gravitated to larger market towns to buy and sell produce, even the smaller and more isolated of Italy's rural and urban communities housed permanent shops and regular markets. For example, sixteenth-century Altopascio, a Tuscan mountain village with a population of 700 inhabitants, had five shoemakers, two grocers and a ceramic seller, a bottegaio di piatti, as well as a blacksmith. The slightly larger Tuscan town of Poppi in the Casentino had a population of 1,450. In 1590, its inhabitants benefited from nine grocery stores, two bakeries, two butchers, three drugstores, a mercer's shop, a barber, a tailor and a shoemaker along with workshops for wool, leather and iron as well as kilns producing ceramic wares. These amenities served the wider locality as well as the small town, a relationship noted when the municipal council allowed complete immunity for debtors on market days, 'for the good and benefit and maintenance of Poppi, considering its location on a dry hill and in need of being frequented and visited by other men and people'. Of equal importance was the diversity and competition between these urban centres, both large and small. Italy's political fragmentation had considerable cultural consequences. By the mid-fifteenth century power on the peninsula was roughly divided between the Kingdom of Naples, the Papal States, the Duchy of Milan and the city-states of Florence and Venice. By the end of the century, however, the fragile balance had been disrupted as the growing powers of France, Spain and the Habsburg empire attempted to gain control. After 1530, Italy's two major territorial states, Lombardy and Naples, were ruled by viceroys who drew on local urban structures but answered to Spain. These multiple boundaries - local, regional and international - allowed for the coexistence of legal systems as well as for the circulation of different forms of currencies, dress, codes of conduct, gesture and language. The diversity had real material meanings.
Velvets permitted to butchers' wives in Milan might be forbidden to those in Venice; hats that seemed desirable in Naples might have been rejected in Genoa. Although the costume books from the second half of the sixteenth century, such as those of Cesare Vecellio and Pietro Bertelli, often exaggerated the differences, the fashions forged in Rome were quite distinct from those in Mantua or Ferrara (Figures 10-12). Even women living under the same jurisdiction, such as those in Vicenza and Venice, might wear different garments (Figures 13-14). This created issues around novelty that were very different from those of nation-states such as France and England, where the major contrasts were between a single capital city like Paris or London and the provincial towns and rural communities. . . . From checker at panix.com Fri Dec 9 01:53:36 2005 From: checker at panix.com (Premise Checker) Date: Thu, 8 Dec 2005 20:53:36 -0500 (EST) Subject: [Paleopsych] NYT: 350 Years of What the Kids Heard Message-ID: 350 Years of What the Kids Heard http://www.nytimes.com/2005/12/05/books/05nort.html [Connections column by Edward Rothstein appended.] [I must have read children's books when I was a child, but beyond the Winnie the Pooh books and the Alice books, I can't recall any. Speaking of Alice, one of the five requirements I have for a wife is that she agree to name our first daughter Alice. Sarah agreed instantly, and indeed our first daughter is named Alice. The other name is Adelaide, and both have the same Teutonic root meaning truth. [The other four are having the same ethnic background (which ensures deep commonalities), a love of classical music, a realistic view of the world (does not believe in Bronze Age creator gods or political shibboleths about planning and equality), and an honesty of appearance (no make up!). The woman I married has all five in spades, AND she remains the most feminine person I have ever met. [She does have a shortcoming and a defect, though. She comes up four inches shorter than I do, and her brown eyes are so enormous that her eyelids aren't completely closed when she is asleep. I can watch her rapid eye movements when she is dreaming.] By DINITIA SMITH Before Harry Potter there was "Slovenly Peter." Written by Heinrich Hoffmann and published in Germany in 1845, it is one of the best-selling children's books ever, translated into more than 100 languages. And what a piece of work it is. A girl plays with matches and suffers horrendous burns, on all her clothes "And arms, and hands, and eyes, and nose;/ Till she had nothing more to lose/ Except her little scarlet shoes." A little boy who sucks his thumb has his thumbs cut off by the Scissor Man. And in the difference between Harry and Peter lies the lesson of children's literature, said Jack Zipes, general editor of the new Norton Anthology of Children's Literature, published this month by W. W. Norton & Company. "These works reflect how we view children, and something about us," said Mr. Zipes, 68, a professor of German and comparative literature at the University of Minnesota, in a telephone interview from Minneapolis. The anthology joins the 11 other definitive compendiums by Norton. It is one of the first modern, comprehensive, critical collections of children's literature. And it is intended not for children, but for scholars.
"It's a huge event, a real arrival of children's literature in academic studies," said John Cech, director of the Center for Children's Literature and Culture at the University of Florida in Gainesville. Although the academic study of children's literature is an exploding field, there are only a handful of Ph.D. programs in children's literature in English departments. One purpose of the anthology, said Mr. Zipes, is to encourage departments to add courses. The anthology, 2,471 pages long and weighing three pounds, covers 350 years of alphabet books, fairy tales, animal fables and the like, and took Mr. Zipes and four other editors four years to compile. Some stories are reprinted in full, sometimes with illustrations; others are excerpted. In it, the editors trace the history of juvenile literature from what is probably the first children's book, "Orbis Sensualium Pictus," an illustrated Latin grammar by Johann Amos Comenius published in 1658, up through works as recent as "Last Talk With Jim Hardwick," by Marilyn Nelson, which came out in 2001. Most early children's books were didactic and had a religious flavor, intended to civilize and save potential sinners - albeit upper-class ones, since they were more likely to be literate. As today, publishers were shrewd marketers of their wares. When John Newbery published "A Little Pretty Pocket-Book," in 1744, he included toys with the books - balls for boys, pincushions for girls. It is striking in the anthology to see the way certain forms cross cultures. Lullabies, for instance, have a nearly universal form, with elongated vowels, long pauses and common themes of separation, hunger, bogeymen, death - as if singing of these terrors could banish them from a child's dream world. One stunning entry is "Lullaby of a Female Convict to Her Child, the Night Previous to Execution," from 1807. "Who then will sooth thee, when thy mother's sleeping," the mother sings. "In her low grave of shame and infamy!/ Sleep, baby mine! - to-morrow I must leave thee." The book traces the evolution of various works, including "Hush-a-bye, baby, on the tree top" from its origins as an African-American slave song, "All the Pretty Horses." That version ends with the horrifying image, "Way down yonder in the meadow lays a poor little lambie/ The bees and the butterflies peckin' out his eyes/ The poor little thing cries, 'Mammy.' " The editors write that attitudes toward children began to change in the mid-18th century. In 1762, in his revolutionary work, "?mile; or, On Education," Rousseau wrote that children are intrinsically innocent and should be educated apart from corrupt society, a view later taken up by the Romantics. In the mid- to late-19th century, with the rise of the "isms," as Mr. Zipes put it - Darwinism, Freudianism, communism, Shavian socialism - children were recognized as people, and their literature became less heavily didactic. Schools were established for the lower classes, and increased literacy created new markets for books. This was the golden age of children's literature, of Robert Louis Stevenson, Rudyard Kipling, Louisa May Alcott, Mark Twain and Lewis Carroll. Throughout the text, in editors' notes and introductions, are tidbits about the hidden messages in the literature. "London Bridge Is Falling Down," say the editors, contains coded references to the medieval custom of burying people alive in the foundations of bridges. But children's stories, especially fairy tales, have always been hiding places for the subversive. 
"The Griffin and the Minor Canon" by Frank Stockton is a condemnation of cowardice and social hypocrisy; "The Happy Prince" by Oscar Wilde, a critique of the aristocracy. In the late 1960's and early 70's, as the anthology demonstrates, children's stories began to be rewritten and children's literature was approached in a different way. Black writers like Julius Lester and Mildred Taylor came to prominence along with Latino and Native Americans authors. Nowadays, the boundaries between adult and children's fiction are disappearing. Nothing is taboo. Included in the anthology are both Francesca Lia Block's story "Wolf" (2000), about rape, and "The Bleeding Man" (1974), a story about torture by Craig Kee Strete, a Native American writer. There is also a hefty selection of illustrations that parents may remember fondly - Sendak's wild things, Dr. Seuss's goofy animals, Babar the elephant king - as well as comics and science fiction, officially bringing those genres into the canon. The book also includes the full text of the play "Peter Pan," never before published in the United States, as far Mr. Zipes knows. Notably absent, however, is Harry Potter. That was because the cost of excerpting the Potter books was too high, Mr. Zipes said. Besides that, he said, "the Harry Potter books are very conventional and mediocre." "The plots are in the tradition of the schoolboy novel," he said, citing "Tom Brown's School Days," which was published in 1857. Mr. Zipes called the Potter books, "the ideological champions of patriarchal society," adding: "They celebrate the magical powers of a boy, with a girl - Hermione - cheerleading him. You can predict the outcome." Never mind, though. Harry Potter is doing just fine. Reading Kids' Books Without the Kids http://www.nytimes.com/2005/12/05/books/05conn.html Connections By EDWARD ROTHSTEIN I confess: for me, it's partly personal. I am in a local Barnes & Noble, looking at a table spread with new releases of books; behind me are four or five bookcases lined with similar books, all published in the last few years. I am reading jacket copy. "Life has not been easy lately for Walker," reads one. "His father has died, his girlfriend has moved away." And now, his "mother is going to work as a stripper. What if his friends find out? What if Rachel finds out?" Another introduces a clique of high-school girls, one of whom is "smart, hardworking and will insult you to tears faster than you can say, 'My haircut isn't ugly!' " And a third shows a photograph of an eighth-grade girl, eyes open in shock as she examines a piece leopard-skin lingerie. But the problem she faces going to a "lingerie shower" for her brother's ex-girlfriend doesn't compare with the problem of a 12th grader in another book who is so attracted to her 35-year-old English teacher that the two "tumble headlong into a passionate romance." What, I wonder, would Heidi have done in similar circumstances, or Anne of Green Gables? What would Eleanor Estes's Moffat children have said if their mom, instead of working as a seamstress making clothes for others, decided to strip her own clothes off instead? Did even Judy Blume dream how far her vision of a frank new form of children's fiction might go? It isn't just the plots of these books that are jarring. Teen pulp, which evolved out of children's books and rebelled against their supposed strictures, appears to take up far more real estate on the shelves of bookstores than books of more subtle literary bent for the pre-adult set. 
The genre also reflects a different set of expectations about how books are read and why. Hoping to be reminded of what is being missed, I turn to the opposite end of the cultural spectrum, to the newly released Norton Anthology of Children's Literature. It contains, it promises, "350 years of literary works for children" including nursery rhymes, primers, fairy tales, fables, legends, myths, fantasy, picture books, science fiction, comics, poetry, plays and adventure stories by 170 authors and illustrators, all tightly stuffed into 2,471 pages. But here, too, crankiness gets the better of me as I slip the book out of its case. Only my wariness is not caused by the content. It has to do with this book's purpose. The jacket calls the anthology a celebration of literary "richness and variety" in which "readers will find beloved works." But it is not really designed for readers in the usual sense. It was edited to be used in college courses. Childhood, the preface points out, is "a time saturated with narratives," but this is not a book whose selections are meant to be read to a child as bedtime narratives, let alone as bedtime stories. In fact, the binding is too floppy and the book too weighty to hold up without resting it on a table, and turning its tissue-thin pages requires mature surgical finesse. That's fine, of course. Children's literature does need to be studied; its ideas and evolution need to be understood, and the greatness of its achievements needs to be recognized. But then something else needs to be understood, and this is connected to the problems with teen pulp as well. It has to do with the function of children's books and the way pre-adult fiction grows out of them. We can anthologize short stories or philosophical works or essays, and their purpose and meaning will remain relatively unchanged. But when children's literature is placed in an anthology that is not for children, something is altered. The texts are read in a different way. Why, in fact, do children read, and why are they read to? Why are books specifically written for readers who are not yet adults? Children's books have a sense of multiple perspectives built into them because of how they are encountered. When a parent reads "Where the Wild Things Are" aloud, for example, the anger of the child, Max, his fantasy of mastery and revenge, and finally, his relief at his welcome home, are given another twist: his personal drama is not a private drama. The parent reading - the voice of the story itself - is precisely the authority with whom the child has waged similar battles. Everything is intensified; the resolution is also made more comforting, because in the calm moments of bedtime, the parent's voice reassures. Even for older children, the parent becomes a textual presence, an inescapable alternate voice. And by the time the child reads alone, the books themselves become multi-voiced. Literature for those-who-are-not-yet-adults is often proposing alternatives, refusing to settle into a single version of the "real." Lewis Carroll allows neither Alice to settle into a single interpretation of what she sees, nor the child reader - or, for that matter, the adult. Last week at the New York Public Library, Adam Gopnik, who has just written a fantasy novel for children, "The King in the Window" (Hyperion), spoke with his sister, Dr. Alison Gopnik, a cognitive scientist who has studied children's learning. Dr. 
Gopnik argued that children read the way scientists work: they experiment with different ways of ordering the world, exploring alternate modes of understanding. But in an academic reading of children's books this can be forgotten. An adult may read to discern political and economic interests, to see what lessons are latent in the text, to analyze how narrative works, to make connections. Norton has a Web site (www.wwnorton.com/nrl) in which course curriculums are proposed based on the anthology. They tend to use phrases like "ideological constructions" in discussing children's books. One course aims to "destabilize the totalizing idea of 'the child' and set up contrasts between male and female, urban and rural, rich and poor." In other words, it aims to splinter the category of childhood and focus attention on social strata, gender, locale. The risk is that literature ends up becoming univocal: each work is seen as an expression of the particular, and not much more. But this happens only in mediocre literature, like teen pulp, where narrow-casting is the marketing norm. Those books are meant to be close reflections of their readers, mirrors of their fantasies. The characters are just different enough from the readers to spur curiosity and sexual interest, and just similar enough to guarantee identification. A great children's book, though, does not reflect the world or its reader. It plays within the world. It explores possibilities. It confounds expectations. That is why the anthology's academic function makes me wary. The child, with the adult near at hand, never has a single perspective. Almost anything can happen. And usually does. Connections, a critic's perspective on arts and ideas, appears every other Monday. From checker at panix.com Fri Dec 9 01:54:10 2005 From: checker at panix.com (Premise Checker) Date: Thu, 8 Dec 2005 20:54:10 -0500 (EST) Subject: [Paleopsych] NYT: Instant Millions Can't Halt Winners' Grim Slide Message-ID: Instant Millions Can't Halt Winners' Grim Slide http://www.nytimes.com/2005/12/05/national/05winnings.html [I've read in many places that a great happy event (marriage, promotion, having lunch with Hillary) raises one's happiness level for only about a year and that a terrible event (death of spouse, being fired, being forced to have lunch with Hillary) lowers it also for only about a year. The sad tale below is an exception.] By JAMES DAO CORBIN, Ky., Nov. 30 - For Mack W. Metcalf and his estranged second wife, Virginia G. Merida, sharing a $34 million lottery jackpot in 2000 meant escaping poverty at breakneck speed. Years of blue-collar struggle and ramshackle apartment life gave way almost overnight to limitless leisure, big houses and lavish toys. Mr. Metcalf bought a Mount Vernon-like estate in southern Kentucky, stocking it with horses and vintage cars. Ms. Merida bought a Mercedes-Benz and a modernistic mansion overlooking the Ohio River, surrounding herself with stray cats. But trouble came almost as fast. And though there have been many stories of lottery winners turning to drugs or alcohol, and of lottery fortunes turning to dust, the tale of Mr. Metcalf and Ms. Merida stands out as a striking example of good luck - the kind most people only dream about - rapidly turning fatally bad. Mr. Metcalf's first wife sued him for $31,000 in unpaid child support, a former girlfriend wheedled $500,000 out of him while he was drunk, and alcoholism increasingly paralyzed him. Ms. 
Merida's boyfriend died of a drug overdose in her hilltop house, a brother began harassing her, she said, and neighbors came to believe her once welcoming home had turned into a drug den. Though they were divorced by 2001, it was as if their lives as rich people had taken on an eerie symmetry. So did their deaths. In 2003, just three years after cashing in his winning ticket, Mr. Metcalf died of complications relating to alcoholism at the age of 45. Then on the day before Thanksgiving, Ms. Merida's partly decomposed body was found in her bed. Authorities said they have found no evidence of foul play and are looking into the possibility of a drug overdose. She was 51. Ms. Merida's death remains under investigation, and large parts of both her and Mr. Metcalf's lives remain wrapped in mystery. But some of their friends and relatives said they thought the moral of their stories was clear. "Any problems people have, money magnifies it so much, it's unbelievable," said Robert Merida, one of Ms. Merida's three brothers. Mr. Metcalf's first wife, Marilyn Collins, said: "If he hadn't won, he would have worked like regular people and maybe had 20 years left. But when you put that kind of money in the hands of somebody with problems, it just helps them kill themselves." As a young woman, Ms. Merida lived with her family in Houston where her father, Dempsey Merida, ran a major drug-trafficking organization, law enforcement officials say. He and two of his sons, David and John, were indicted in 1983 and served prison sentences on drug-related convictions. John Murphy, the first assistant United States attorney for the western district of Texas, who helped prosecute the case, said the organization smuggled heroin and cocaine into Texas using Mr. Merida's chain of auto transmission shops as fronts. Mr. Murphy described Mr. Merida as a gruff, imposing man who tried to intimidate witnesses by muttering loudly in court. Mr. Merida received a 30-year sentence but was released in 2004 because of a serious illness, Mr. Murphy said. He died just months later in Kentucky at age 76. When Dempsey Merida and his two sons went to prison, his wife moved the family to northern Kentucky. Virginia Merida married, had a son, was divorced and married again, to Mack Metcalf, a co-worker at a plastics factory. But he drank too much and disappeared for long stretches of time, friends of Ms. Merida said, leaving her alone to care for her son and mother. She worked a succession of low-paying jobs, lived in cramped apartments, drove decrepit cars and struggled to pay rent. For his part, Mr. Metcalf drifted from job to job, living at one point in an abandoned bus. Then one July day in 2000, a friend called Ms. Merida and gave her some startling news: Mr. Metcalf had the winning $3 ticket for a $65 million Powerball jackpot. Ms. Merida had refused to answer his calls, thinking he was drunk. "Mack kept calling here, asking me to go tell Ginny that he had won the lottery," said Carolyn Keckeley, a friend of Ms. Merida. "She wouldn't believe him." At the time, both were barely scraping by, he by driving a forklift and she by making corrugated boxes. But in one shot, they walked away with a cash payout of $34 million, which they split 60-40: he received about $14 million after taxes, while she got more than $9 million. In a statement released by the lottery corporation, Mr. Metcalf said he planned to move to Australia. "I'm going to totally get away," he said. But problems arrived almost immediately. 
A caseworker in Northern Kentucky saw Mr. Metcalf's photograph and recognized him as having been delinquent in child support payments to a daughter from his first marriage. The county contacted Mr. Metcalf's first wife and they took legal action that resulted in court orders that he pay $31,000 in child support and create a $500,000 trust fund for the girl, Amanda, his only child. Ms. Collins, his first wife, said Mr. Metcalf abandoned the family when Amanda, now 21, was an infant, forcing them into near destitution. "I cooked dinner and set the table for six months for him, but he never came back," said Ms. Collins, 38. They were divorced in 1986. Even as he was battling Ms. Collins in court, Mr. Metcalf was filing his own lawsuit to protect his winnings. In court papers, he asserted that a former girlfriend, Deborah Hodge, had threatened and badgered him until he agreed, while drunk, to give her $500,000. Ms. Hodge vowed to call witnesses to testify that Mr. Metcalf had given money to other women as well. Mr. Metcalf's suit was dismissed after he walked out of a deposition, according to court papers. Still, there were moments of happiness. Shortly after winning the lottery, he took Amanda shopping in Cincinnati, giving her $500 to buy clothing and have her nails done. "I had never held that kind of money before," Ms. Metcalf said. "That was the best day ever." Pledging to become a good father, he moved to Corbin to be near Amanda, buying a 43-acre estate with a house modeled after Mount Vernon for $1.1 million. He collected all-terrain vehicles, vintage American cars and an eccentric array of pets: horses, Rottweilers, tarantulas and a 15-foot boa constrictor. He also continued to give away cash. Neighbors recall him buying goods at a convenience store with $100 bills, then giving the change to the next person in line. Ms. Metcalf said she discovered boxes filled with scraps of paper in his home recording money he had given away, debts he would never collect. His drinking got worse, and he became increasingly afraid that people were plotting to kill him, installing surveillance cameras and listening devices around his house, Ms. Metcalf said. Then in early 2003, he spent a month in the hospital for treatment of cirrhosis and hepatitis. After being released from the hospital, he married for the third time, but died just months later, in December. Virginia Merida seemed to handle her money better. She repaid old debts, including $1,000 to a landlord who had evicted her years earlier. She told a friend she had set aside $1 million for retirement. But she splurged enough to buy a Mercedes and a geodesic-dome house designed by a local architect in Cold Spring for $559,000. She kept the furnishings simple, neighbors said, but bought several arcade-quality video games for her son, Jason. For a time, Ms. Merida's mother lived with her as well. "I was at her house a year after she moved in, and she said she hadn't even unpacked," said Mary Jo Watkins, a neighbor. "It was as if she didn't know how to move up." Then in January, a live-in boyfriend, Fred Hill, died of an overdose of an opiate-related drug, according to a police report. No charges were filed, and officials said it was not clear if the opiate was heroin or a prescription drug. But neighbors began to believe that the house had become a haven for drug use or trafficking. "I think we all suspected that some drug problems were going on there because so many people were coming and going," Ms. Watkins said. In May, Ms. 
Merida filed a complaint in Campbell County Circuit Court against one of her brothers, David, saying that he had been harassing her. On June 16, a circuit court judge ordered both brother and sister to keep away from each other. It was unclear why she filed the complaint, and David Merida would not comment. When Ms. Merida's son found her body on Nov. 23, she had been dead for several days, the county coroner's office said. There was no evidence of a break-in, or that she had been attacked, officials said. Toxicological studies on her remains will not be completed for several weeks. It is unclear how much of Ms. Merida's estate remains, but it appears she saved some of it. That may not have been the case with Mr. Metcalf, his daughter said. Six months after his death, his house in Corbin was sold for $657,000, about half of what Mr. Metcalf had paid for it. In a brief obituary in The Kentucky Enquirer, Ms. Merida's family described her simply as "a homemaker." On a black tombstone, Ms. Metcalf had this inscribed for her father, "Loving father and brother, finally at rest." Al Salvato contributed reporting from Cold Spring, Bellevue and Dayton, Ky. From checker at panix.com Fri Dec 9 01:54:36 2005 From: checker at panix.com (Premise Checker) Date: Thu, 8 Dec 2005 20:54:36 -0500 (EST) Subject: [Paleopsych] The Week: The Father of Natural Selection Message-ID: The Father of Natural Selection http://www.theweekmagazine.com/article.aspx?id=1228 [This is a nice, brief summation. I think Darwin became an agnostic, as he confessed to his diaries if not to his confidants.] The work of Charles Darwin, the British naturalist whose ideas form the basis of modern evolutionary theory, is under attack by religious conservatives. What did Darwin actually say? 12/2/2005 How did Darwin become a biologist? Born in 1809, the son of a physician in Shrewsbury, England, Darwin was a bookish youngster but a poor student. He attended the University of Edinburgh to study medicine, but dropped out because he couldn't stand the sight of blood. He did like studying living things, though, and indulged his interest by hiking, collecting beetles, and attending botany lectures. When Darwin was 21, he learned that Robert FitzRoy, captain of the HMS Beagle, was looking for a hardy companion on a trip to chart the South American coast. Although FitzRoy thought Darwin had a "lack of energy and determination," he took him on. The Beagle set sail on Dec. 27, 1831. How did he spend the voyage? As the Beagle made various landfalls, Darwin disembarked to observe, sketch, and collect plants, animals, and fossils. He sometimes traveled as much as 400 miles over mountains, through jungles, and up and down rivers, before meeting up with the ship. By the time the Beagle returned to England on Oct. 2, 1836, Darwin had accumulated an 800-page diary; 1,700 pages of zoology and geology notes; 4,000 skins, bones, and other dried specimens; and another 1,500 pickled plants and animals. What did he do with all this stuff? Exhausted from the journey, Darwin holed up in London with his collection to prepare his journal for publication. But as he did so, he began thinking about some of the inconsistencies and anomalies he had observed. He was particularly intrigued by the 12 previously unknown kinds of finches he had discovered. Darwin realized they were separate species, distinguished mainly by the shapes of their beaks. Each was suited to a particular task - crushing seeds, eating insects, poking into flowers for nectar, and so on.
How, Darwin wondered, had such similar birds wound up with different beaks? And why were the birds' beaks so well-suited to the food supply on islands where they were found? Darwin could think of no good answers until 1838, when he came upon a book by Thomas Malthus, An Essay on the Principle of Population. How did this book affect him? It was like a bolt of lightning. Malthus, a minister and professor, argued that human populations would always grow beyond their ability to feed themselves unless they were checked by disease, catastrophe, or some other restraint. The idea sparked a recognition in Darwin that all living things, including plants, animals, and human beings, were constantly struggling to survive in a world of limited resources and myriad dangers. Those species that thrived, he reasoned, had adapted to circumstances by some sort of biological mechanism. That is, they had evolved. Was evolution Darwin's idea? Far from it. By Darwin's time, most respected scientists believed that living things tended to improve as the need to do so arose. But they had the details wrong. The most famous mistake was made by the French naturalist Jean-Baptiste Lamarck (1744-1829). Lamarck proposed that individuals developed specific characteristics by exercising them, while losing others through disuse. He thought, for example, that giraffes got their long necks by stretching for leaves that were out of reach, then passed their elongated necks on to their offspring. Darwin rejected this approach completely. Acquired characteristics, he argued, are not inherited. Rather, random chance had favored individuals or species with traits that allowed them to flourish in their environments. Successful organisms reproduced, and came to dominate their environments, while less successful organisms perished and disappeared. In 1859, after two decades of thought, analysis, and research, Darwin published his conclusions in a book, On the Origin of Species by Means of Natural Selection, or The Preservation of Favoured Races in the Struggle for Life. Why did he wait so long to publish? An introverted, nervous man, Darwin hated attention. He knew that his findings would arouse the wrath of millions who believed in the biblical creation story. Publishing his theories, he once told a friend, would be "like confessing a murder." Darwin decided to publish Origin of Species only when he discovered that a competitor, Alfred Russel Wallace, was about to go public with his own version of evolutionary theory. What was the public reaction? It was immediate and explosive. Origin of Species' entire first print run of 1,250 copies sold out, necessitating a second printing of 3,000 copies just six weeks later. Eminent scientists, philosophers, and liberal theologians recognized it as a groundbreaking work. The botanist Hewett Watson wrote Darwin, "You are the greatest revolutionist in natural history, if not of all centuries." But others, including many intellectuals, were appalled at the notion that man had evolved from lower life forms. What did his critics say? The astronomer Sir John Herschel openly derided Origin of Species as nonsensical, calling it "the law of higgledy-piggledy." The geologist William Whewell, master of Trinity College, Cambridge, refused to allow it into the college library on the grounds that it would threaten the moral fiber of England. In June 1860, the first great public debate about Darwin took place at the annual meeting of the British Association for the Advancement of Science.
In a spontaneous exchange, Samuel "Soapy Sam" Wilberforce, the bishop of Oxford, clashed with biologist Thomas Huxley, one of Darwin's strongest defenders. Wilberforce asked Huxley if he was descended from apes on his grandmother's or his grandfather's side of the family. Huxley replied that if given the choice between being descended from apes, or from "a man highly endowed by nature" who used those gifts "for the mere purpose of introducing ridicule into a grave scientific discussion, I unhesitatingly affirm my preference for the ape." The gale of laughter that followed Huxley's remark heralded a storm over Darwin's ideas that continues, 145 years later. From checker at panix.com Fri Dec 9 01:55:02 2005 From: checker at panix.com (Premise Checker) Date: Thu, 8 Dec 2005 20:55:02 -0500 (EST) Subject: [Paleopsych] Newsweek: Charles Darwin: Evolution of a Scientist Message-ID: Charles Darwin: Evolution of a Scientist http://www.msnbc.msn.com/id/10118787/site/newsweek/ [Arts and Letters Daily pointed to several articles on the evolution controversy. Here are most of them. Like the summary in The Week of the great man, this is also very good.] He had planned to enter the ministry, but his discoveries on a fateful voyage 170 years ago shook his faith and changed our conception of the origins of life. By Jerry Adler Newsweek Nov. 28, 2005 issue - On a December night in 1831, HMS Beagle, on a mission to chart the coast of South America, sailed from Plymouth, England, straight into the 21st century. Onboard was a 22-year-old amateur naturalist, Charles Darwin, the son of a prosperous country doctor, who was recruited for the voyage largely to provide company for the Beagle's aloof and moody captain, Robert FitzRoy. For the next five years, the little ship - just 90 feet long and eight yards wide - sailed up and down Argentina, through the treacherous Strait of Magellan and into the Pacific, before returning home by way of Australia and Cape Town. Toward the end of the voyage, the Beagle spent five weeks at the remote archipelago of the Galapagos, home to giant tortoises, black lizards and a notable array of finches. Here Darwin began to formulate some of the ideas about evolution that would appear, a quarter-century later, in "The Origin of Species," which from the day it was written to the present has been among the most influential books ever published. Of the revolutionary thinkers who have done the most to shape the intellectual history of the past century, two - Sigmund Freud and Karl Marx - are in eclipse today, and one - Albert Einstein - has been accepted into the canon of modern thought, even if most people still don't understand what he was thinking. Darwin alone remains unassimilated, provocative, even threatening to some - like Pat Robertson, who recently warned the citizenry of Dover, Pa., that they risked divine wrath for siding with Darwin in a dispute over high-school biology textbooks. Could God still be mad after all this time? Unintentionally, but inescapably, that is the question raised by a compelling new show that opened Saturday at the American Museum of Natural History in New York. Here are the beetles Darwin collected fanatically, the fossils and ferns he studied obsessively, two live Galapagos tortoises like the ones he famously rode bareback, albeit these were hatched in the Milwaukee County Zoo.
And here are the artifacts of his life: his tiny single-shot pistol, his magnifying glass and rock hammer - and the Bible that traveled around the world with him, a reminder that before his voyage he had been studying for the ministry. (Indeed, in a letter to his father, who opposed the trip, he listed all the latter's objections, starting with "disreputable to my character as a clergyman hereafter." Little did he imagine.) The show, which will travel to Boston, Chicago and Toronto before ending its tour in London in Darwin's bicentennial year of 2009, coincides by chance with the publication of two major Darwin anthologies as well as a novel by best-selling author John Darnton, "The Darwin Conspiracy," which playfully inverts history by portraying Darwin as a schemer who dispatched a rival into a volcano and stole the ideas that made him famous. Visitors to Britain will note that Darwin has replaced that other bearded Victorian icon, Charles Dickens, on the British 10-pound note. "Even people who aren't comfortable with Darwin's ideas," says Niles Eldredge, the museum's curator of paleontology, "are fascinated by the man." In part, the fascination with the man is being driven by his enemies, who say they're fighting "Darwinism," rather than evolution or natural selection. "It's a rhetorical device to make evolution seem like a kind of faith, like 'Maoism'," says Harvard biologist E. O. Wilson, editor of one of the two Darwin anthologies just published. (James D. Watson, codiscoverer of DNA, edited the other, but both include the identical four books.) "Scientists," Wilson adds, "don't call it 'Darwinism'." But the man is, in fact, fascinating. His own life exemplifies the painful journey from moral certainty to existential doubt that is the defining experience of modernity. He was an exuberant outdoorsman who embarked on one of the greatest adventures in history, but then never again left England. He lived for a few years in London before marrying his first cousin Emma, and moving to a country house where he spent the last 40 years of his life, writing, researching and raising his 10 children, to whom he was extraordinarily devoted. Eldredge demonstrates, in his book accompanying the museum show, "Darwin: Discovering the Tree of Life," how the ideas in "The Origin of Species" took shape in Darwin's notebooks as far back as the 1830s. But he held off publishing until 1859, and then only because he learned that a younger scientist, Alfred Russel Wallace, had come up with a similar theory. Darwin was afflicted throughout his later life by intestinal distress and heart palpitations, which kept him from working for more than a few hours at a time. There are two theories about this mysterious illness: a parasite he picked up in South America, or, as Eldredge believes, anxiety over where his intellectual journey was leading him, and the world. It appeared to many, including his own wife, that the destination was plainly hell. Emma, who had other plans for herself, was tormented to think they would spend eternity apart. Darwin knew full well what he was up to; as early as 1844, he famously wrote to a friend that to publish his thoughts on evolution would be akin to "confessing a murder." To a society accustomed to searching for truth in the pages of the Bible, Darwin introduced the notion of evolution: that the lineages of living things change, diverge and go extinct over time, rather than appear suddenly in immutable form, as Genesis would have it.
A corollary is that most of the species alive now are descended from one or at most a few original forms (about which he?like biologists even today?has little to say). By itself this was not a wholly radical idea; Darwin's own grandfather, the esteemed polymath Erasmus Darwin, had suggested a variation on that idea decades earlier. But Charles Darwin was the first to muster convincing evidence for it. He had the advantage that by his time geologists had concluded that the Earth was millions of years old (today we know it's around 4.5 billion); an Earth created on Bishop Ussher's Biblically calculated timetable in 4004 B.C. wouldn't provide the scope necessary to come up with all the kinds of beetles in the world, or even the ones Darwin himself collected. And Darwin had his notebooks and the trunkloads of specimens he had shipped back to England. In Argentina he unearthed the fossil skeleton of a glyptodont, an extinct armored mammal that resembled the common armadillos he enjoyed hunting. The armadillos made, he wrote, "a most excellent dish when roasted in [their] shell," although the portions were small. The glyptodont, by contrast, was close to the size of a hippopotamus. Was it just a coincidence that both species were found in the same place?or could the smaller living animal be descended from the extinct larger one? But the crucial insights came from the islands of the Galapagos, populated by species that bore obvious similarities to animals found 600 miles away in South America?but differences as well, and smaller differences from one island to another. To Darwin's mind, the obvious explanation was that the islands had been colonized from the mainland by species that then evolved along diverging paths. He learned that it was possible to tell on which island a tortoise was born from its shell. Did God, the supreme intelligence, deign to design distinctive shell patterns for the tortoises of each island? Darwin's greater, and more radical, achievement was to suggest a plausible mechanism for evolution. To a world taught to see the hand of God in every part of Nature, he suggested a different creative force altogether, an undirected, morally neutral process he called natural selection. Others characterized it as "survival of the fittest," although the phrase has taken on connotations of social and economic competition that Darwin never intended. But he was very much influenced by Thomas Malthus, and his idea that predators, disease and a finite food supply place a limit on populations that would otherwise multiply indefinitely. Animals are in a continuous struggle to survive and reproduce, and it was Darwin's insight that the winners, on average, must have some small advantage over those who fall behind. His crucial insight was that organisms which by chance are better adapt-ed to their environment?a faster wolf, or deer?have a better chance of surviving and passing those characteristics on to the next generation. (In modern terms, we would say pass on their genes, but Darwin wrote long before the mechanisms of heredity were understood.) Of course, it's not as simple as a one-dimensional contest to outrun the competition. If the climate changes, a heavier coat might represent the winning edge. For a certain species, intelligence has been a useful trait. Evolution is driven by the accumulation of many such small changes, culminating in the emergence of an entirely new species. 
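[A minimal sketch of the mechanism just described, in Python. Nothing here comes from Darwin or the article; the survival function, mutation size and population size are made-up illustrations only. A heritable trait confers a small survival edge, offspring inherit it with tiny random changes, and the population mean drifts steadily upward:

  import random

  def generation(pop, mut=0.01):
      # Selection: survival probability rises slightly with the trait value.
      survivors = [t for t in pop if random.random() < 0.5 + t]
      # Heredity with variation: each offspring copies a survivor's trait,
      # plus a small random mutation (clamped so the survival chance
      # stays a valid probability).
      return [min(max(random.choice(survivors) + random.gauss(0, mut), 0.0), 0.5)
              for _ in range(len(pop))]

  pop = [0.0] * 1000               # start with no advantage anywhere
  for _ in range(200):
      pop = generation(pop)
  print(sum(pop) / len(pop))       # mean trait is now well above zero

No foresight and no goal - just differential survival plus imperfect copying, which is all the theory requires.]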
"[F]rom the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows," Darwin wrote. And there was an even more troubling implication to his theory. To a species that believed it was made in the image of God, Darwin's great book addressed only this one cryptic sentence: "Much light will be thrown on the origin of man and his history." That would come 12 years later, in "The Descent of Man," which explicitly linked human beings to the rest of the animal kingdom by way of the apes. "Man may be excused for feeling some pride at having risen, though not through his own exertions, to the very summit of the organic scale," Darwin wrote, offering a small sop to human vanity before his devastating conclusion: "that man with all his noble qualities ... still bears in his bodily frame the indelible stamp of his lowly origin." So it was apparent to many even in 1860?when the Anglican Bishop Samuel Wilberforce debated Darwin's defender Thomas Huxley at Oxford?that Darwin wasn't merely contradicting the literal Biblical account of a six-day creation, which many educated Englishmen of his time were willing to treat as allegory. His ideas, carried to their logical conclusion, appeared to undercut the very basis of Christianity, if not indeed all theistic religion. Was the entire panoply of life stretching back millions of years to its single-celled origins, with its innumerable extinctions and branchings, really just a prelude and backdrop to the events of the Bible? When did Homo sapiens, descended by a series of tiny changes in an unbroken line from earlier species of apes, develop a soul? The British biologist Richard Dawkins, an outspoken defender of Darwin and a nonbeliever, famously wrote that evolution "made it possible to be an intellectually fulfilled atheist." Although Darwin struggled with questions of faith his whole life, he ultimately described himself as an "Agnostic." But he reached that conclusion through a different, although well-traveled, route. William Howarth, an environmental historian who teaches a course at Princeton called "Darwin in Our Time," dates Darwin's doubts about Christianity to his encounters with slave-owning Christians?some of them no doubt citing Scripture as justification?which deeply offended Darwin, an ardent abolitionist. More generally, Darwin was troubled by theodicy, the problem of evil: how could a benevolent and omnipotent God permit so much suffering in the world he created? Believers argue that human suffering is ennobling, an agent of "moral improvement," Darwin acknowledged. But with his intimate knowledge of beetles, frogs, snakes and the rest of an omnivorous, amoral creation, Darwin wasn't buying it. Was God indifferent to "the suffering of millions of the lower animals throughout almost endless time"? In any case, it all changed for him after 1851. In that year Darwin's beloved eldest daughter, Annie, died at the age of 10?probably from tuberculosis?an instance of suffering that only led him down darker paths of despair. A legend has grown up that Darwin experienced a deathbed conversion and repentance for his life's work, but his family has always denied it. He did, however, manage to pass through the needle's eye of Westminster Abbey, where he was entombed with honor in 1882. 
So it's not surprising that, down to the present day, fundamentalist Christians have been suspicious of Darwin and his works?or that in the United States, where 80 percent of the population believe God created the universe, less than half believe in evolution. Some believers have managed to square the circle by mapping out separate realms for science and religion. "Science's proper role is to explore natural explanations for the material world," says the biologist Francis Collins, director of the Human Genome Project and an evangelical Christian. "Science provides no answers to the question 'Why are we here, anyway?' That is the role of philosophy and theology." The late Stephen Jay Gould, a prolific writer on evolution and a religious agnostic, took the same approach. But, as Dawkins tirelessly observes, religion makes specific metaphysical claims that appear to conflict with those of evolution. Dealing with those requires some skill in Biblical interpretation. In mainstream Christian seminaries the dominant view, according to Holmes Rolston III, a philosopher at Colorado State University and author of "Genes, Genesis and God," is that the Biblical creation story is a poetic version of the scientific account, with vegetation and creatures of the sea and land emerging in the same basic order. In this interpretation, God gives his creation a degree of autonomy to develop on its own. Rolston points to Genesis 1:11, where God, after creating the heavens and the Earth, says, "Let the Earth put forth vegetation ..." "You needed a good architect at the big bang to get the universe set up right," he says. "But the account describes a God who opens up possibilities in which creatures are generated in an Earth that has these rich capacities." Collins identifies the soul with the moral law, the uniquely human sense of right and wrong. "The story of Adam and Eve can thus be interpreted as the description of the moment at which this moral law entered the human species," he says. "Perhaps a certain threshold of brain development had to be reached before this became possible?but in my view the moral law itself defies a purely biological explanation." The Darwin exhibit was conceived in 2002, when the current round of Darwin-bashing was still over the horizon, but just in those three years' time museum officials found they had to greatly expand their treatment of the controversy?in particular, the rise of "intelligent design" as an alternative to natural selection. ID posits a supernatural force behind the emergence of complex biological systems?such as the eye?composed of many interdependent parts. Although ID advocates have struggled to achieve scientific respectability, biologists overwhelmingly dismiss it as nonsense. Collins comments, in a video that is part of the museum show: "[ID] says, if there's some part of science that you can't understand, that must be where God is. Historically, that hasn't gone well. And if science does figure out [how the eye evolved]?and I believe it's very likely that science will ... then where is God?" Where is God? it is the mournful chorus that has accompanied every new scientific paradigm over the last 500 years, ever since Copernicus declared him unnecessary to the task of getting the sun up into the sky each day. The church eventually reconciled itself to the reality of the solar system, which Darwin, perhaps intentionally, invoked in the stirring conclusion to the "Origin": "There is grandeur in this view of life ... 
that whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved." For all his nets and guns and glasses, Darwin never found God; by the same token, the Bible has nothing to impart about the genetic relationships among the finches he did find. But it is human nature to seek both kinds of knowledge. Perhaps after a few more cycles of the planet, we will find a way to pursue them both in peace. With Anne Underwood and William Lee Adams From checker at panix.com Fri Dec 9 01:55:17 2005 From: checker at panix.com (Premise Checker) Date: Thu, 8 Dec 2005 20:55:17 -0500 (EST) Subject: [Paleopsych] Jerusalem Post: Are Jews born smart? Message-ID: Are Jews born smart? http://www.jpost.com/servlet/Satellite?cid=1132475650155&pagename=JPost%2FJPArticle%2FPrinter [The idea that we use only 5-7% of our brain flies in the face of evolutionary logic. The brain uses 12% of the body's calories yet constitutes only 2% of its mass.] JEREMY MAISSEL , THE JERUSALEM POST Nov. 29, 2005 Ashkenazi Jews are genetically intellectually superior to everyone else. This is the conclusion of a recent "scientific" study entitled Natural History of Ashkenazi Intelligence that triggered several articles in popular publications such as New York Magazine, The New York Times and the Economist. In this study, Gregory Cochran, Jason Hardy and Henry Harpending of the University Of Utah's anthropology department suggest a genetic explanation to account for this remarkable intellectual achievement. They base their hypothesis on four observations. First, that Ashkenazi Jews have the highest average IQ of any ethnic grouping. Second, Ashkenazim have a very low inward gene flow (low intermarriage). Third, historic restrictions on professions allowed to Jews, such as money-lending, banking and tax farming, in which higher intelligence strongly favored economic success, in turn led to increased reproductive success. Low intermarriage acted as a selective process genetically favoring these abilities. Fourth, genetic mutations responsible for diseases commonly found in Ashkenazi Jews, such as Tay-Sachs, are responsible for improved intelligence. My initial reaction to a theory like this is suspicion laced with a healthy dose of skepticism. Undoubtedly Ashkenazim have made a disproportionate contribution to Western intellectual and cultural life - think Freud, Einstein, Mahler, or Woody Allen and Jerry Seinfeld, to name but a few. But saying that Ashkenazi genes are different calls into question the motivation behind the research. SHOULD 'RACE' be dignified as a subject of scientific study? To refuse to investigate a subject, however objectionable, would in itself be unscientific. Yet the attention of the scientific community alone lends it credibility. This study is to be published in The Journal of Biosocial Science in 2006 by Cambridge University Press. The paper drew considerable criticism for both its aims and methods from geneticists, historians, social scientists and other academics as "poor science" - condemning its polemical style and the lack of usual rigor and dispassion of scientific texts. But what do we do with the conclusions of the thesis? Maybe file them with Jewish conspiracies such as The Protocols of the Elders of Zion? Claiming we are a race genetically differentiated from the rest of humanity could provide excellent material for anti-Semites. 
It could share a shelf with other "scientific" works on race and intelligence such as those of Arthur Jensen or The Bell Curve by Charles Murray and Richard Herrnstein - which questioned affirmative action in the US, claiming that African-Americans are genetically inferior in intellectual abilities. Is the Harpending and Cochran study any less odious for the fact that it portrays Jews in a positive light? JUDAISM HAS never advocated Jewish racial superiority. Indeed, the Talmud (Sanhedrin 38a) explains that Adam, the biblical first man, was created singly as the common forebear of all mankind so that future families would not quarrel over claims of superiority in their respective ancestry. If racial purity was important the Jewish people would not have accepted converts, or would maybe maybe reconsider the status of their offspring. Yet we have the biblical story of Ruth, a convert who is not only accepted into the Jewish people, but whose descendents include King David and, ultimately, the Messiah. Down the centuries, reluctance to accept converts was based on concerns about the smooth transmission of family traditions, religious observances, history and culture, and not the watering-down of blood, diluting DNA, or contamination of the Jewish gene pool. Being "the chosen people" does not make Jews superior either. The idea of chosenness first appears in the book of Exodus (19:5-6) where, contingent on complying with and keeping the Divine covenant, the Jewish people is singled out to become "a kingdom of priests and a holy nation." In the words of Henri Atlan: "Election does not imply superiority or inherent sanctity, since the correct reading of the Bible in fact implies conditional chosenness. The election is one of duty, not of rights or attributes." IF JEWS aren't racially superior, then, how does one account for the undeniably disproportionate achievements of Jews (numbering 0.2% of the world population) at winning Nobel prizes, for example? There is a "self-fulfilling prophecy" explanation. Nobel prizes are awarded according to a set of culturally-rooted values - extolling the virtues of Western civilization and rewarding its paradigms, we should bear in mind that Judaism made a significant contribution to that civilization. Jews have always been literate, and historically the professional restrictions on Ashkenazi Jews encouraged them to promote "exile-proof" skills. They valued and encouraged learning, hard work and achievement. These were a cultural legacy, not innate qualities. If race is the source of those achievements, where does hard work or personal endeavor enter the equation? If I am an Ashkenazi Jew, is it my destiny to achieve? And what do we do with this within the Jewish world? We really don't need another source of divisiveness along the Ashkenazi/Sephardi rift. My own view as an educator is that everyone has the same intellectual potential, regardless of lineage. Psychologists maintain that the average person uses only 5-7% of that potential. Differing levels of achievement among people are accounted for by the amount of their potential they have managed to exploit. If there is any common factor accounting for the achievement of some exceptional Ashkenazi Jews it may be their cultural legacy that has enabled them to make more of themselves. Their achievements are not predestined by an accident of birth. The writer, a member of Kibbutz Alumim, is senior educator in Melitz Centers for Jewish-Zionist Education. 
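[As promised above, a toy model of the mechanism the Utah study leans on for its fourth observation: an allele that is harmful in double dose can persist if single-dose carriers get a fitness edge. This is the standard one-locus heterozygote-advantage calculation; the selection coefficients below are invented for illustration and are not measured Tay-Sachs values.

  # Deterministic selection at one locus with genotypes AA, Aa, aa.
  # Fitnesses: AA = 1 - s1 (slight disadvantage), Aa = 1 (carrier edge),
  # aa = 1 - s2 (severe disease). Equilibrium frequency of a is s1/(s1+s2).
  def step(q, s1=0.002, s2=0.2):
      p = 1.0 - q
      w_bar = p*p*(1 - s1) + 2*p*q + q*q*(1 - s2)   # mean fitness
      return (p*q + q*q*(1 - s2)) / w_bar           # freq of a after selection

  q = 1e-4                      # start with a rare "disease" allele
  for _ in range(5000):
      q = step(q)
  print(round(q, 4))            # ~0.0099, i.e. s1/(s1+s2)

The sketch shows only that a weak heterozygote advantage keeps a lethal-in-homozygotes allele at a stable, non-trivial frequency. Whether that is what actually happened in Ashkenazi populations is exactly what the study's critics dispute.]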
From checker at panix.com Fri Dec 9 01:56:14 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 8 Dec 2005 20:56:14 -0500 (EST)
Subject: [Paleopsych] Physics World: Does God play dice?
Message-ID:

Does God play dice? (December 2005) - Physics World - PhysicsWeb
http://www.physicsweb.org/articles/world/18/12/2/1

[I like Seth Lloyd's 10^120 calculation of the maximum number of calculations since the universe began. I did something similar, namely to take 1. the number of photons; 2. the speed of light divided by the Planck distance, as the number of movements a photon can make per unit of time; and 3. the number of units of time since the Big Bang. I multiplied them together and got, iirc, just this 10^120. Now since 2^10 is approx. 10^3, 10^120 is approx. (2^10)^40 = 2^400. I've seen the 10^120 figure elsewhere, but this may just have been a repeat of Lloyd's reasoning. (A rough numerical check appears in a bracketed note below.)

[And so a key of about 400 bits would be absolutely unbreakable by brute force in the next 13.5 billion years, even if the entire universe were devoted to breaking it AND there were no barriers, like the speed of light, to slow down the calculations coming together.

[Yet I've been told that it is possible to crack a 600-bit encryption. Please reconcile this! (Presumably the reconciliation is that a "600-bit" key usually means a 600-bit RSA modulus, which is broken by factoring - a shortcut vastly cheaper than trying all possibilities - so the effective work factor is far below 600 bits.) And while you are at it, tell me how much communication is slowed down as the number of bits increases.

[A different point: I can hardly think that superstring theory should be stopped just because we NOW have no means of testing it. This is like Dr. Michael Behe saying there must be an intelligent designer because Dr. Michael Behe cannot figure out how life evolved. It is immoral ever to stop inquiry. Unless you run out of grant money, of course.]

Forum: December 2005

Einstein was one of the founders of quantum mechanics, yet he disliked the randomness that lies at the heart of the theory. God does not, he famously said, play dice. However, quantum theory has survived a century of experimental tests, although it has yet to be reconciled with another of Einstein's great discoveries - the general theory of relativity. Below, four theorists - Gerard 't Hooft, Edward Witten, Fay Dowker and Paul Davies - outline their views on the current status of quantum theory and the way forward.

Gerard 't Hooft argues that the problems we face in reconciling quantum mechanics with general relativity could force us to reconsider the basic principles of both theories.

If there is any preconceived notion concerning the laws of nature - one that we can rely on without any further questioning - it is the assumption that they are controlled by strict logic. Under all conceivable circumstances, the laws of nature should dictate how the universe evolves. Curiously, however, quantum mechanics has given a new twist to this adage. It does not allow a precise sequence of events to be predicted, only statistical averages. All statistical averages can be predicted - in principle with infinite accuracy - but nothing more than that. Einstein was one of the first people to protest against this impoverishment of the concept of logic. It has turned out, however, to be a fact of life. Quantum mechanics is the only known realistic description of the microscopic parts of our universe like atoms and molecules, and it works just fine. Logically impoverished or not, quantum mechanics appears to be completely self-consistent. But how does quantum mechanics tie in with particles that are much smaller than atoms?
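[Re the 10^120 figure in my note above - a rough numerical check in Python. This follows the energy-time route of Lloyd's papers (the Margolus-Levitin bound) rather than my photon count; every constant is an order-of-magnitude assumption, not a measured value:

  import math

  HBAR = 1.05e-34   # J*s, reduced Planck constant
  C = 3.0e8         # m/s, speed of light
  MASS = 1e53       # kg, rough mass of the observable universe (assumption)
  AGE = 4.3e17      # s, roughly 13.7 billion years

  energy = MASS * C**2
  # Margolus-Levitin bound: a system with energy E can perform at most
  # about 2E/(pi*hbar) elementary operations per second.
  ops = 2 * energy * AGE / (math.pi * HBAR)
  print("total operations ~ 10^%d" % round(math.log10(ops)))   # ~ 10^121
  # Brute-force key search becomes physically hopeless once the
  # keyspace exceeds that operation count:
  print("equivalent key size ~ %d bits" % round(math.log2(ops)))  # ~ 403

Which is why roughly 400 bits, not 600, is where brute force runs out of universe - and it matches Davies's remark further down that the Lloyd limit is crossed at about 400 entangled electrons, since each electron doubles the dimension of the state space.]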
The Standard Model is the beautiful solution to two fundamental problems: one, how to combine quantum mechanics with Einstein's theory of special relativity; and two, how to explain numerous experimental observations concerning the behaviour of sub-atomic particles in terms of a concise theory. This model tells us how far we can go with quantum mechanics. Provided that we adhere strictly to the principles of quantum field theory, nature obeys both quantum mechanics and special relativity up to arbitrarily small distance and time scales. Just like all other successful theories of nature, the Standard Model obeys the notions of locality and causality, which makes this theory completely comprehensible. In other words, the physical laws of this theory describe in a meaningful way what happens under all conceivable circumstances. The standard theory of general relativity, which describes the gravitational forces in the macroscopic world, approaches a similar degree of perfection. Einstein's field equations are local, and here, cause also precedes effect in a local fashion. These laws, too, are completely unambiguous. But how can we combine the Standard Model with general relativity? Many theorists appear to think that this is just a technical problem. But if I say something like "quantum general relativity is not renormalizable", this is much more than just a technicality. Renormalizability has made the Standard Model possible, because it lets us answer the question of what happens at extremely tiny distance scales. Or, more precisely, how can we see that cause precedes effect there? If cause did not precede effect, we would have no causality or locality - and no theory at all. Asking both questions in quantum gravity does not appear to make sense. At distance scales small compared with the Planck scale, some 10^-33 cm, there seems to be no such thing as a space-time continuum. That is because gravity causes space-time to be highly curved at very small distances. And at small distance scales, this curvature exceeds all bounds. But what exactly does this mean? Are space and time discrete? What then do concepts such as causality and locality mean? Without proper answers to such questions, there is no logically consistent formalism, not even a quantum-mechanical one. One ambitious attempt to combine quantum mechanics with general relativity is superstring theory. However, I am unhappy with the answers that this theory seems to suggest to us. String theory seems to be telling us to believe in "magic": it is claimed that "duality theorems", which are not properly understood, will allow us to predict features without reference to locality or causality. To me such magic is synonymous with "deceit". People only rely on magic if they do not understand what is really going on. This is not acceptable in physics. In thinking about these matters, I have reached a conclusion that few other researchers have adopted: the problem lies with quantum mechanics, possibly with general relativity, or conceivably with both. Quantum mechanics could well relate to micro-physics the same way that thermodynamics relates to molecular physics: it is formally correct, but it may well be possible to devise deterministic laws at the micro scale. However, many researchers say that the mathematical nature of quantum mechanics does not allow this - a claim deduced from what are known as "Bell inequalities". 
In 1964 John Bell showed that a deterministic theory should, under all circumstances, obey mathematical inequalities that are actually violated by the quantum laws. This contradiction, however, arises if one assumes that the particles we talk about, and their properties, are real, existing entities. But if we assume that objects are only real if they have been precisely defined, including all oscillations as small as the Planck scale - and that only our measurements of the properties of particles are real - then there is no blatant contradiction. One might assume that all macroscopic phenomena, such as particle positions, momenta, spins and energies, relate to microscopic variables in the same way thermodynamic concepts such as entropy and temperature relate to local, mechanical variables. Particles, and their properties, are not (or not entirely) real in the ontological sense. The only realities in this theory are the things that happen at the Planck scale. The things we call particles are chaotic oscillations of these Planckian quantities. What exactly these Planckian degrees of freedom are, however, remains a mystery.

This leads me to an even more daring proposition. Perhaps general relativity does not appear in the formalism of the ultimate equations of nature. In making the transition from a deterministic theory to a statistical - i.e. quantum mechanical - treatment, one may find that the quantum description develops many more symmetries than the deeper deterministic description. Let me try to clarify what I mean. If, according to the deterministic theory, two different states evolve into the same final state, then quantum mechanically these states will be indistinguishable. We call such a feature "information loss". In quantum field theories such as the Standard Model, we often work with fields that are not directly observable, because of "gauge invariance", which is a symmetry. Now, I propose to turn this around. In a deterministic theory with information loss, certain states are unobservable (because information about them has disappeared). When one uses a quantum-mechanical language to describe such a situation, gauge symmetries naturally arise. These symmetries are not present in the initial laws. The "general co-ordinate covariance" of general relativity could be just such a symmetry.

This is indeed an unusual view of the concept of symmetries in nature. Nature provides us with one indication that perhaps points in this direction: the unnaturally tiny value of the cosmological constant Lambda. It indicates that the universe has a propensity to stay flat. Why this happens is a mystery that cannot be explained in any theory in which gravitation is subject to quantum mechanics. If, however, an underlying, deterministic description naturally features some preferred flat co-ordinate frame, the puzzle will cease to perplex us. There might be another example, which is the preservation of the symmetry between the quarks in the subatomic world, called charge-parity (CP) symmetry - a symmetry that one would have expected to be destroyed by their strong interactions.

The problem of the cosmological constant has always been a problem of quantum gravity. I am convinced that the small value of Lambda cannot be reconciled with the standard paradigms of quantized fields and general relativity. It is obvious that drastic modifications in our way of thinking, such as the ones hinted at in this text, are required to solve the problems addressed here.
Edward Witten thinks that one of the most perplexing aspects of quantum mechanics is how to apply it to the whole universe.

Quantum mechanics is perplexing, and likely to remain so. The departure from ordinary classical intuition that came with the emergence of quantum mechanics is almost surely irrevocable. An improved future theory, if there is one, will probably only lead us farther afield. Is there any hint of a clue that might lead to a more complete theory? Experimental physicists are increasingly able to perform experiments that used to be called thought experiments in textbooks. Quantum mechanics has held up brilliantly. If there is a cloud on the horizon, it is that it is hard to see what it means to apply quantum mechanics to the whole universe. I suppose that there are two aspects to this. Quantum-mechanical probabilities do not seem to make much sense when applied to the whole universe, which appears to happen only once. And we all find it confusing to include ourselves in a quantum description of the whole universe. Yet applying quantum mechanics to something less than the whole universe - to an experimental system that is observed by a classical human observer - is precisely what forces us to interpret quantum mechanics in terms of probabilities. If we had a good understanding of what quantum mechanics means when applied to the whole universe, we might ultimately say that the notion that "God plays dice" results from trying to describe a quantum reality in classical terms.

Fay Dowker thinks that the puzzles of quantum mechanics could be solved by considering what are known as the "histories" of a system, as introduced by Richard Feynman.

The development of quantum mechanics was a major advance in our understanding of the physical world. However, quantum mechanics has not yet come fully to fruition because it has not replaced classical mechanics in the way that general relativity has replaced Newtonian gravity. In the latter case, we can start from general relativity and derive the laws of Newtonian gravity as an approximation; we can also predict when - and quantitatively to what extent - that approximation is valid. But we cannot yet derive classical mechanics from quantum mechanics in the same way. The reason is that, in its standard textbook formulation, quantum mechanics requires us to assume we have classical measuring equipment. Predictions about the measurements that are recorded, or observed, by this equipment form the scientific output of the theory. But without a classical observer, we cannot make any predictions. While many physicists have been content with quantum mechanics in its textbook form, others - beginning with Einstein - have sought to complete the quantum revolution and make it a truly universal theory, independent of any classical crutch. One attempt to sort out quantum mechanics is to view it as a generalization of classical "stochastic" theories, such as Brownian motion. In Brownian motion, a particle moves along one of a number of possible trajectories, or "histories". The notion of a history is crucial here: it is a complete description of the system at each time between some initial and final times. A history is an a priori possibility for the complete evolution of the system, and the collection of all the histories is called the "sample space". The system will have only one actual history from the sample space but any one is an a priori possibility.
The actual history is chosen from the sample space at random according to the "law of motion" for a Brownian particle. This law is a probability distribution, or "measure", on the sample space that outlines, roughly, how likely each history is for the actual evolution. Quantum mechanics also has a sample space of possible histories - trajectories of a particle, say - but on this occasion the sample space has a "quantal measure" associated with it. As with Brownian motion, the quantal measure gives a non-negative number for each subset of the sample space. However, this quantal measure cannot now be interpreted as a probability because of the phenomenon of quantum interference, which means that the numbers cannot be added together like probabilities. For example, when electrons pass through a Young's double-slit set-up, the quantal measure of the set of all histories for the electron that ends up at a particular region on the screen is not just the quantal measure of the set of histories that goes through one slit added to the quantal measure of the set of histories that goes through the other. Essentially, this is because we calculate the quantal measure of a bunch of histories as the square of the sum of the amplitudes of the histories in the bunch, and when you add some numbers and then square the result, you do not get the sum of the squares - there are also cross terms, which are the expression of the interference that spoils the interpretation as probabilities. (Concretely, if two bunches of histories have amplitudes a and b, the measure of their union is |a + b|^2 = |a|^2 + |b|^2 + 2Re(a b*); the cross term 2Re(a b*) is the interference.) The challenge is to find the right interpretation of this quantal measure, one that explains the textbook rules by predicting objectively when classical "measurement" situations arise. This includes the struggle to understand quantum mechanics as a theory that respects relativistic causality in the face of experimental evidence that widely separated particles can be correlated in ways that seem incompatible with special relativity. It is no coincidence that those physicists who are at the forefront of developing this histories approach to quantum mechanics - people like James Hartle from the University of California at Santa Barbara, Chris Isham at Imperial College, London and Rafael Sorkin at Syracuse University - all work on the problem of quantum gravity, which is the attempt to bring gravity within the framework of a universal quantum theory. In histories quantum gravity, each history in the sample space of possibilities is not in space-time; rather, each history is a space-time. If a theory of quantum gravity of this sort can be achieved, it would embody Einstein's hopes for a unification in which matter and space-time, observer and observed, are all treated on an equal footing.

Paul Davies believes that the complexity of a system could define the boundary between the quantum and classical worlds.

Despite its stunning success in describing a wide range of phenomena in the micro-world, quantum mechanics remains a source of puzzlement. The trouble stems from meshing the quantum to the classical world of familiar experience. A quantum particle can be in a superposition of states - for example it may be in many places at once - whereas the "classical" world of observation reveals a single reality. This conundrum is famously captured by the paradox of Schrödinger's cat, in which a quantum superposition is amplified in order to put an animal into an apparently live-dead hybrid state.
Physicists divide into those who believe quantum mechanics is a complete theory that applies to the universe as a whole, regardless of scale, and those who think it must break down at some level between atom and observer. The former group subscribe to the "many universes" interpretation, according to which all branches of a quantum superposition are equally valid and describe parallel realities. Though many physicists reject this interpretation as unacceptably bizarre, there is no consensus on the alternative. Quantum mechanics does not seem to fail at any obvious scale of size or mass, as the phenomenon of superconductivity attests. So perhaps some other property of a physical system signals the emergence of classicality from the quantum realm? I want to suggest that complexity may be the appropriate quantity. Just how complex must a system be to qualify for the designation "classical"? A cat is, I submit, a classical object because it is complex enough to be either alive or dead, and not both at the same time. But specifying a precise measure of complexity is difficult. Many definitions on offer are based on information theory or computing. There is, however, a natural measure of complexity that derives from the very nature of the universe. This is defined by the maximum amount of information that the universe can possibly have processed since its origin in a Big Bang. Seth Lloyd of the Massachusetts Institute of Technology has computed this to be about 10^120 bits (2000 Nature 406 1047 and 2002 Phys. Rev. Lett. 88 237901). A system that requires more than this quantity of information to describe it in detail is so complex that the normal mathematical laws of physics cannot be applied to arbitrary precision without exceeding the information capacity of the universe. Cosmology thus imposes a small but irreducible uncertainty, or fuzziness, in the operation of physical laws. For most systems the Lloyd limit is irrelevantly large. But quantum systems are described by vectors in a so-called Hilbert space, which may have a great - indeed infinite - number of dimensions. According to my maximum-complexity criterion, quantum mechanics will break down when the dimensionality of the Hilbert space exceeds about 10^120. A simple example is an entangled state of many electrons. This is a special form of superposition in which up and down spin orientations co-exist in all possible combinations. Once there are about 400 electrons in such a state, the Lloyd limit is exceeded (each additional electron doubles the dimension of the Hilbert space, and 2^400 is about 10^120), suggesting that it is at this level of complexity that classicality emerges. Although such a state is hard to engineer, it lies firmly within the design specifications of the hoped-for quantum computer. This is a machine that would harness quantum systems to achieve an exponentially greater level of computing power than a conventional computer. If my ideas are right, then this eagerly awaited technology will never achieve its full promise.

About the authors

Gerard 't Hooft is in the Institute for Theoretical Physics, University of Utrecht, the Netherlands, e-mail g.thooft at phys.uu.nl; Edward Witten is in the Institute for Advanced Study, Princeton, US, e-mail witten at ias.edu; Fay Dowker is at Imperial College, London, UK, e-mail f.dowker at imperial.ac.uk; and Paul Davies is professor of natural philosophy in the Australian Centre for Astrobiology, Macquarie University, Sydney, Australia, e-mail pdavies at els.mq.edu.au

From guavaberry at earthlink.net Fri Dec 9 18:17:33 2005
From: guavaberry at earthlink.net (K.E.)
Date: Fri, 09 Dec 2005 13:17:33 -0500
Subject: [Paleopsych] Shhhhh is this another Ur Strategy?
In-Reply-To: <95.31905577.2c76c886@aol.com>
References: <95.31905577.2c76c886@aol.com>
Message-ID: <7.0.0.16.0.20051209130336.04a4c578@earthlink.net>

hi everyone, sorry to interrupt the present conversation . . . but I've been wondering about this . . . .

What is shhhhh? Does it fall under another Ur strategy, is it a Western custom, or is it a worldwide "instinct" we all have - using shhhhh to shush a baby, to stop or calm a crying baby or child? Do all human babies recognize this as the signal to be quiet? Do all cultures use this?

I imagine it sounding like a snake's rattle, but that doesn't mean much. I've heard that the same sound calms horses, and it sounds similar to the word for thank you in Mandarin.

Do we know anything about shhhhh? Appreciate any thoughts you might have.

thanks, Karen Ellis

Archive 8/16/03 Re: Ur strategies and the moods of cats and dogs

hb to pavel kurakin: I've had an adventure that will force me to stop for the night. One of my cats attacked me and tore several holes in my face, nearly removing one of my eyes.

<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>
The Educational CyberPlayGround
http://www.edu-cyberpg.com/
National Children's Folksong Repository
http://www.edu-cyberpg.com/NCFR/
Hot List of Schools Online and Net Happenings, K12 Newsletters, Network Newsletters
http://www.edu-cyberpg.com/Community/
7 Hot Site Awards
New York Times, USA Today, MSNBC, Earthlink, USA Today Best Bets For Educators, Macworld Top Fifty
<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>

From checker at panix.com Fri Dec 9 21:34:09 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 9 Dec 2005 16:34:09 -0500 (EST)
Subject: [Paleopsych] Christianity Today: Slim for Him
Message-ID:

Slim for Him
http://www.christianitytoday.com/bc/2005/006/14.16.html

[A lot of silly AntiRacism here, but it bears out what I said in my meme, "Imitate Christ, not Elvis!", reproduced at the end. That meme has not spread, at least not to Google.]

Born Again Bodies: Flesh and Spirit in American Christianity
by R. Marie Griffith
Univ. of California Press, 2004
323 pp. $21.95

Several weeks ago my wife and I were driving home from Atlanta to Chapel Hill. A few miles out of the city, my eye caught a billboard featuring a lean young white woman pointing to her bare midriff. The caption read, "Look hon, no scars." The logo at the bottom directed viewers to the website of a local birth control clinic. Baffled (as usual) by the subtleties of modern advertising, I asked my wife what it meant. She patiently explained that it was an ad for tubal ligation. I drove on, thinking something deep like, "Oh."

After reading R. Marie Griffith's Born Again Bodies this past weekend, I saw the billboard in a new light. It is not often that a work of first-rate historical scholarship opens our eyes to the unspoken assumptions regnant in the world around us. But this one--written by a Princeton University religion professor--does. And no wonder. The book is exhaustively researched, elegantly crafted, methodologically self-conscious, and argued with moral passion. The volume marks a worthy successor to Griffith's influential Harvard dissertation on Women's Aglow, published in 1997 as God's Daughters: Evangelical Women and the Power of Submission. The scope of Born Again Bodies is intimidating.
Though focused on the American story, it begins deep in the Middle Ages and ends yesterday. In the process, Griffith ranges back and forth across the Atlantic, lingers among Puritans and their evangelical successors, delves into the intricacies of 19th-century New Thought partisans, ventures into the hermetic realm of body purging and fasting zealots, surveys a plethora of Christian-inspired sexual prescriptions and proscriptions, investigates the largely unknown sideroads of phrenology, physiognomy, and soma typing, and finally ends up in the vast subculture of the contemporary Christian diet industry.

Griffith's main argument can be stated in two sentences. Between the early 17th and the late 20th centuries, American body discipline practices evolved from a ritual of repentance to an exercise in self-gratification. Though a wide range of more or less secular forces propelled the process, Christians in general and white middle-class Protestants in particular pioneered that development. A closely related sub-argument is that Christians perennially have viewed the body as a window into the soul. Occasionally the process worked the other way around. A few body disciplinarians--usually New Thought advocates--felt that they could change the mind by manipulating the body. Either way, everyone, it seems, perceived an intimate connection between the spirit and the flesh. When it came to eternal matters, Christians saw through a glass darkly, but when it came to temporal matters, they saw clearly. The body told no lies.

Historians generally have interpreted the evolving meanings associated with rigorous dieting (and other kinds of physical denial) as a process of secularization. What started as mortification for sin, they say, turned into purposeful renunciation to compensate for the guilt of affluence and leisure. Griffith disagrees. She argues instead that religion has been involved in those cultural protocols from beginning to end. The story is an evangelical one, centered on the good news of abstemious eating: go out, bring the (obese) sinners in, give them the (lo-cal) salvation message, hear their (before-and-after) testimonies, urge them to stay the (one-course) course, offer a helping (though never a second helping) hand to the weak-willed. If it is a New Thought story of gnostic discernment (there are bariatric secrets to be known), it is also a Wesleyan story of entire sanctification (permanent deliverance from the temptations of the palate), and a Reformed story of divine sovereignty (God's nutritional laws are non-negotiable). Above all, it is a millennialist story of can-do achievement. Our destiny lies within our hands. Just put down the fork and push yourself away from the table.

The meat in the sandwich lies in chapter 4, aptly titled "Pray the Weight Away: Shaping Devotional Fitness Culture." In this chapter, inner grace manifests itself most forcefully and unequivocally in a lean, firm body, which stands as a mark of a disciplined life, a holy people and, above all, right standing with God. A sampler of the titles produced by the Christian--mostly evangelical-fundamentalist--diet industry in the past half century tells the tale. In alphabetical order:

Devotions for Dieters
God's Answer to Fat: Lose It!
Help Lord--The Devil Wants Me Fat!
I Prayed Myself Slim
More of Jesus, Less of Me
Pray Your Weight Away
Slim for Him

These and scores of other works, which sell in the millions, have been complemented by religiously based support groups such as Body and Soul, Overeaters Victorious, and 3D (Diet, Discipline, Discipleship), not to mention a cornucopia of dieting paraphernalia--including exercise tapes, work-out clothes, and training manuals--and a never-ending schedule of seminars.

In her analysis of the postwar Christian diet industry, Griffith isolates at least two problems that ought to trouble the conscience. The first is the redefinition of the sins of the mouth. Where once the emphasis was on gluttony (enslavement of the appetites) or disordered desires (longing for a good less than God), now it was on fat. Just that, fat. And so it was that Gwen Shamblin, CEO and founder of Weigh Down workshop, could say, "Grace ... does not go down into the pigpen."

For Griffith, the second and more troubling problem is the diet industry's race and class pretensions, intended or not. She shows that the presumed audience was white, sustained by a "racialized ideal of whiteness, purged of the excesses associated with nonwhite cultures." It also was middle- or upper-middle-class, sustained by the affluence and leisure that made costly diet foods and gear (and for women, cosmetic enhancements) affordable. Those presumptions were not value-neutral. Rather they carried a normative edge that made the firm, angular bodies of an idealized white middle class the rule of thumb for all. Admittedly, a few African Americans, such as T. D. Jakes and Jawanza Kunjufu, joined the crusade. But most of the leaders were white, and most seemed unable to imagine that there might be a difference between good health and (their notion of) good looks, or that economic deprivation and ethnic tradition might play a role in diet options and choices. "In ways both explicit and implicit," she tells us, "white Christian men and women exchanged ideas about how to uphold their image in the world, to sustain their place at the top of the racialized class hierarchy embedded in American society and the Anglo-American Christian tradition." (How the firm, angular bodies of black athletes--ubiquitous in advertising for Nike et al.--figure in this narrative is not entirely clear.)

Though Griffith does not say much about it, there is a Giant Pooka in the story, and it keeps popping up in unexpected places. Bluntly put, the diet industry, Christian and otherwise, is fighting an unwinnable battle. Sociologists, she tells us, have found that religious practice correlates positively with obesity. Christians in general and Southern Baptists in particular are the heaviest. Yet who is surprised? Whole Foods-style supermarkets might be growing, but so are McDonald's franchises. Indeed, I do not recall ever seeing a fast-food franchise boarded up for keeps.

What's not to like about this brilliant and deeply earnest book? Only this: Griffith makes Protestantism the chief protagonist. To be sure, she shows that similar attitudes about body politics crop up among Mormons and, from time to time, Catholics, Jews, and secularists too. Yet she insists that "Protestantism--as the tradition that has most comprehensively influenced the course of American history--takes center stage in this story."
And that the crusade often looked like a Protestant revival also is undeniable. But I see little evidence that the sleek-body promoters drew upon historic Protestant principles, or that they represented the actual life of faith practiced Sunday-by-Sunday in countless Protestant churches. By my lights, the true culprit in this sorry tale is not Protestantism but consumer capitalism gone off the rails. Once upon a time Protestantism had something to do with capitalism's birth, but it should not be forced forever to bear the guilt for capitalism's excesses. I close on a personal note. Many decades ago one of my U.S. history professors--a scholar well-known for his high-minded support of progressive causes--casually remarked in class that President Taft, "being corpulent, was prone to be lazy." Neither he nor the 200-plus students in the lecture hall noted anything amiss. But I winced. As a lifelong battler of the scales, I suspected that Taft had felt the same desperations I have felt. And since then I have wondered about the many ways that I too might have diminished my students' lives. Marie Griffith's marvelous book will make a lot of people think twice. She has done what many historians aspire to do but few actually manage to accomplish: make this world a more humane place. Grant Wacker is professor of Church History at Duke University's Divinity School. He is the author of Heaven Below: Early Pentecostalism and American Culture (Harvard Univ. Press). -------------- Meme 033: Imitate Christ, Not Elvis! sent 4.10.18 We went to a wedding this Summer for a daughter of an Evangelical friend I have known since my college days. (It was I who got him to meet his future wife, so I was indirectly responsible for not only his wedding but for the very existence of his four children.) To my horror, the music at the reception was played by an Elvis impersonator by the name of Rick Spruill. Whether he is a true heir of Presley, I neither know nor care. Elvis was obnoxious when he was alive; his impersonator is obnoxious now. (I wish I had had the foresight to have brought along a CD of Bach's Orchestral Suites, called "Suites for Dancing," played in dance tempos on the theory that Bach intended his suites to be danced to. After the impersonator had left, I could have asked those remaining at the console to slip in the Bach, to delight, I hope, the audience.) Anyhow, I protested to the Evangelicals there that Christ is King, not Elvis, but to no effect. Later I was talking to a Mormon friend, who is hugely overweight in a conversation about Mormonism, in which he repeatedly stressed the importance of winning souls for Christ. I asked him if Christ was King and if he believed in the imitation of Christ. Yes, on both accounts. Then I suggested to him that he was in fact imitating Elvis by eating himself out to Elvis proportions. He said I had a point, and I suggested to him that he think about this the next time he reaches for seconds. Sadly, to all appearances, he continues to imitate Elvis. But here's hoping that the meme, "Imitate Christ, Not Elvis" will spread among the Christian community and become the first diet method in all of history that works. [I am sending forth these memes, not because I agree wholeheartedly with all of them, but to impregnate females of both sexes. Ponder them and spread them.] 
From checker at panix.com Fri Dec 9 21:34:22 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 9 Dec 2005 16:34:22 -0500 (EST)
Subject: [Paleopsych] Telegraph: Umberto Eco: God isn't big enough for some people
Message-ID:

Umberto Eco: God isn't big enough for some people
http://www.arts.telegraph.co.uk/opinion/main.jhtml;jsessionid=WIFVKH3A4A0YHQFIQMFCFFWAVCBQYIV0?xml=/opinion/2005/11/27/do2701.xml&sSheet=/portal/2005/11/27/ixportal.html

[This completely backs up what Lene told me, namely that the secularization thesis (that with modernization comes secularism) has failed in Europe, too: Christianity's decline has been replaced by the rise of New Age religions.]

(Filed: 27/11/2005)

We are now approaching the critical time of the year for shops and supermarkets: the month before Christmas is the four weeks when stores of all kinds sell their products fastest. Father Christmas means one thing to children: presents. He has no connection with the original St Nicholas, who performed a miracle in providing dowries for three poor sisters, thereby enabling them to marry and escape a life of prostitution.

Human beings are religious animals. It is psychologically very hard to go through life without the justification, and the hope, provided by religion. You can see this in the positivist scientists of the 19th century. They insisted that they were describing the universe in rigorously materialistic terms - yet at night they attended seances and tried to summon up the spirits of the dead. Even today, I frequently meet scientists who, outside their own narrow discipline, are superstitious - to such an extent that it sometimes seems to me that to be a rigorous unbeliever today, you have to be a philosopher. Or perhaps a priest.

And we need to justify our lives to ourselves and to other people. Money is an instrument. It is not a value - but we need values as well as instruments, ends as well as means. The great problem faced by human beings is finding a way to accept the fact that each of us will die. Money can do a lot of things - but it cannot help reconcile you to your own death. It can sometimes help you postpone your own death: a man who can spend a million pounds on personal physicians will usually live longer than someone who cannot. But he can't make himself live much longer than the average life-span of affluent people in the developed world. And if you believe in money alone, then sooner or later, you discover money's great limitation: it is unable to justify the fact that you are a mortal animal. Indeed, the more you try to escape that fact, the more you are forced to realise that your possessions can't make sense of your death.

It is the role of religion to provide that justification. Religions are systems of belief that enable human beings to justify their existence and which reconcile us to death. We in Europe have faced a fading of organised religion in recent years. Faith in the Christian churches has been declining. The ideologies such as communism that promised to supplant religion have failed in spectacular and very public fashion. So we're all still looking for something that will reconcile each of us to the inevitability of our own death.

G K Chesterton is often credited with observing: "When a man ceases to believe in God, he doesn't believe in nothing. He believes in anything." Whoever said it - he was right. We are supposed to live in a sceptical age. In fact, we live in an age of outrageous credulity.
The "death of God", or at least the dying of the Christian God, has been accompanied by the birth of a plethora of new idols. They have multiplied like bacteria on the corpse of the Christian Church -- from strange pagan cults and sects to the silly, sub-Christian superstitions of The Da Vinci Code. It is amazing how many people take that book literally, and think it is true. Admittedly, Dan Brown, its author, has created a legion of zealous followers who believe that Jesus wasn't crucified: he married Mary Magdalene, became the King of France, and started his own version of the order of Freemasons. Many of the people who now go to the Louvre are there only to look at the Mona Lisa, solely and simply because it is at the centre of Dan Brown's book. The pianist Arthur Rubinstein was once asked if he believed in God. He said: "No. I don't believe in God. I believe in something greater." Our culture suffers from the same inflationary tendency. The existing religions just aren't big enough: we demand something more from God than the existing depictions in the Christian faith can provide. So we revert to the occult. The so-called occult sciences do not ever reveal any genuine secret: they only promise that there is something secret that explains and justifies everything. The great advantage of this is that it allows each person to fill up the empty secret "container" with his or her own fears and hopes. As a child of the Enlightenment, and a believer in the Enlightenment values of truth, open inquiry, and freedom, I am depressed by that tendency. This is not just because of the association between the occult and fascism and Nazism - although that association was very strong. Himmler and many of Hitler's henchmen were devotees of the most infantile occult fantasies. The same was true of some of the fascist gurus in Italy - Julius Evola is one example - who continue to fascinate the neo-fascists in my country. And today, if you browse the shelves of any bookshop specialising in the occult, you will find not only the usual tomes on the Templars, Rosicrucians, pseudo-Kabbalists, and of course The Da Vinci Code, but also anti-semitic tracts such as the Protocols of the Elders of Zion. I was raised as a Catholic, and although I have abandoned the Church, this December, as usual, I will be putting together a Christmas crib for my grandson. We'll construct it together - as my father did with me when I was a boy. I have profound respect for the Christian traditions - which, as rituals for coping with death, still make more sense than their purely commercial alternatives. I think I agree with Joyce's lapsed Catholic hero in A Portrait of the Artist as a Young Man: "What kind of liberation would that be to forsake an absurdity which is logical and coherent and to embrace one which is illogical and incoherent?" The religious celebration of Christmas is at least a clear and coherent absurdity. The commercial celebration is not even that. o Umberto Eco's latest book is The Mysterious Flame of Queen Loana (Secker & Warburg, ?17.99) From checker at panix.com Fri Dec 9 21:34:28 2005 From: checker at panix.com (Premise Checker) Date: Fri, 9 Dec 2005 16:34:28 -0500 (EST) Subject: [Paleopsych] Atlas Sphere: Good Happens, Too Message-ID: Good Happens, Too http://www.theatlasphere.com/columns/printer_051105-perren-good-happens.php [Increased attention to Frank Sinatra as a sign of things getting better??] 
By Jeffrey Perren Nov 30, 2005 In response to a recent comment of mine, someone asked me for some examples of good things that have happened in the last thirty-five years. So here's a partial inventory. Some of the things listed are personal, some are global, with lots in between. Of course, all of the categories listed below are tightly interrelated. Personal The opportunity to meet like-minded, reasonable, and good people is greater today than it was in prior decades. Just as one example, I would've been very unlikely thirty-five years ago to have 'met' and 'conversed' with some of the fine Ayn Rand admirers I've corresponded with recently. Many could say the same. Intellectually Despite abysmally poor U.S. K-12 (and even college) education, more people are sharing more good ideas and useful information than ever before in history. The opportunity for this kind of cross-fertilization simply didn't exist even as recently as ten years ago. Obviously, the Internet is one major factor, but there are others. The Internet made sharing ideas easy and cheap, but even in the print world there are more magazines now to satisfy every possible interest than ever before. In addition, we've now been the recipients of decades of beneficial influences: Rand, some conservative thinkers (Sowell, for example), a general rise in the number of large bookstore chains, and the failure of grand social experiments. These provide helpful theoretical guidance and useful empirical evidence, allowing us to lead wiser lives. Socially Evolving mores have driven the amount and severity of sexual and racial prejudice, and of rigid adherence to restrictive social behavior, to historically unprecedented low levels. (These are a couple of the few good effects of the 60s.) This 'moral anarchy' creates an opportunity for better, and better-grounded, practices to emerge. The near monolithic thinking that characterized the intellectual atmosphere of the first several decades of the twentieth century is gone, probably for good. Yes, the same causes have certainly also produced far too much post-modern, nihilistic, irrationalist garbage, but this article is about the good things. Politically In my lifetime alone the Soviet Union has morphed, and is no longer an active threat to the U.S. and the rest of the world. The Berlin Wall has been dismantled and Germany re-united. These are not small things. Many formerly socialist countries, India and Argentina to choose only two examples, have moved considerably toward greater freedom. The Middle East, so very troublesome now, is being actively dealt with instead of being left to stew into an even bigger problem later. (Yes, this one is in the nature of a prediction, but the present good is that the U.S. is no longer standing idly by.) The current heated controversies about foreign policy, domestic policy, and the debates about the character of politicians on both major sides of the aisle are actually good. Just as two examples, no one would've been willing to so much as seriously discuss radical changes to tax codes and Social Security until recently. Thirty-five years ago there was plenty of complaining about all these things, but much more uniform opinion and much less real debate. We now have considerable historical experience with socialism and the welfare state, much better grounded arguments for various desired outcomes, and much more substantial disagreements and clearly distinguishable views. This is a necessary prelude to improving the present situation.
And there is much more divided opinion within the two major U.S. parties, with more viable potential alternatives to both than ever before. Artistically Ayn Rand's novels continue to sell phenomenally well. Tom Clancy, Michael Crichton, and Ken Follett continue to write bestsellers. J.K. Rowling's recently released novel made her $36 million in one day, and, to date, her books have sold almost 270 million units. I'm not arguing that these latter writers are anywhere near being in the same league artistically or philosophically; but their novels are not full of degraded people whining about their miserable lives. Quite the reverse. Yes, plenty of the opposite still dominates the publishing industry. Again, this article is about the good things. The dreck produced too often by Hollywood from the 70s to the present is lately accompanied by offerings such as Braveheart, Air Force One, What Women Want, Titanic, Patriot Games, and others. (I'm not making the case that these are great movies, but they're much more reflective of the spirit of the 40s and 50s than those produced during the late 60s to early 80s, after which the trend began to reverse. And none of them would likely have been produced during that time.) In fine painting, Jack Vettriano, Chen Yifei, and a score of other 'romantic realist' painters have been making a living. In some cases, doing very well, thank you. This is not something you would've been likely to see thirty-five years ago. Most popular 'music' continues to be as bad as ever. But with improved distribution mechanisms young people are being (re)introduced to Frank Sinatra, Puccini, and many others who are more popular than they were twenty years ago. This can't help but encourage composers to actually write new good music. Post-modernism is rapidly coming to a close as an active artistic force. This, along with a much wider variety of much less expensive distribution channels, creates an opportunity for more art that is consistently good to be commercially successful. Materially The improvements in this area are pretty obvious. Today we have internet-enabled cell phones, faster and smaller computers, the Internet, satellite TV and radio, artificially increased tree production, a larger average home size, and more efficient heating and air conditioning systems. In the area of biotech products, there are genetically altered foods as well as enhanced agriculture in general, improved pharmaceutical products, and medical technology (e.g., CATs, NMRs, artificial organs). All these have either been introduced or substantially improved in the last thirty-five years. Spiritually There has been a fairly recent widespread revival of concern for ethics in everyday life. (Granted, many of the answers to such concern have been wrong-headed. For the last time, I'm focusing on the good here.) There's much more discussion today about authentic values and non-Nietzschean, non-Pragmatist style self-interest than was the case before. The general atmosphere up until the last few years was that people didn't think much about the harm they did to themselves or to others. Theories of rational self-interest and other positive intellectual forces are definitely having an effect. It's up to us to make sure the right side wins. The need to solve serious problems is hardly gone; likely it never will be.
But a sense of perspective, and a recognition of the positive changes of the last few decades, may help counter-balance the tendency to despair or cynicism that too often colors the enthusiasm for life of many. Personally, I'm looking forward to the next fifty years. [1]Jeffrey Perren is a novelist with a background in Physics and Philosophy. His latest novel, The Geisha Hummingbird (in progress), is the story of a ship designer whose fiancée disappears on the eve of her wedding, amidst a whirlpool of industrial espionage. References 1. http://www.theatlasphere.com/directory/profile.php?id=1488 From checker at panix.com Fri Dec 9 21:34:34 2005 From: checker at panix.com (Premise Checker) Date: Fri, 9 Dec 2005 16:34:34 -0500 (EST) Subject: [Paleopsych] On Academic Boredom by Amir Baghdadchi Message-ID: On Academic Boredom by Amir Baghdadchi Arts and Humanities in Higher Education 4(3) University of Cambridge, UK [This is a lovely article! I'd like to know more about the emergence of boredom in the 18th century. I do not deny that people were bored in a broad sense of the term, or that other animals can be bored. But, in a sense so specific that a word had to be coined for it, boredom goes only back so far. (Words are not coined at random.) It is a "socially constructed" emotion, a specific narrowing down of (or mixture of) basic emotions.] First, the summary from the "Magazine and Journal Reader" feature of the daily bulletin from the Chronicle of Higher Education, 5.11.29 http://chronicle.com/daily/2005/11/2005112901j.htm A glance at the current issue of Arts and Humanities in Higher Education: Academic bore Confronting boredom in higher education can help academics to eradicate a system that survives by being dull, writes Amir Baghdadchi, a Ph.D. candidate at the University of Cambridge who is studying argument and literary form in 18th-century literature. Such boredom is "corrosive," writes Mr. Baghdadchi. He says that it occurs when academics are unable to make use of another person's findings, and that "the boring work is one which provides us with nothing to make use of." While boredom is normally considered the result of a situation gone bad, Mr. Baghdadchi writes that, in academe, it is actually the product of things gone right. He says that uninteresting work creates a "defensive moat around a paper" because people are rarely apt to scrutinize a boring topic. Because it is free from any inquiry, lackluster work can survive criticism. "Sometimes it even seems as if we have a Mutually Assured Boredom pact," he writes. "I get up and bore you, you get up and bore me, and at the end of the day we are all left standing." He writes that while the system has worked well so far, changes are worth considering. Researchers should not be wholly concerned with simply avoiding "academic battles," he says, but rather with solving society's problems. After all, he asks, "do we want a system that promotes not the graduate students who are the most vivaciously interested, but the ones who are the most contentedly bored?" The article, "On Academic Boredom," is available for a limited time at http://ahh.sagepub.com/cgi/content/abstract/4/3/319 --Jason M. Breslow _________________________________________________________________ abstract The kind of boredom experienced in academia is unique.
Neither a purely subjective nor objective phenomenon, it is the product of the way research is organized into papers, seminars, and conferences, as well as of a deep implicit metaphor that academic argument is a form of warfare. In this respect, the concepts of boredom and rigour are closely linked, since there is a kind of rigour in the Humanities that stresses the war metaphor, and structures scholarship defensively. This is opposed to a different kind of rigour that eschews the war metaphor altogether, and considers rigorousness in the light of a work's usefulness to its audience. -------------------------- While few would deny that some kind of boredom is part of the culture of research and teaching in the Humanities, there are, however, two reasons why it is worth considering academic boredom as a species of boredom in its own right. First, it is not at all clear that the word 'boredom' refers to a coherent topic with an essential character. Since the word first gained currency in the 18th century, 'boredom' has come to be used to describe circumstances as various as the restlessness of a child on a car trip, the sense of monotony in assembly-line work, a crippling sense that the universe has no purpose, and there being nothing worth watching on television. These may not all be the same. The kind of boredom experienced in university departments, however, is of a very particular kind. It is most easily identified in terms of affect: the sense that the seminar is never going to end, that the speaker will never get to the point, that the articles one is reading are proceeding at a glacial pace, that one simply cannot get into a discussion, that one dreads getting into it in the first place. The talk, the seminar, the conference; these are all contexts particular to us, with their own rules, etiquettes, and expectations. They are a set of practices. To treat the topic of our boredom without reference to these is not only to miss the peculiar shape of academic boredom, but to ignore the shape of ourselves inside it. The second reason is practical. If we think that boredom is a problem that we ought to do something about, then it makes sense to consider how it relates to our practices and the structure of our discourse. We have no power over abstractions; but we can alter practices. That some may not see academic boredom as a real problem at all, I will readily admit. To those, I can only offer my observations from some five years as a graduate student in the United States and the United Kingdom, in a variety of institutions. In my experience, boredom is corrosive. I have seen my classmates begin their graduate work with great vivacity and curiosity, and I have seen them slowly ground down into duller, quieter, less omnivorously interested people. I have seen it in myself. I have observed this change over years and even in microcosm over the course of a single seminar. I know that graduate students are extremely reluctant to discuss it out loud, since that would be akin to admitting weakness. But it is the case nevertheless. If anyone believes there is a counterexample, then one may attempt the following thought experiment: try and imagine someone who, after several years of graduate work, became, at the end of it, more vivacious. Rather than trying to pin down what academic boredom is in the abstract, a better way will be to treat the words 'boredom', 'boring' and so on as available descriptions.
Thus, some of the questions we might instead ask are: in what kinds of academic contexts and circumstances do we describe ourselves as 'bored'? What other kinds of perceptions accompany, or precede, our judgment that we are bored? What kinds of things can be called 'boring'? Our commonsense answers are very illuminating here. One popular answer to the first question (at least among graduate students) is that one's sense of boredom in, for example, a seminar, is derived from personal inadequacy. One finds oneself unable to concentrate on an argument, and concludes that this is because one does not know enough, has not studied enough, is not up to this level of discourse. There are in fact two different propositions here. First, that boredom is a subjective state, and second, that it is one's own fault or responsibility. The first is uncontroversial enough; and indeed, most writers on any kind of boredom assume that it is some kind of mental state. The second proposition is not so obvious. The dictum 'boredom is your own fault', like its cousin, 'only boring people are ever bored', seems closer to the kind of rule one tells children to make them behave. That this belief should be so prevalent, at least in an implied form, in graduate studies, is not surprising if we take one of the aims of graduate work to be the moulding of the student into a docile, well-behaved, academic subject. However, as a representation of what actually happens when we are bored, it is not very informative. To say that one's experience of boredom is the product of internal causes is very much like saying that the pain one feels from a splinter is caused by one's nervous system. That is surely correct. But it ignores the splinter. So we move from internal to external causes. And, sure enough, just as often as we blame ourselves, we blame the speaker or the topic for being boring. And we even say of certain topics, or speakers, that they are just inherently boring. But consider an extreme case, the case of the switched papers. Imagine that at the last English conference, there was a paper on Wycherley that engaged its audience fully, while, at the last meeting of the European Society of Industrial Chemical Producers, there was a chemical engineering paper that similarly was a great success. Let us suppose that no one in the respective audiences thought the papers in the least bit boring. But then let us suppose that through some accident, the next time the speakers give their papers, they switch audiences. And we could easily imagine that the audience of engineers would not be so enraptured with an analysis of The Country Wife, and that the English students, however skilled at reading any text in the most interesting way possible, would become very fidgety very quickly. Hence, it seems that, if we are careful, we cannot say that 'boringness' is a quality that is indisputably attached to a topic or a speaker. Moreover, it seems that neither the internal account of boredom--which says it is purely a state of mind--nor the external one--which says it is purely someone's fault--can stand on its own. But consider this: If we think of what happens when we are bored as an event--as an occasion when we are supposed to do something--then the two accounts can be complementary. Thus, our complaint, 'I'm just not getting this. I can't follow it', can be rephrased as saying, 'there is nothing I can do with this material. I can't make use of it.'
Likewise, when the English student is boring the chemical engineers, we may say that the English student has given the engineers nothing that they can make use of. The English paper is so designed as to give them nothing to do. And, if we are willing to take that on board, we can consolidate our observations thus: Boredom occurs when we are unable to make use of a work, and the boring work is one which provides us with nothing to make use of. Thus far, the assumption has been that boredom is wholly bad. This might seem uncontroversial, but it is worth asking whether boredom is not some malfunction, what happens when things go wrong, but is perhaps something adaptive, which happens in order to succeed in some situation. There is a strong reason to think so. Consider that when one is bored by a paper, one does not ask questions. Boredom--whether caused by massive amounts of unfamiliar data, or impenetrable syntax--creates a defensive moat around a paper. It protects it. It guarantees that, even if it does not win everyone over, it survives the war, and that is good enough. The underlying metaphor here is that an argument is war. This idea is very brilliantly discussed by linguists George Lakoff and Mark Johnson (1980) in their book Metaphors We Live By. They point out that the metaphor 'argument is war' not only describes what we do when we argue, but structures it: that is to say, when we argue, we behave as if we were at war: we fortify our positions, we attack weak points, there is a winner and loser, and so on. And because we actually do behave as if we were at war, the metaphor seems perfectly apt. However, as Lakoff and Johnson point out, the metaphor conceals as much as it explains, for the person with whom we are arguing is actually giving their time, which is hardly a characteristic of warfare, and often when we disagree we are collaborating on the same problem. Collaboration, dialogue, the sense of a common discipline--these are elements of academic discourse left out by the war metaphor. Now if we set that beside the description of boredom we arrived at earlier, it appears that we have two models of academic discourse that sit very ill together. One can either say that argument is war, and therefore must be waged offensively and defensively, or one can see scholarship as producing objects intended for manipulation. These positions are contrary; or, at least, one cannot maximize the one without minimizing the other. Of the two models, boredom feeds on the metaphor 'argument is war'. One can succeed in the war by virtue of boredom because it is a defensive tactic. Sometimes it even seems as if we have a Mutually Assured Boredom pact. I get up and bore you, you get up and bore me, and, at the end of the day, we are all left standing. It would not be hard to find graduate students whose measure of a successful conference paper lies entirely in whether they were 'shot down' or not. In this situation, being boring is a very good policy indeed. At the outset I stated a concern with boredom as something detrimental to academic discourse. But it is not necessary to think in these terms at all. Indeed, I believe one of the reasons that academic boredom has not been an important topic is a very robust and practical objection that could be made. It is an extremely persuasive objection, and I would like to deal with it now.
It argues that while it is all well and good to decry things that are boring and to think about what counts as interesting, the real business of academic work has nothing to do with 'being interesting' at all: rather, it has to do with the construction of rigorous arguments that can withstand attack. Whether or not the audience is interested is a consideration always second to the strength of the research. If an audience finds a rigorously argued piece of scholarship boring, that is their problem, since they cannot expect that it was written for their enjoyment. As I say, a very robust and, perhaps, very familiar objection. It is built around an opposition of 'rigorous' vs. 'interesting'. However, I do not think this has to be the case. I think we can see this by interrogating the concept of 'rigorousness'. It will be helpful to take an uncontroversial example of something that must be done rigorously. Let us suppose, purely hypothetically, that a graduate student, owing to an inability of the department to offer any funding whatsoever, finds a job working in a fish restaurant, in which he has the task of cleaning out the industrial walk-in refrigerator. As I say, the example is purely hypothetical. But if I had to guess, I would think that he would have to see to it that this was done with extreme rigour: the temperatures would have to be precisely maintained, the fish would have to be separated and rotated for freshness, the floors and walls and shelves would have to be scrubbed meticulously to avoid any kind of health risk. Here then is a paradigm of rigour, since: (a) it must be done to an external standard; (b) the work is meant to be examined and approved by an inspector; and (c) everything must be such that it can be easily used and manipulated by others. Now contrast this kind of rigour--which resembles a scientific experiment in that it wants others to see what happened, wants others to follow the reasoning, and wants the scrutiny--with the rigour that is purely defensive: the rigour of endless authorities trotted in, of obscure language, of massive amounts of information deployed to scare off inquiry. The very fact that we are often willing to declare a work to be rigorous without claiming actually to understand it points to these two types of rigour being different, if not contrary. Perhaps because both kinds of rigour are commonly signified by the same word, they are not usually distinguished. But if we were to make the distinction, it seems to be fortuitous that we do have a ready-made phrase for this latter kind of bellicose, deadly rigour: we may call it rigor mortis, literally the 'rigor, the stiffness of death'. Rigor mortis shuts us up, it closes off inquiry, it digs the moat, it wants to bore us to death. Again, we might call the other kind of rigour--for lack of a better term--a 'living rigour'. Living rigour, if we wish to carry on with the martial metaphor, takes risks, seeks risks, is designed to be vulnerable. But it is better to do without the martial metaphor, as that will tempt us into thinking of argument still as confrontation, and think of living rigour as a kind of rigour that constructs things to be used, inspected, evaluated. However, there is a consequence of thinking this way, which, depending on one's predisposition, either threatens the very possibility of ever being rigorous, or provides the only way in which rigour might be a meaningful concept. Consider that we now have an idea of rigour in terms of usefulness in a broad sense.
But, because how useful a work is depends both on the shape of the discourse and on what the audience knows or wants to do, we cannot therefore determine how rigorous a work is, once and for all, just by looking at its shape or content. Rather, we are called on to think of the word 'rigorous' as operating in a way similar to the word 'shocking'. You might think you have written the most shocking piece of literary criticism ever, yet, if no one in your audience of veterinary surgeons is actually shocked, you cannot really maintain that it was shocking, absolutely. Likewise, rigour: if rigour demands that your audience can manipulate your idea, and no one cares to manipulate it, then you lose the right to boast that you have been perfectly, objectively, and in the mind of God rigorous. To refer to the 'mind of God' may seem like a rhetorical gesture, but it is in fact what one has to do by the logic of the objection. If the rigorousness of a scholarly work can exist without reference to any imaginable mortal audience (and anyone who thinks being interesting is a separate matter from being rigorous is implying this), then to whom else is the work addressed, if not to some all-hearing deity who understands every point and can never be bored? On the other hand, if one prefers the idea of a living rigour, this is not without dangers, since, along with removing the certainty of being rigorous in every situation, it removes the authority we arrogate to ourselves based on a reputation for rigorous work. (To make an observation from the point of view of a graduate student: rigour is the most frequent stick with which we are beaten. In researching a topic about which you necessarily become more informed than your supervisor, what other kind of authority can a supervisor wield?) Indeed, living rigour compels us to think of our work as not complete once the paper is polished, but only occurring the moment the paper is being received. By this account, then, a concern with being rigorous in the best way possible indeed justifies, rather than detracts from, a concern with academic boredom. Academic boredom, which occurs when one is unable to make use of a work and cannot find anything in it with which to engage, is the consequence of rigor mortis, the kind of rigour deployed for winning academic battles rather than solving problems. Boredom, because it feels like a lack of something, may seem trivial and unimportant. It is not a thing to be reckoned with because no thing appears to be there. But as I have tried to show, this is not the case. Boredom is a sign that our system is not functioning the way we think it is, that we are not always being rigorous when we think we are. Of course, there is no need to change a system that has served us very well so far. But it is worth considering whether we want a system that promotes not the graduate students who are the most vivaciously interested, but the ones who are the most contentedly bored. reference Lakoff, G. and Johnson, M. (1980) Metaphors We Live By. Chicago, IL: University of Chicago Press. biographical note amir baghdadchi is currently a PhD student in the Faculty of English at Cambridge University. He is working on the idea of argument and literary form in 18th-century literature. 
[Email: ab490 at cam.ac.uk] From checker at panix.com Fri Dec 9 21:34:40 2005 From: checker at panix.com (Premise Checker) Date: Fri, 9 Dec 2005 16:34:40 -0500 (EST) Subject: [Paleopsych] New Yorker: Baudrillard on Tour Message-ID: Baudrillard on Tour The New Yorker: The Talk of the Town http://www.newyorker.com/talk/content/articles/051128ta_talk_macfarquhar [Baudrillard is most definitely not an academic bore.] MEN OF LETTERS Issue of 2005-11-28 Posted 2005-11-21 There may never again be a year in Jean Baudrillard's life quite like 1999. Baudrillard, the French philosopher, is best known for his theory that consumer society forms a kind of code that gives individuals the illusion of choice while in fact entrapping them in a vast web of simulated reality. In 1999, the movie "The Matrix," which was based on this theory, transformed him from a cult figure into an extremely famous cult figure. But Baudrillard was ambivalent about the film--he declined an invitation to participate in the writing of its sequels--and these days he is still going about his usual French-philosopher business, scandalizing audiences with the grandiloquent sweep of his gnomic pronouncements and his post-Marxian pessimism. Earlier this month, he gave a reading at the Tilton Gallery, on East Seventy-sixth Street, in order to promote "The Conspiracy of Art," his new book. The audience was too big for the room--some people had to stand. A tall, Nico-esque blond woman in a shiny white raincoat leaned against the mantelpiece, next to a tall man with chest-length dreadlocks. A middle-aged woman with red-and-purple hair sat nearby. There was a brief opening act: Arto Lindsay, the onetime Lounge Lizard, whose broad forehead, seventies-style eyeglasses, and sturdy teeth seemed precariously supported by his reedy frame, played a thunderous cadenza on a pale-blue electric guitar. Baudrillard opened his book and began to read in a careful tone. He is a small man with large facial features. He wore a brown jacket and a blue shirt. (Some years ago, he appeared on the stage of Whiskey Pete's, near Las Vegas, wearing a gold lamé suit with mirrored lapels, and read a poem, "Motel-Suicide," which he wrote in the nineteen-eighties. But there was no trace of the lamé Baudrillard at the Tilton Gallery.) " `The illusion of desire has been lost in the ambient pornography and contemporary art has lost the desire of illusion,' " he began. " `After the orgies and the liberation of all desires, we have moved into the transsexual, the transparency of sex, with signs and images erasing all its secrets and ambiguity.' " After he read, Baudrillard expanded on his theme. "We say that Disneyland is not, of course, the sanctuary of the imagination, but Disneyland as hyperreal world masks the fact that all America is hyperreal, all America is Disneyland," he said. "And the same for art. The art scene is but a scene, or obscene"--he paused for chuckles from the audience--"mask for the reality that all the world is trans-aestheticized. We have no more to do with art as such, as an exceptional form. Now the banal reality has become aestheticized, all reality is trans-aestheticized, and that is the very problem. Art was a form, and then it became more and more no more a form but a value, an aesthetic value, and so we come from art to aesthetics--it's something very, very different. And as art becomes aesthetics it joins with reality, it joins with the banality of reality.
Because all reality becomes aesthetical, too, then it's a total confusion between art and reality, and the result of this confusion is hyperreality. But, in this sense, there is no more radical difference between art and realism. And this is the very end of art. As form." Sylvère Lotringer, Baudrillard's longtime publisher, who was there to interview him, added, "Yes, this is what I was saying when I was quoting Roland Barthes saying that in America sex is everywhere except in sex, and I was adding that art is everywhere but also in art." "Even in art," Baudrillard corrected. "Even in art, yes. The privilege of art in itself as art in itself has disappeared, so art is not what it thinks it is." Many people in the room wished to ask Baudrillard a question. A gray-haired man wearing a denim cap and a green work shirt, an acolyte of the philosopher Bernard Stiegler, wanted to know whether, even if art was no longer art, as such, it might not still function as useful therapy for the wounded narcissism of artists. A middle-aged man in the second row who had been snapping photographs of Baudrillard with a tiny camera raised his hand. "I don't know how to ask this question, because it's so multifaceted," he said. "You're Baudrillard, and you were able to fill a room. And what I want to know is: when someone dies, we read an obituary--like Derrida died last year, and is a great loss for all of us. What would you like to be said about you? In other words, who are you? I would like to know how old you are, if you're married and if you have kids, and since you've spent a great deal of time writing a great many books, some of which I could not get through, is there something you want to say that can be summed up?" "What I am, I don't know," Baudrillard said, with a Gallic twinkle in his eye. "I am the simulacrum of myself." The audience giggled. "And how old are you?" the questioner persisted. "Very young." -- Larissa MacFarquhar
From checker at panix.com Fri Dec 9 21:34:48 2005 From: checker at panix.com (Premise Checker) Date: Fri, 9 Dec 2005 16:34:48 -0500 (EST) Subject: [Paleopsych] TCS: Why People Hate Economics Message-ID: Why People Hate Economics http://www.techcentralstation.com/112105A.html [This is good: the distinction between those who reason from the consequences of a proposal and those who reason from the supposed motives of the proponents. But Mr. Mencken made one of the author's points much better: ["The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary" (_In Defence of Women_).] By Arnold Kling Published 11/21/2005 "the separateness of these two mechanisms, one for understanding the physical world and one for understanding the social world, gives rise to a duality of experience. We experience the world of material things as separate from the world of goals and desires. ...We have what the anthropologist Pascal Boyer has called a hypertrophy of social cognition. We see purpose, intention, design, even when it is not there." -- Paul Bloom, writing in The Atlantic Paul Bloom's essay "Is God an Accident?" in the latest issue of The Atlantic, suggests that humans' belief in God, Intelligent Design, and the afterlife is an artifact of brain structure. In this essay, I am going to suggest that the same artifact that explains why people are instinctively anti-Darwin explains why they are instinctively anti-economic. Bloom says that we use one brain mechanism to analyze the physical world, as when we line up a shot on the billiard table. We use another brain mechanism to interact socially, as when we try to get a date for the prom. The analytical brain uses the principles of science. It learns to make predictions of the form, "When an object is dropped, it will fall toward the earth." The social brain uses empathy. It learns to guess others' intentions and motives in order to predict their reactions and behavior. The difference between analytical and social reasoning strikes me as similar to the difference that I once drew between Type C and Type M arguments. I wrote, "Type C arguments are about the consequences of policies. Type M arguments are about the alleged motives of individuals who advocate policies." Type C arguments about policy come from the analytical brain and reflect impersonal analysis. Type M arguments come from the social brain. In my view, they inject emotion, demagoguery, and confusion into discussions of economic policy. As a shortcut, I will refer to the analytical, scientific mental process as the type C brain, and the emotional, empathic mental process as the type M brain. What I take from Bloom's essay is the suggestion that our type M brain seeks a motive and intention behind the events that take place in our lives. This type M brain leads to irrational religious beliefs and superstitions, as when we attribute emotions and intentions to inanimate objects. We need our type M brains, but in moderation. Without a type M brain, one is socially underdeveloped. In extreme cases, someone with a weak type M brain will be described as having Asperger's Syndrome or autism.
On the other hand, as Bloom suggests, there are many cases in which we over-use our type M brains. For example, social psychologists have long noted the fundamental attribution error, in which we see people's actions as derived from their motives or dispositions when in fact the actions result from context. Economics is an attempt to use a type C brain to understand market processes in impersonal terms. We do not assess one person's motives as better than another's. We assume that everyone is out for their own gain, and we try to predict what will happen when people trade on that basis. Perhaps one of the reasons that economics is taught using math is that mathematics engages the type C brain. By getting students to look at equations represented in graphs, the instructor steers them away from thinking in terms of motives. The down side of this is that when they go back to looking at the real world, many people who have taken economics courses simply revert to using their type M brains. Explaining Higher Gas Prices For example, consider the run-up in gasoline prices that occurred after Hurricane Katrina. Looking for the cause of higher gas prices, the type M brain asks, "Who?" The type C brain asks "What?" Some Senators, appealing to the type M brains among their constituents, hauled oil company executives into a hearing to ask them to explain why they raised prices so high. One might just as well imagine hauling people before a Senate hearing and holding them personally responsible for gravity or inertia. No one sets the price of gasoline. If they could, oil company executives would charge $10 a gallon or more. However, because of competition, they have to charge an amount that will allow them to sell the gasoline that they are able to produce. After Katrina, they were able to produce less gasoline, so that at $2 a gallon they would have run out. They raised their prices to the point where they could not raise them further without losing most of their business to competitors. If an oil company had decided magnanimously to sell gasoline at low prices, it would have run out of gasoline. If enough companies had done so, there would have been so little gasoline left that by October the public would have been at the mercy of those few suppliers that held any inventories. If gasoline had cost $2 a gallon in September, the shortage in October might have pushed the price up to $5 a gallon. If a monopolist were in charge of the oil industry, he would shut down some refineries in order to reduce the availability of gasoline. A monopolist would rather produce less gasoline and charge $3 per gallon than produce more gasoline but have to charge $2 a gallon to sell it all. Fortunately, the oil industry is not run by a monopolist, and we do not have to face $3 a gallon all the time. A competitive firm will not shut down its refinery capacity to keep supply off the market, because that only benefits its competitors. Hurricane Katrina temporarily did for the industry what a monopolist would do permanently. The hurricane shut down refinery capacity. As a result, oil companies earned high short-term profits. But those high profits did not reflect a sudden outbreak of greed among the oil company executives. Profits are explained by type C analysis of context, not by type M attributions of motive. (A toy numeric sketch of this supply-and-demand argument appears a few lines below.) Politics and Government Type M thinking views government as a parent. Conservatives want their government/parent to police moral behavior. Liberals want their government/parent to provide nurturance.
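Here is that toy sketch of the gas-price argument. None of it comes from Kling's article: the linear demand curve, the marginal cost, and the capacity figures are all invented for illustration, picked only so that the competitive price lands at $2 and the post-hurricane price at $3, matching the figures in the text.

# Toy model of the gas-price argument (all numbers hypothetical).
# Inverse demand: price = A - B * quantity, so buyers take less as price rises.
A, B = 5.0, 0.5   # assumed demand intercept ($/gallon) and slope
MC = 1.0          # assumed marginal cost per gallon

def clearing_price(quantity):
    # Price at which buyers demand exactly the quantity supplied;
    # competition keeps the price from falling below marginal cost.
    return max(MC, A - B * quantity)

def monopoly_quantity():
    # Output that maximizes (price - MC) * quantity under the demand curve above.
    return (A - MC) / (2 * B)

normal_capacity = 6.0   # refineries at full output (hypothetical units)
storm_capacity = 4.0    # capacity left after the hurricane (hypothetical)

print(clearing_price(normal_capacity))      # 2.0: competitive price in normal times
print(clearing_price(storm_capacity))       # 3.0: competitive price after the storm
print(monopoly_quantity())                  # 4.0: output a monopolist would choose
print(clearing_price(monopoly_quantity()))  # 3.0: the monopolist's price, storm or no storm

Under these toy assumptions, the hurricane's capacity cut pushes the competitive market to exactly the output a monopolist would pick voluntarily, which is the sense in which Katrina "temporarily did for the industry what a monopolist would do permanently."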
Type C thinking instead thinks of government as an institutional arrangement. Rather than anthropomorphize government as a parent, type C thinking leads me to prefer the Separation of Family and State. Type M thinking treats political conflicts as battles between good and evil. "Our" side is wise and sincerely motivated. The "other" side is stupid and evil. Many economists revert to type M thinking when they look at politics. See my Challenge for Brad DeLong. Type C thinking treats political conflict as an inevitable competition among various interest groups. Actors in the political sphere respond to incentives, just as they do in other spheres. Politicians try to exploit the type M brain. Politicians appeal to people's fears. Their message is, "You are in danger. Fortunately, I care about you, and I will save you." The many political crusades against Wal-Mart reflect type M thinking. For example, the state of Maryland, where I live, is considering legislation forcing Wal-Mart to provide expensive health insurance to its employees. The type M brain sees Wal-Mart management as Scrooge, and Maryland's politicians as the ghosts that are going to get the company to see the evil of its ways. However, basic type C economics says that forcing the company to provide more health insurance benefits would lead to lower wages for Wal-Mart workers. International Trade Economists view international trade as equivalent to the discovery of a more efficient production process. As Alan Blinder put it recently, "It has long been a mystery to economists why so many people view creative destruction that stems from technology as okay, while similar creative destruction that stems from international trade is something to be opposed." Hardly anyone feels guilty about using tax preparation software rather than paying an accountant to handle their tax returns. Yet many people would tell you that there is something wrong with outsourcing tax preparation to accountants in India. Neither economists nor non-economists tend to think of tax preparation software as an alien outsider trying to steal our jobs. However, many non-economists' type M brains instinctively think of Indian accountants as trying to do us harm. Economists are trained to look at international trade through the same type C eyes through which we view technological innovation, and we are constantly amazed by the general public's hostility toward it. Paul Bloom offers extensive evidence that the majority of people do not accept the type C approach to evolution, death, and other matters. If biologists have been unable to get people to change their type M minds, then perhaps economists should not feel so bad. Arnold Kling is author of Learning Economics. From HowlBloom at aol.com Sat Dec 10 06:41:12 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Sat, 10 Dec 2005 01:41:12 EST Subject: [Paleopsych] Shhhhh is this another Ur Strategy? Message-ID: <24c.351f908.30cbd288@aol.com> Hi, Karen. Good question. All I can add to this is the bark, the coo, and the growl. Animals use low noises--the growl--to make themselves look big--big to rivals and big to females who, even in frogs, go for bigness. The bigger the woofer the lower the sound, so us animals go real low to make our woofers sound huge. Low rumbles are our musical dominance gestures. Animals use mid-range noises--the bark--to say hello, how are you or to introduce themselves to others they feel are equals.
The mid-range is a music we sing to each other to connect without slipping into anger or intimacy. And animals use high-pitched, soft sounds to make themselves sound small, unthreatening, adorably appealing, and intimate. We use high-pitched soft sounds--coos--when we baby-talk to our young ones or to our lovers. Tweeters make high sounds. The smaller the tweeter, the higher the sound. Coos are musical submission and seduction gestures. Shhhhh falls into the coo category, but so do lots of other sounds. I suspect that shhh isn't cross-cultural--that it isn't replicated in Chinese, Japanese, or Mayan. But I'm not at all sure. Or should that be shhhure? Howard ps take a look at this paleopsych conversation from 1998 in which Martha Sherwood added something intriguing: Martha Sherwood writes: Subj: Re: Language as display Date: 98-02-23 13:01:14 EST From: msherw at oregon.uoregon.edu (Martha Sherwood) To: HowlBloom at aol.com Regarding your query to Gordon Burghart about geckos, it might be relevant that the vocalizations accompanying vampire bat threat displays are within the human auditory range whereas their other signals are not. Martha hb: very nifty, Martha. This would fit in with the coo, bark and growl research, since the bats are conceivably descending into what for them is a basso profundo growl to maximize their menace. Howard In a message dated 12/9/2005 1:21:22 PM Eastern Standard Time, guavaberry at earthlink.net writes: hi everyone, sorry to interrupt the present conversation . . . but i've been wondering about this . . . . What is shhhhh? and does this fall under another UR strategy a western custom or is it a world wide "instinct" we all have to use shhhhhh for shushing a baby to stop crying or to calm a crying baby or crying child. Is this another Ur Strategy? Do all human babies recognize this as the signal to be quiet? Do all cultures use this? I imagine it sounding like a snake's rattle but that doesn't mean much. I've heard the same sound calms horses and sounds similar to the word for thank you in mandarin. Do we know anything about shhhhh? Appreciate any thoughts you might have. thanks, Karen Ellis Archive 8/16/03 Re: Ur strategies and the moods of cats and dogs hb to pavel kurakin: I've had an adventure that will force me to stop for the night. One of my cats attacked me and tore several holes in my face, nearly removing one of my eyes.
_______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Advanced Technology Working Group, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net From guavaberry at earthlink.net Sat Dec 10 22:16:22 2005 From: guavaberry at earthlink.net (K.E.) Date: Sat, 10 Dec 2005 17:16:22 -0500 Subject: [Paleopsych] Shhhhh is this another Ur Strategy? In-Reply-To: <24c.351f908.30cbd288@aol.com> References: <24c.351f908.30cbd288@aol.com> Message-ID: <7.0.0.16.0.20051210164551.0328fce0@earthlink.net> hey howard, But I'm not at all sure. Or should that be shhhure? Howard :-) your info helps to frame out the bigger story, loved reading it - thanks so much and didn't Skoyles use to write to the list? http://www.hinduonnet.com/2001/10/11/stories/08110007.htm suggests music is hardwired (which i believe) & i think Steven Pinker is totally wrong i think Trehub has it goin on the other day one of my husband's co-workers said he was reading a book about calming babies called best baby on the block which gave parents an ordered 5 step to do list (& daddy said it worked) one of the steps included saying shhhhh in the baby's ear & to do it at the volume to match the loudness of the baby cry loud cry = loud shhhhh directly into the kid's ear and the book also says no worries you can't hurt the kid's eardrum. the whole thing got me thinking about the shhhhh - Ur thing cause it's gotta be full of overtones & that falls under the music brain wiring idea. http://www.annalsnyas.org/cgi/content/abstract/930/1/1 http://www.edu-cyberpg.com/Literacy/whatresearch4.asp the coo, bark and growl research does that include the chirp? if dr. provine can tickle rats and get them to laugh is laughing called a chirp? i just can't stop wondering about this.
best, k
<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<> The Educational CyberPlayGround http://www.edu-cyberpg.com/ National Children's Folksong Repository http://www.edu-cyberpg.com/NCFR/ Hot List of Schools Online and Net Happenings, K12 Newsletters, Network Newsletters http://www.edu-cyberpg.com/Community/ 7 Hot Site Awards New York Times, USA Today, MSNBC, Earthlink, USA Today Best Bets For Educators, Macworld Top Fifty <>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<> From checker at panix.com Sun Dec 11 03:05:45 2005 From: checker at panix.com (Premise Checker) Date: Sat, 10 Dec 2005 22:05:45 -0500 (EST) Subject: [Paleopsych] Glimpse Abroad: Smelling the Roses Message-ID: Smelling the Roses Glimpse Abroad, 2005 http://www.glimpsefoundation.org/downloads/Spectrum-Winter2005.pdf [These observations are not particularly profound, in and of themselves, but they do say something about life in the United States.] First, the summary from the "Magazine and Journal Reader" feature of the daily bulletin from the Chronicle of Higher Education, 5.12.6 http://chronicle.com/daily/2005/12/2005120601j.htm A glance at the winter issue of glimpse: Readjusting to the States Catching up again to the fast-paced American lifestyle is one of the hardest challenges for American students returning home from study abroad, according to surveys conducted by the travel magazine. Kerala Goodkin, the magazine's editor in chief, writes that Americans obsess over productivity and efficiency, and that such fixations have led to "on the go" eating mentalities, excessive sugar consumption, a reliance on cars, and poor health. That "hurried" lifestyle, she explains, is all the more difficult for students to adjust to after living in countries with more-relaxed outlooks. "Life in the United States seemed so demanding and fast-paced that I just wanted to say 'slow down!' when I got home," she quotes a student who studied in Spain as saying. A common observation in the survey was how devoted Americans are to their work. In the United States, nearly 68 percent of the people in the labor force work more than 40 hours a week -- a rate that is second only to Japan, notes Ms. Goodkin. Despite that work ethic, foreigners and students returning from abroad consider Americans lazy for relying so heavily on cars. The United States is second to none in the ratio of cars to people, she points out, with 765 automobiles for every 1,000 Americans. Ms. Goodkin considers whether the American lifestyle is worth the costs. Being constantly on the move is unhealthy, she says, adding that the stress of our daily routines can cause organ failure, cancer, accidents, and suicide. She notes that the United States is among the top 10 countries in the world in terms of the proportion of its population that lives in ill health. "When I returned home," she quotes another student as saying, "I wanted to encourage others to love what they have and to smile at traffic jams and long lines. They might just notice something or someone new." A copy of the article, "Smelling the Roses," is available at http://www.glimpsefoundation.org/downloads/Spectrum-Winter2005.pdf Information about the magazine is available at http://www.glimpseabroad.org/ --Jason M. Breslow -------------- FAST FOOD, FRAPPUCCINOS, EXPRESSWAYS, POWER LUNCHES, EVEN POWER NAPS ... WHY ARE AMERICANS SO SINGULARLY OBSESSED WITH GETTING THINGS DONE IN A HURRY? In a recent survey, Glimpse asked over 400 study abroad students about the central cultural differences between their home and host countries.
One theme surfaced again and again: the challenge of readjusting to the comparatively hurried pace of U.S. life upon returning home from abroad. Says Janna Stansell of California State University, who studied in Spain, "Life in the United States seemed so demanding and fast-paced that I just wanted to say 'slow down!' when I got home." Brian Dolan of University of Colorado at Boulder echoes this sentiment: "In Russia," he says, "everyone was more relaxed and didn't stick to tight schedules as in America. Many people just did things in their own time." We obsess over productivity and efficiency, but are the benefits always worth the cost? This special report examines key cultural trends in the United States--for example, "on-the-go" eating mentalities, excessive caffeine and sugar consumption, reliance on cars, and poor mental and physical health--and compares them to trends in other countries, where residents take more time to stop and smell the roses. MEALS ON WHEELS While the infamous Golden Arches continue to crop up in countries around the world, Americans still reign supreme when it comes to our copious consumption of those greasy, ready-made morsels we qualify as "food." Americans who have lived abroad frequently comment that other countries do not share our "on-the-go" eating mentality and devote much more time to leisurely, multi-course meals, shared in the company of family and friends. "When I returned to the United States I had horrible reverse culture shock. People seemed so rude and Americans' love of fast food disgusted me." Student, University of Cincinnati Studied in United Arab Emirates through American University "After coming home from Florence, Italy, I missed not having a market to walk to every day and buy fresh food for my meals. I had become used to the slow pace of Italy, so when I came back to the States, everyone and everything seemed so rushed! I knew that the States is fast-paced, but I actually felt the difference more than I thought I would." Jenna Tonet, Stonehill College Studied in Italy through API "I think the biggest difference I felt coming home was the pace of life. In London, things moved a lot slower to some extent. It was overwhelming to watch my family rush around all day. I remember the first time I went out to dinner, I was shocked by how quickly the waiter pushed us out. In Europe, you can sit there all night and no one cares." Elizabeth Conner, University of Missouri at Columbia Studied in England TOP 5 COUNTRIES WITH THE MOST MCDONALD'S: U.S. (12,804), Japan (3,598), Canada (1,154), UK (1,115), Germany (1,093) SANTA GIUSTINA, ITALY Excerpted from "The Sweet Life" by Nicole Graziano There I was, in Santa Giustina, a speck-of-a-city in the northern region of Italy known as Veneto. My supple nature welcomed this shift in setting, and I took comfort in my days with my Italian friend Elisa--waking to tea at the apartment or cappuccinos at the bar a street above. The town was grey, hushed and sealed tightly in a late December chill that hunched our backs and shrunk our necks as we shuffled around open markets and humble piazzas. My favorite part of the day, however, was lunchtime. Each and every afternoon, after Elisa and I had returned from a light trek through town or had finished up a movie, Salvatore, her boyfriend, would return to the apartment from his job at a nearby gas station for a nearly two-hour lunch break.
Elisa routinely prepared a feast for his arrival--minestrone one day, pumpkin gnocchi another, complementing the meals with bread, cheese, nuts and wine. Contrary to the half-hour and 50-minute lunch breaks to which so many Americans are accustomed, these afternoons weren't hurried affairs. Salvatore's mind actually seemed to escape the duties of his workplace--he rarely regarded the clock, relaxing for another 45 minutes or so after eating until he finally lifted his tall, lanky body up from the sofa, draped himself in a black, button-up wool coat and departed once again for work. Outside the rambunctious pant and seduction of Florence, I lived as a shadow of my young Italian counterparts--cooking spaghetti carbonara beside them, sharing their anxieties about electricity and water bills. I came to understand the sweet life, la dolce vita, as I witnessed it through the lives of its inhabitants.

YOUR 3 P.M. WAKE-UP CALL

All the rushing around we do can sure wear us out. Should we combat our fatigue with a mid-afternoon nap? A leisurely stroll in the park? Forget it. According to Dunkin' Donuts, what we need is a "3 p.m. wake-up call"--a.k.a. a gargantuan iced coffee that promises to snap us out of our exhaustion with its sugary, caffeinated goodness. Perhaps unsurprisingly, the United States lags behind many Western European countries when it comes to coffee consumption, but if it's not a cup o' Joe we're chugging, it's probably coffee's carbonated cousin: soda. In the realm of soda intake, the United States has everyone else beat, hands down.

"I had traveled to Latin America before, so I was already familiar with the slower pace of life there, but it still affected me in Mexico. As Americans, we always feel like we have to be doing something productive, and when we aren't, we get down on ourselves. Yet in Mexico it was okay to sit for an hour after you had already finished eating and talk, or to nap in the middle of the day after lunch, or to go out with friends for a beer on a Tuesday." --Christina Shaw, American University; studied in Mexico through SIT

"French culture was big on observing things: window shopping, browsing stores, walking around town, eating at cafes on the streets and watching the passersby. The leisurely atmosphere made for a wonderful experience." --Molly Mullican, Rice University; studied in France through API

"I found the Spanish schedule allowed for a not-so-busy workday and accommodated the relaxation and social needs that every good Spaniard appreciates." --Phil Ramirez, Texas State University; studied in Spain through API

Wuhu village, China: A man takes a nap in a lotus garden and wilderness park on a quiet summer afternoon. PHOTO by Kate Peterson.

ALL WORK AND NO PLAY

In a country where we still abide by the "pull-yourself-up-by-your-bootstraps" mentality, where we like to believe that our economic success is defined by how hard we work, we are fairly gung-ho when it comes to putting in the hours. Whereas France recently mandated a 35-hour work week, in the United States, almost 68 percent of our workforce puts in more than 40 hours per week, trailing only Japan (76 percent).

"Before I left for France, I was enrolled in 18 credit hours a semester and working two jobs 15 to 25 hours a week. I re-evaluated the whole way I was living when I got home and now take more time to slow down and enjoy myself rather than rushing around to stay busy all the time."
--Anna Romanosky, University of South Carolina; studied in France through API

"In Macedonia, much more time is spent drinking coffee and talking than actually doing work." --Andrew C., University of Pittsburgh; studied in Macedonia through IAESTE

"Upon returning home, I found that many of my views about the United States had changed. I noticed so many more 'negative' things about our culture, like excessiveness, wastefulness and laziness. Italians essentially 'work to live,' whereas here in the United States we 'live to work.' There are still so many things I compare between our culture and Italian culture." --Laura Basil, Ohio University; studied in Italy through API

BEHIND THE WHEEL

Ironically, while many other countries acknowledge the United States' strong work ethic, they simultaneously view us as "lazy." Maybe that's because for all the hurrying we do, we sure spend a lot of time sitting on our butts. Yes, in the United States, the car reigns supreme; other ways of getting from here to there--for example, walking, biking or taking public transportation--are viewed as grossly inefficient. We want to get there fast; why wait around at a bus stop or rely on the meager power of our own two legs? Unsurprisingly, the United States ranks the highest when it comes to the ratio of motor vehicles to people.

"American culture is very much dependent on use of cars. In Sevilla I walked everywhere. I miss taking a 30-minute stroll to school." --Student, Agnes Scott College; studied in Spain through API

"The United States does not slow down, and it was hard to come back to rushing cars, people everywhere, and the overall feeling that there was always something going on. I hated not having a public transportation system and wished we could have a train system in the United States like Europe has." --Holly Murdoch, Texas A&M; studied in Italy

"I had just spent five months without a car and without the rushing of American life. Wherever I needed to go in Spain, I could walk or take public transportation. When I arrived back in Detroit, I became a bit disgusted at how everything was so impending, everything was an 'emergency.' " --Lauren Zakalik, University of Michigan; studied in Spain through API

BANGKOK, THAILAND Excerpted from "Yoda and the Skytrain" by Molly Angstman

My route to work funnels me, along with crowds of commuters, onto the "sky train"--Bangkok's new elevated transportation system. Every day, without fail, the first 50 people off the escalator in the station see a new train pull up and start running, all the while smiling ear-to-ear like they are doing something really ridiculous. The little uniformed girls with their pigtails and giant backpacks, some barely taller than my waist, treat the 20-meter run as a hilarious adventure, holding hands and giggling, arriving at the train with flushed faces. As the doors quickly close, swallowing their giggles, they leave me to wait for the next train. I think they smile because even the youngest commuters know how inherently silly it is for a Thai person to run for a train. Although Bangkok is now home to big-money transnationals, the pace of life is still traditionally slower than in other cities at a similar level of development. Being in a hurry is almost unseemly, but business is still profitable and the trains run like clockwork. Patience is the lauded quality, not promptness. As Buddhists, they will get another go at it anyway. Why rush?
If I'm not at work every morning with a comfortable ten minutes to spare, I feel I have failed in my responsibilities as an efficient intern. This is why the second grader with the Winnie the Pooh backpack will always be wiser than me. She hurries because it is funny and exciting, not because she thinks being early makes life better. Despite their glittering efficiency, these trains might never be fast enough for me. So I am taking cultural orientation classes from these mobile philosophers. Lessons learned so far: 1) Spend rush hour with friends, 2) Enjoy the ride, and 3) Never hurry in paradise.

WOLLONGONG, AUSTRALIA Excerpted from "Taking Your Time" by Heather Magalski

Soon after arriving in Wollongong, Australia, my roommate and I decided to brave the train system to see the Gay and Lesbian Mardi Gras in nearby Sydney. As American city-dwellers, used to following strict schedules and being constantly on-the-go, we made our way to the station about a half-hour before our train was due to arrive. We were in for a surprise. When our train rumbled into the station 30 minutes after its scheduled arrival, we learned that we would have to transfer to a connection, which also ended up being late. When our train came to a sudden halt midway to Sydney, I went into a frenzy. I wanted to know what went wrong, how long it would take to fix and how late we would be. I began to think of my own life in the United States and how I try to cram so many things in at once. Before I had come to Australia, I thought that having a successful life meant being involved in several activities, as I had been pressured to do in order to be accepted at a college. Yet in the time it ended up taking my roommate and me to get to Sydney, I became aware that the true joy in life was taking my time, something the Australian culture has successfully mastered. Having acquired this skill, I applied it to the rest of my stay in Australia. I no longer became stressed when it took hours to be served at a restaurant, miles to walk to town for groceries, or several hours to travel to Sydney again. No longer concerned with doing a specific "something," I went on long walks by the beach and sat and listened to many an Australian tell me his or her life story. Instead of always actively participating in something, I now understand that just sitting back and taking in my surroundings has its time and place.

ILL EFFECTS

Maybe we Americans get a lot done, but is it worth the cost? Our obsession with convenience and efficiency leads to many unhealthy practices, including poor nutritional habits and sedentary lifestyles. Being in constant states of stimulation and frenzy isn't so great for us either--in fact, stress is linked to a number of the leading causes of death in the United States, including heart, liver and lung disease; cancer; accidents; and suicide. Despite our high standards of living and advanced (though not universal) system of medical care, the United States ranks 48th in a comparative study of countries' life expectancies: 77.14 years. Furthermore, it ranks within the top ten for the proportion of its male and female populations who live in ill health.

"I had a hard time getting used to the speed of life again in the United States. I liked feeling relaxed and laid back and not worried about getting places on time. I also missed the sense of community I felt in Ecuador. Back in the United States, I noticed how separate and selfish people can be at times."
--Maret Kane-Panchana, University of Washington; studied in Ecuador through Fundación CIMAS

"When I returned home, I had a hard time feeling compassion for those who just rush through their days, who go through the motions without understanding that their connection to the work/people/food/sex/nightlife they experience every day is worth more than a spot on a day-planner. I wanted to encourage others to love what they have and to smile at traffic jams and long lines. They might just notice something or someone new." --Jordan Santoni, Appalachian State University; studied in Spain through API

SAN JOSE, COSTA RICA Excerpted from "GPS, Costa Rica Style" by Patricia Jempty

When I moved to Costa Rica with my family (chastened by the fact that after endless years of study, I had perfected French, not Spanish), I was in for quite a shock. Costa Rica is fairly well developed as far as "Third World" countries go: you can dine at any number of North American chain restaurants and stay only in U.S. chain hotels. (Why you'd want to do this is another question!) The veneer of familiarity may fool you into thinking that, except for language, Costa Rica is just like home. I can assure you, it's not. Let's talk physical. Costa Rica has a rainy season and a dry season. When it rains, the landscape is obliterated and the roads become rivers of mud. When it's dry, the dust permeates your pores and the wind plays catch with any object not nailed down. We're talking extremes here, and they happen every day. But physical aspects aside, it's the country's culture that can truly blind-side you, if you're paying attention. Patience is not just a lofty virtue, it's a necessity if you live in Costa Rica. The locals have it in their blood, or at least in their upbringing. Visitors must learn to adapt. The power grid fails. The water stops running. You can't travel quickly anywhere because most of the roads are notoriously potholed and must be shared with four-legged creatures of all sizes. You'll get there when you get there, which can be a hard lesson for a gringo. Over the years, my way of thinking has slowly adapted and shaped itself to the local manner of doing things. I've grown calmer, less demanding. I've learned to take life as it is offered to me; and in the process, my frustrations with the sometimes maddening aspects of Costa Rica have taken wing like so many butterflies.

From checker at panix.com Sun Dec 11 03:05:50 2005 From: checker at panix.com (Premise Checker) Date: Sat, 10 Dec 2005 22:05:50 -0500 (EST) Subject: [Paleopsych] NYT Mag: (Freakonomics) The Economy of Desire Message-ID:

The Economy of Desire http://select.nytimes.com/preview/2005/12/11/magazine/1124989462701.html

[A good primer.]

By STEPHEN J. DUBNER and STEVEN D. LEVITT

Analyzing a Sex Survey

What is a price? Unless you're an economist, you probably think of a price as simply the amount you pay for a given thing - the number of dollars you surrender for, let's say, Sunday brunch at your favorite neighborhood restaurant. But to an economist, price is a much broader concept. The 20 minutes you spend waiting for a table is part of the price. So, too, is any nutritional downside of the meal itself: a cheeseburger, as the economist Kevin Murphy has calculated, costs $2.50 more than a salad in long-term health implications. There are moral and social costs to tally as well - for instance, the look of scorn delivered by your vegan dining partner as you order the burger.
While the restaurant's menu may list the price of the cheeseburger at $7.95, that is clearly just the beginning. The most fundamental rule of economics is that a rise in price leads to less quantity demanded. This holds true for a restaurant meal, a real-estate deal, a college education or just about anything else you can think of. When the price of an item rises, you buy less of it (which is not to say, of course, that you want less of it). But what about sex? Sex, that most irrational of human pursuits, couldn't possibly respond to rational price theory, could it? Outside of a few obvious situations, we generally don't think about sex in terms of prices. Prostitution is one such situation; courtship is another: certain men seem to consider an expensive dinner a prudent investment in pursuit of a sexual dividend. But how might price changes affect sexual behavior? And might those changes have something to tell us about the nature of sex itself? Here is a stark example: A man who is sent to prison finds that the price of sex with a woman has spiked - talk about a supply shortage - and he becomes much more likely to start having sex with men. The reported prevalence of oral sex among affluent American teenagers would also seem to illustrate price theory: because of the possibility of disease or pregnancy, intercourse is expensive - and it has come to be seen by some teenagers as an unwanted and costly pledge of commitment. In this light, oral sex may be viewed as a cheaper alternative. In recent decades, we have witnessed the most exorbitant new price associated with sex: the H.I.V. virus. Because AIDS is potentially deadly and because it can be spread relatively easily by sex between two men, the onset of AIDS in the early 1980's caused a significant increase in the price of gay sex. Andrew Francis, a graduate student in economics at the University of Chicago, has tried to affix a dollar figure to this change. Setting the value of an American life at $2 million, Francis calculated that in terms of AIDS-related mortality, it cost $1,923.75 in 1992 (the peak of the AIDS crisis) for a man to have unprotected sex once with a random gay American man versus less than $1 with a random woman. While the use of a condom greatly reduces the risk of contracting AIDS, a condom is, of course, yet another cost associated with sex. In a study of Mexican prostitution, the Berkeley economist Paul Gertler and two co-authors showed that when a client requested sex without a condom, a prostitute was typically paid a 24 percent premium over her standard fee. Francis, in a draft paper titled "The Economics of Sexuality," tries to go well beyond dollar figures. He puts forth an empirical argument that may fundamentally challenge how people think about sex. As with any number of behaviors that social scientists try to measure, sex is a tricky subject. But Francis discovered a data set that offered some intriguing possibilities. The National Health and Social Life Survey, sponsored by the U.S. government and a handful of foundations, asked almost 3,500 people a rather astonishing variety of questions about sex: the different sexual acts received and performed and with whom and when; questions about sexual preference and identity; whether they knew anyone with AIDS. As with any self-reported data, there was the chance that the survey wasn't reliable, but it had been designed to ensure anonymity and generate honest replies. The survey was conducted in 1992, when the disease was much less treatable than it is today. 
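[To see what Francis's dollar figures above amount to, here is a back-of-envelope sketch in Python -- my own illustration, not Francis's actual model. Dividing the per-act cost by the assumed $2 million value of a life recovers the per-act probability of a fatal infection that the calculation presupposes.]

    # Unpacking Francis's 1992 numbers (an illustrative sketch only).
    # Assumptions taken from the article: a statistical life is valued at
    # $2,000,000; one act of unprotected sex with a random gay American man
    # is priced at $1,923.75 in AIDS-related mortality; the same act with a
    # random woman at "less than $1."
    VALUE_OF_LIFE = 2_000_000   # dollars
    COST_MAN = 1_923.75         # dollars per act
    COST_WOMAN = 1.00           # dollars per act (upper bound)

    risk_man = COST_MAN / VALUE_OF_LIFE       # implied fatal risk per act
    risk_woman = COST_WOMAN / VALUE_OF_LIFE   # implied upper bound

    print(f"man:   {risk_man:.6f} per act, about 1 in {1 / risk_man:,.0f}")
    print(f"woman: at most {risk_woman:.7f} per act, 1 in {1 / risk_woman:,.0f} or rarer")

[On those assumptions, the near-$2,000 figure corresponds to roughly a 1-in-1,000 implied chance of a fatal infection per act.]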
Francis first looked to see if there was a positive correlation between having a friend with AIDS and expressing a preference for homosexual sex. As he expected, there was. "After all, people pick their friends," he says, "and homosexuals are more likely to have other homosexuals as friends." But you don't get to pick your family. So Francis next looked for a correlation between having a relative with AIDS and expressing a homosexual preference. This time, for men, the correlation was negative. This didn't seem to make sense. Many scientists believe that a person's sexual orientation is determined before birth, a function of genetic fate. If anything, people in the same family should be more likely to share the same orientation. "Then I realized, Oh, my God, they were scared of AIDS," Francis says. Francis zeroed in on this subset of about 150 survey respondents who had a relative with AIDS. Because the survey compiled these respondents' sexual histories as well as their current answers about sex, it allowed Francis to measure, albeit crudely, how their lives may have changed as a result of having seen up close the costly horrors of AIDS. Here's what he found: Not a single man in the survey who had a relative with AIDS said he had had sex with a man in the previous five years; not a single man in that group declared himself to be attracted to men or to consider himself homosexual. Women in that group also shunned sex with men. For them, rates of recent sex with women and of declaring homosexual identity and attraction were more than twice as high as among women who did not have a relative with AIDS. Because the sample size was so small - simple chance suggests that no more than a handful of men in a group that size would be attracted to men - it is hard to reach definitive conclusions from the survey data. (Obviously, not every single man changes his sexual behavior or identity when a relative contracts AIDS.) But taken as a whole, the numbers in Francis's study suggest that there may be a causal effect here - that having a relative with AIDS may change not just sexual behavior but also self-reported identity and desire. In other words, sexual preference, while perhaps largely predetermined, may also be subject to the forces more typically associated with economics than biology. If this turns out to be true, it would change the way that everyone - scientists, politicians, theologians - thinks about sexuality. But it probably won't much change the way economists think. To them, it has always been clear: whether we like it or not, everything has its price. Stephen J. Dubner and Steven D. Levitt are the authors of "Freakonomics: A Rogue Economist Explores the Hidden Side of Everything." More information on the academic research behind this column is at [3]www.freakonomics.com.

From checker at panix.com Sun Dec 11 03:06:02 2005 From: checker at panix.com (Premise Checker) Date: Sat, 10 Dec 2005 22:06:02 -0500 (EST) Subject: [Paleopsych] NYT Mag: Laptop That Will Save the World, The Message-ID:

Laptop That Will Save the World, The http://select.nytimes.com/preview/2005/12/11/magazine/1124989448443.html

[How far does anyone predict that the educational achievement gap will be closed internationally?]

By MICHAEL CROWLEY

Here in America, high-speed wireless Internet has become a commonplace home amenity, and teenagers with Sidekicks can browse the Web on a beach. For many people in developing nations, however, the mere thought of owning a computer remains pure fantasy. But maybe not for long.
This year, Nicholas Negroponte, chairman of the Massachusetts Institute of Technology's Media Lab, unveiled a prototype of a $100 laptop. With millions of dollars in financing from the likes of [3]Rupert Murdoch's News Corporation and Google, Negroponte and his colleagues have designed an extremely durable, compact, no-frills laptop, which they'd like to see in the hands of millions of children worldwide by 2008. So how can any worthwhile computer cost less than a pair of good headphones? Through a series of cost-cutting tricks. The laptops will run on free "open source" software, use cheaper "flash" memory instead of a hard disk and most likely employ new LCD technology to drop the monitor's cost to just $35. Each laptop will also come with a hand crank, making it usable even in electricity-free rural areas. Of course, the real computing mother lode is the Internet, to which few developing-world users have access. But the M.I.T. laptops will offer wireless peer-to-peer connections that create a local network. As long as there's an Internet signal somewhere in the network area - and making sure that's the case, even in rural areas, poses a mighty challenge - everyone can get online and use a built-in Web browser. Theoretically, even children in a small African village could have "access to more or less all libraries of the world," Negroponte says. (That's probably not very useful to children who can't read or understand foreign languages.) His team is already in talks with several foreign governments, including those of Egypt, Brazil and Thailand, about bulk orders. Gov. Mitt Romney of Massachusetts has also proposed a bill to buy 500,000 of the computers for his state's children.

References 3. http://topics.nytimes.com/top/reference/timestopics/people/m/rupert_murdoch/index.html?inline=nyt-per
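[The peer-to-peer scheme described above is, in effect, a mesh network: a laptop is online whenever some chain of laptop-to-laptop radio links reaches a machine that has a real Internet signal. A minimal sketch of that reachability idea in Python -- the village layout and names here are invented for illustration, and this is not the actual OLPC networking code:]

    from collections import deque

    # Hypothetical village mesh: each laptop lists the laptops in radio range.
    links = {
        "school":   ["laptop_a", "laptop_b"],
        "laptop_a": ["school", "laptop_c"],
        "laptop_b": ["school"],
        "laptop_c": ["laptop_a", "laptop_d"],
        "laptop_d": ["laptop_c"],
        "laptop_e": [],            # out of range of the whole mesh
    }
    gateways = {"school"}          # nodes with a real Internet connection

    def is_online(node):
        """Breadth-first search: online if some relay chain reaches a gateway."""
        seen, queue = {node}, deque([node])
        while queue:
            current = queue.popleft()
            if current in gateways:
                return True
            for neighbor in links.get(current, []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return False

    for laptop in ("laptop_d", "laptop_e"):
        print(laptop, "online" if is_online(laptop) else "offline")
    # laptop_d relays through laptop_c and laptop_a to the school, so it is
    # online; laptop_e has no link into the mesh and stays offline.

[The practical catch the article flags is the gateway itself: the mesh can only relay an Internet signal that already exists somewhere in range.]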
From checker at panix.com Sun Dec 11 03:15:57 2005 From: checker at panix.com (Premise Checker) Date: Sat, 10 Dec 2005 22:15:57 -0500 (EST) Subject: [Paleopsych] Jerry Goodenough: Critical Thinking about Conspiracy Theories Message-ID:

Jerry Goodenough: Critical Thinking about Conspiracy Theories http://www.uea.ac.uk/~j097/CONSP01.htm

[This is a very good analysis, esp. when it comes to noting that many, many conspiracies posit too many conspirators. As far as the specific analysis of the Kennedy assassination goes, the author makes a very good point about the Mafia being incompetent. I'll send along in a moment excerpts from a new book, "Ultimate Sacrifice," that makes a new case that the Mafia did in fact orchestrate the assassination. According to the book, the Mafia got wind of a CIA plot to murder Castro and threatened to reveal it, thereby causing an international crisis. The Warren Commission accordingly covered things up, a cover-up which continues.

[Still, the charge of incompetence remains. I reinsert my own theory that the assassination was an assisted suicide. JFK knew he had not long to live but did not want to go down in history like Millard Fillmore, whose only achievement was to not install a bathtub in the White House.
Just being assassinated would not be enough, so he got the conspirators to leave enough bogus and inconsistent evidence that researchers would never stop spinning theories, all of them imperfect for failure to reconcile the evidence.

[The Enlightenment died in six seconds in Dealey Plaza.]

Jerry Goodenough is Professor of Philosophy at the University of East Anglia, Norwich, UK

1. Introduction

Conspiracy theories play a major part in popular thinking about the way the world, especially the political world, operates. And yet they have received curiously little attention from philosophers and others with a professional interest in reasoning.[1] Though this situation is now starting to change, it is the purpose of this paper to approach this topic from the viewpoint of critical thinking, to ask if there are particular absences or deformities of critical thinking skills which are symptomatic of conspiracy theorising, and whether better teaching of reasoning may guard against them. That conspiracy thinking is widespread can be seen from any cursory examination of a bookshop or magazine stand. There are not only large amounts of blatant conspiracy work, often dealing with American political assassinations and other events or with the alleged presence of extraterrestrial spacecraft, but also large amounts of writing where a certain degree of conspiracy thinking is more or less implicit. Thus many `alternative' works of medicine, history, archaeology, technology, etc. often depend upon claims, explicit or otherwise, that an establishment or orthodoxy conspires to suppress alternative views. Orthodox medicine in cahoots with the multinational drug companies conspires to suppress the claims of homeopathy, orthodox archaeologists through malice or blindness conspire to suppress the truth about the construction of the Pyramids, and so on. It certainly seems to the jaundiced observer that there is more of this stuff about than ever before. However, conspiracy theorising is now coming to the attention of philosophers. That it has taken this long may be because, as Brian Keeley says in a recent paper, `most academics simply find the conspiracy theories of popular culture to be silly and without merit.' (1999: 109n) But I agree with Keeley's further remark that `it is incumbent upon philosophers to provide analysis of the errors involved with common delusions, if that is indeed what they are.' If a kind of academic snobbishness underlies our previous refusal to get involved here, there may be another reason. Conspiracy theorising, in political philosophy at least, has been identified with irrationality of the worst sort--here the locus classicus may be some dismissive remarks made by Karl Popper in The Open Society and its Enemies (Popper 1996, Vol.2: 94-9). Pigden (1993) shows convincingly that Popper's remarks cannot be taken to support a rational presumption against conspiracy theories in history and politics. But certainly such a presumption exists, particularly amongst political commentators. It tends to manifest itself in a noisy preference for what is termed the `cock-up' theory of history--an unfortunate term that tends to assume that history is composed entirely of errors, accidents and unforeseen consequences. If such a dismal state of affairs were indeed to be the case, then there would seem to be no point in anybody trying to do anything. The cock-up theory, then, is agreeable to all forms of quietism.
But we have no reason to believe that there is such a coherent theory, and even less reason to believe that every event must fall neatly into one or other category here; indeed, this insistence on black-and-white reasoning is, as we shall see, one of the features of conspiracy theorising itself! And what makes the self-satisfied `cock-up' stance even less acceptable is that it ignores the fact that conspiracies are a very real part of our world. No serious historian denies that a somewhat amateurish conspiracy lay behind the assassination of Abraham Lincoln, or that a more professional but sadly less successful conspiracy attempted to assassinate Adolf Hitler in the summer of 1944. Yet such is the presumption behind the cock-up stance that the existence or frequency of genuine conspiracies is often significantly downplayed. (How many people, taking at face value the cock-up theorists' claim that conspiracies are a real rarity in the modern history of democracies, do not know that a mere 13 years before President Kennedy's assassination a serious terrorist conspiracy to murder Harry S. Truman led to a fatal gunfight on the streets of Washington?[2] The cock-up presumption seems to generate a kind of amnesia here.) We require, then, some view of events that allows for the accidental and the planned, the deliberate and the contingent: history as a tapestry of conspiracies and cock-ups and much intentional action that is neither. Pigden (op. cit.) satisfactorily demonstrates the unlikelihood of there being any adequate a priori exclusion principle here, in the face of the reality of at least some real conspiracies. Keeley's paper attempts a more rigorous definition of the phenomenon, hoping to separate what he terms Unwarranted Conspiracy Theories (UCTs) from rational or warranted conspiratorial explanations: It is thought that this class of explanation [UCTs] can be distinguished analytically from those theories which deserve our assent. The idea is that we can do with conspiracy theories what David Hume (1748) did with miracles: show that there is a class of explanations to which we should not assent, by definition. (Keeley: 111) and it is part of his conclusion that `this task is not as simple as we might have heretofore imagined.' (ibid.) Keeley concludes that `much of the intuitive "problem" with conspiracy theories is a problem with the theorists themselves, and not a feature of the theories they produce' (Ibid: 126) and it is this point I want to take up in this paper. What sort of thinking goes on in arriving at UCTs and what sort of things go wrong? If we say that conspiracy theorists are irrational, do we mean only that they are illogical in their reasoning? Or are there particular critical thinking skills missing or being misused?

2. Definitions

Keeley's use of the term Unwarranted Conspiracy Theory should not mislead us into thinking that all conspiracy theories fall into one or other category here. Warrant is a matter of degree, and so is conspiracy. There are cases where a conspiratorial explanation is plainly rational; take, for instance, the aforementioned July Bomb Plot to kill Hitler, where there is an abundance of historical evidence about the conspirators and their aims. There are cases where such an explanation is clearly irrational: I shall argue later in the paper that this is most probably the case for the assassination of President Kennedy. And there are cases where some conspiratorial explanation may be warranted but it is hard to know how far the warrant should extend.
Take, for instance, the murder of the Archduke Franz Ferdinand in Sarajevo in 1914. There was plainly a conspiracy to bring this about: some minutes before Gavrilo Princip shot the archduke, a co-conspirator was arrested after throwing a bomb (which failed to explode) at the archduke's car. Princip and his fellow students were Serbian nationalists, acting together to demonstrate against the presence of Habsburg influence in the Balkans. But there remains the possibility that they had been infiltrated and manipulated by Serbian intelligence elements seeking to provoke a crisis against Austria-Hungary. And there are more extreme claims that the ultimate manipulators here were agents of a world-wide conspiracy, of international Jewry or freemasonry seeking to bring about war. We are fully warranted in adopting the first conspiratorial explanation, but perhaps only partially warranted in thinking there is anything in the second claim[3], while the extreme claims seem to me to be as unwarranted as anything could be. What we require, then, is some definition which will mark off the kind of features which ought to lead us to suspect the warrant of any particular conspiratorial explanation. Keeley lays out a series of these, which I shall list and comment upon. But first he offers his definition of conspiracy theories in general: A conspiracy theory is a proposed explanation of some historical event (or events) in terms of the significant causal agency of a relatively small group of persons--the conspirators--acting in secret... a conspiracy theory deserves the appellation "theory" because it proffers an explanation of the event in question. It proposes reasons why the event occurred... [it] need not propose that the conspirators are all powerful, only that they have played some pivotal role in bringing about the event... indeed, it is because the conspirators are not omnipotent that they must act in secret, for if they acted in public, others would move to obstruct them... [and] the group of conspirators must be small, although the upper bounds are necessarily vague. (116) Keeley's definition here differs significantly from the kind of conspiracy at which Popper was aiming in The Open Society, crude Marxist explanations of events in terms of capitalist manipulation. For one can assume that in capitalist societies capitalists are very nearly all-powerful and not generally hindered by the necessity for secrecy. A greater problem for Keeley's definition, though, is that it seems to include much of the work of central government. Indeed, it seems to define exactly the operations of cabinet government--more so in countries like Britain with no great tradition of governmental openness than in many other democracies. What is clearly lacking here is some additional feature, that the conspirators be acting against the law or against the public interest, or both. This doesn't entirely free government from accusations of conspiracy--does a secret cabinet decision to upgrade a country's nuclear armaments which appears prima facie within the bounds of the law of that country but may breach international laws and agreements count? Is it lawful? In the public interest? A further difficulty with some kind of illegality constraint is that it might tend to rule out what we might otherwise clearly recognise as conspiracy theories.
Take, for instance, the widely held belief amongst ufologists that the US government (and others) has acted to conceal the existence on earth of extra-terrestrial creatures, crashed flying saucers at Roswell, and so on. It doesn't seem obvious that governments would be acting illegally in this case--national security legislation is often open to very wide interpretation--and it could be argued that they are acting in the public interest, to avoid panic and so on. (Unless, of course, as some ufologists seem to believe, the government is conspiring with the aliens in order to organise the slavery of the human race!) So we have here what would appear to be a conspiracy theory, and one which has some of the features of Keeley's UCTs, but which is excluded by the illegality constraint. Perhaps the best we can do here is to assert that conspiracy theories are necessarily somewhat vague in this regard; I'll return to this point later. If this gives us a rough idea of what counts as a conspiracy theory, we can then build upon it and Keeley goes on to list five features which he regards as characteristic of Unwarranted Conspiracy Theories: (1) `A UCT is an explanation that runs counter to some received, official, or "obvious" account.' (116-7) This is nothing like a sufficient condition, for the history of even democratic governments is full of post facto surprises that cause us to revise previous official explanations. For instance, for many years the official explanation for Britain's military success in the Second World War was made in terms of superior generalship, better troops, occasional good luck, and so on. The revelation in the 1970s of the successful Enigma programme to break German service codes necessitated wholesale revision of military histories of this period. This was an entirely beneficial outcome, but others were more dubious. The growth of nuclear power in Britain in the 1950s was officially explained in terms of the benefit of cheaper and less polluting sources of electricity. It was only much later that it became clear that these claims were exaggerated and that the true motivation for the construction of these reactors was to provide fissile material for Britain's independent nuclear weapons. Whether such behaviour was either legal or in the public interest is an interesting thought. (1A) `Central to any UCT is an official story that the conspiracy theory must undermine and cast doubt upon. Furthermore, the presence of a "cover story" is often seen as the most damning piece of evidence for any given conspiracy.' This is an interesting epistemological point to which I shall return. (2) `The true intentions behind the conspiracy are invariably nefarious.' I agree with this as a general feature, particularly of non-governmental conspiracies, though as pointed out above it is possible for governmental conspiracies to be motivated or justified in terms of preventing public alarm, which may be seen as an essentially beneficial aim. (3) `UCTs typically seek to tie together seemingly unrelated events.' This is certainly true of the more extreme conspiracy theory, one which seeks a grand unified explanation of everything. We have here a progression from the individual CT, seeking to explain one event, to the more general.
Carl Oglesby (1976), for instance, seeks to reinterpret many of the key events in post-war American history in terms of a more or less secret war between opposing factions within American capital, an explanation which sees Watergate and the removal of Richard Nixon from office as one side's revenge for the assassination of John Kennedy. At the extreme we have those theories which seek to explain all the key events of western history in terms of a single secret motivating force, something like international freemasonry or the great Jewish conspiracy.[4] It may be taken as a useful rule of thumb here that the greater the explanatory range of the CT, the more likely it is to be untrue. (A point to which Popper himself would be sympathetic!) Finally, one might want to query here Keeley's point about seemingly unrelated events. Many CTs seem to have their origin in a desire to relate events that one might feel ought to go together. Thus many Americans, on hearing of the assassination of Robert Kennedy (itself coming very shortly after that of Martin Luther King) thought these events obviously related in some way, and sought to generate theories linking them in terms of some malevolent force bent on eliminating apparently liberal influences in American politics. They seem prima facie more likely to be related than, say, the deaths of the Kennedy brothers and those of John Lennon or Elvis Presley: any CT linking these does indeed fulfil Keeley's (3). (4) `...the truths behind events explained by conspiracy theories are typically well-guarded secrets, even if the ultimate perpetrators are sometimes well-known public figures.' This is certainly the original belief of proponents of UCTs but it does lead to a somewhat paradoxical situation whereby the alleged secret can become something of an orthodoxy. Thus opinion polls seem to indicate that something in excess of 80% of Americans believe that a conspiracy led to the death of President Kennedy, though it seems wildly unlikely that they all believe in the same conspiracy. It becomes increasingly hard to believe in a well-guarded secret that has been so thoroughly aired in 35 years of books, magazine articles and even Hollywood movies. Pretty much the same percentage of Americans seem to believe in the presence on earth of extra-terrestrials, though whether this tells us more about Americans or about opinion polls is hard to say. But these facts, if facts they be, would tend to undercut the `benevolent government' UCTs. For there is really no point in `them' keeping the truth from us to avoid panic if most of us already believe this `truth'. The revelation of cast-iron evidence of a conspiracy to kill Kennedy or of the reality of alien visits to Earth would be unlikely to generate more than a ripple of public interest, these events having been so thoroughly rehearsed. (5) `The chief tool of the conspiracy theorist is what I shall call errant data', by which Keeley means data which is unaccounted for by official explanations, or data which if true would tend to contradict official explanations. These are the marks of the UCT. As Keeley goes on to say (118), `there is no criterion or set of criteria that provide a priori grounds for distinguishing warranted conspiracy theories from UCTs.' One might perhaps like to insist here that UCTs ought to be false, and this is why we are not warranted in believing them, but it is in the nature of many CTs that they cannot be falsified. The best we may do is show why the warrant for believing them is so poor.
And one way of approaching this is by way of examining where the thinking that leads to UCTs goes awry.

3. Where CT thinking goes wrong

It is my belief that one reason why we should not accept UCTs is that they are irrational. But by this I do not necessarily mean that they are illogical in the sense that they commit logical fallacies or use invalid argument forms--though this does indeed sometimes happen--but rather that they misuse or fail to use a range of critical thinking skills and principles of reasoning. In this section I want to provide a list of what I regard as the key weaknesses of CT thinking, and then in the next section I will examine a case study of (what I regard to be) a UCT and show how these weaknesses operate. My list of points is not necessarily in order of importance. (A) An inability to weigh evidence properly. Different sorts of evidence are generally worthy of different amounts of weight. Of crucial importance here is eye-witness testimony. Considerable psychological research has been done into the strengths and weaknesses of such testimony, and this has been distilled into one of the key critical thinking texts, Norris & King's (1983) Test on Appraising Observations, whose Manual provides a detailed set of principles for judging the believability of observation statements. I suspect that no single factor contributes more, especially to assassination and UFO UCTs, than a failure to absorb and apply these principles. (B) An inability to assess evidence corruption and contamination. This is a particular problem with eyewitness testimony about an event that is subsequently the subject of considerable media coverage. And it is not helped by conventions or media events which bring such witnesses together to discuss their experiences--it is not for nothing that most court systems insist that witnesses do not discuss their testimony with each other or other people until after it has been given in court. There is a particular problem with American UCTs since the mass media there are not governed by sub judice constraints, and so conspiratorial theories can be widely aired in advance of any court proceedings. Again Norris & King's principles (particularly IV. 10 & 12) should warn against this.[5] But we do not need considerable delay for such corruption to occur: it may happen as part of the original act of perception. For instance, in reading accounts where a group of witnesses claim to have identified some phenomenon in the sky as a spaceship or other unknown form of craft, I often wonder if this judgement occurred to all of them simultaneously, or if a claim by one witness that this was a spaceship could not act to corrupt the judgmental powers of other witnesses, so that they started to see this phenomenon `as' a spacecraft in preference to some more mundane explanation. (C) Misuse or outright reversal of a principle of charity: wherever the evidence is insufficient to decide between a mundane explanation and a suspicious one, UCTs tend to pick the latter. The critical thinker should never be prejudiced against occupying a position of principled neutrality when the evidence is more or less equally balanced between two competing hypotheses. And I would argue that there is much to be said for operating some principle of charity here, of always picking the less suspicious hypothesis of two equally supported by the evidence.
My suspicion is that in the long run this would lead to a generally more economical belief structure, that reversing the principle of charity ultimately tends to blunt Occam's Razor, but I cannot hope to prove this. (D) The demonisation of persons and organisations. This may be regarded as either following from or being a special case of (C). Broadly, this amounts to moving from the accepted fact that X once lied to the belief that nothing X says is trustworthy, or taking the fact that X once performed some misdeed as particular evidence of guilt on other occasions. In the former case, adopting (D) would demonise us all, since we have lied on some occasion or other. This is especially problematic for UCTs involving government organisations or personnel, since all governments reserve the right to lie or mislead if they feel it is in the national interest to do so. But proof that any agency lied about one event ought not to be taken as significant proof that they lied on some other occasion. It goes against the character of the witness, as lawyers are wont to say, but then no sensible person should believe that governments are perfectly truthful. The second case is more difficult. It is a standard feature of Anglo-Saxon jurisprudence that the fact that X has a previous conviction should not be given in evidence against them, nor revealed to the jury until after a verdict is arrived at. The reasoning here is that generally evidence of X's previous guilt is not specific evidence for his guilt on the present occasion; it is possible for it to be the case that X was guilty then and is innocent now, and so the court should not be prejudiced against him. But there is an exception to this, at least in English law, where there are significant individual features shared between X's previous proven modus operandi and that of the present offence under consideration; evidence of a consistent pattern may be introduced into court. But, the rigid standards of courtroom proof aside, it is not unreasonable for the police to suspect X on the basis of his earlier conviction. This may not be fair to X (if he is trying to go straight) but it is epistemologically reasonable. The trouble for UCTs, as we shall see, is that most governments have a long record of previous convictions, and the true UC theorist may regard this not just as grounds for a reasonable suspicion but as itself evidence of present guilt. (E) The canonisation of persons or (more rarely) organisations. This may be regarded as the mirror-image of (D). Here those who are regarded as the victims of some set of events being explained conspiratorially tend to be presented, for the purpose of justifying the explanation, as being without sin, or being more heroic or more threatening to some alleged set of private interests than the evidence might reasonably support. (F) An inability to make rational or proportional means-end judgements. This is perhaps the greatest affront to Occam's Razor that one finds in UCTs. Such theories are often propounded with the explanation that some group of conspirators have been acting in furtherance of some aim or in order to prevent some action taking place. But one ought to ask whether such a group of conspirators were in a position to further their aim in some easier or less expensive or less risky fashion. Our assumption here is not the principle of charity mentioned in (C) above, that our alleged conspirators are too nice or moral to resort to nefarious activities. 
We should assume only that our conspirators are rational people capable of working out the best means to a particular end. This is a defeasible assumption--stupidity is not totally unknown in the political world--but it is nevertheless an assumption that ought to guide us unless we have evidence to the contrary. A difficulty that should be mentioned here is that of establishing the end at which the conspiracy is aimed, made more difficult for conspiracies that never subsequently announce these things. For the state of affairs brought about by the conspirators may, despite their best efforts, not be that at which they aimed. If this is what happens then making a rational means-end judgement about the actual result of the conspiracy may be a very different matter from doing the same thing for the intended results. (G) Evidence against a UCT is always evidence for. This is perhaps the point that would most have irritated Karl Popper with his insistence that valid theories must always be capable of falsification. But it is an essential feature of UCTs; they do not just argue that on the evidence available a different conclusion should be drawn from that officially sanctioned or popular. Rather, the claim is that the evidence supporting the official verdict is suspect, fraudulent, faked or coerced. And this belief is used to support the nature of the conspiracy, which must be one powerful or competent enough to fake all this evidence. What we have here is a difference between critically assessing evidence--something I support under (A) above--and the universal acid of hypercritical doubt. For if we start with the position that any piece of evidence may be false then it is open to us to support any hypothesis whatsoever. Holocaust revisionists would have us believe that vast amounts of evidence supporting the hypothesis of a German plot to exterminate Europe's Jews are fake. As Robert Anton Wilson (1989: 172) says, `a conspiracy that can deceive us about 6,000,000 deaths can deceive us about anything, and that it takes a great leap of faith for Holocaust Revisionists to believe that World War II happened at all.' Quite so. What is needed here is what I might term meta-evidence, evidence about the evidence. My claim would be that the only way to keep Occam's Razor shiny here is to insist on two different levels of critical analysis of evidence. Evidence may be rejected if it doesn't fit a plausible hypothesis--this is what everyone must do in cases where there is apparently contradictory evidence, and there can be no prima facie guidelines for rejection here apart from overall epistemological economy. But evidence may only be impeached--accused of being deliberately faked, forged, coerced, etc.--if we have further evidence of this forgery: that a piece of evidence does not fit our present hypothesis is not by itself any warrant for believing that the evidence is fake. (H) We should put no trust in what I here term the fallacy of the spider's web. That A knows B and that B knows C is no evidence at all that A has even heard of C. But all too often UCTs proceed in this fashion, weaving together a web of conspirators on the basis of who knows who. But personal acquaintance is not necessarily a transitive relation. The falsity of this belief in the epistemological importance of webs of relationships can be demonstrated with reference to the show-business party game known sometimes as `Six Degrees of Kevin Bacon'.
The object of the game is to select the name of an actor or actress and then link them to the film-actor Kevin Bacon through no more than six shared appearances. (E.g. A appeared with B in film X, B appeared with C in film Y, C appeared with D in film Z, and D appears in Kevin Bacon's latest movie: thus we link A to Bacon in four moves.) The plain fact is that most of us know many people, and important people in public office tend to have dealings with a huge number of people, so just about anybody in the world can be linked to somebody else in a reasonably small number of such links. I can demonstrate the truth of this proposition with reference to my own case, that of a dull and unworldly person who doesn't get out much. For I am separated by only two degrees from Her Majesty The Queen (for I once very briefly met the then Poet Laureate, who must himself have met the Queen, if only at his inauguration), which means I am separated by only three degrees from all the many important political figures that the Queen herself has met, including names like Churchill and De Gaulle. This further means that only four degrees separate me from Josef Stalin (met by Churchill at Yalta) and just five degrees from Adolf Hitler (who never met Churchill but did meet prewar Conservative politicians like Chamberlain and Halifax, who were known to Churchill). Given the increasing amounts of travel and communication that have taken place in this century, it should be possible to connect me with just about anybody in the world in the requisite six stages. But so what? Connections like these offer the possibility of communication and influence, but no evidence of their actuality.
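To make the point concrete, here is a minimal sketch (in Python; the graph, the names and the function are all invented for illustration) of the kind of search the game describes: a breadth-first search for the shortest chain of acquaintances between two people. Run on a graph modelled on my example above, it duly reports a five-link chain to Hitler; what it plainly does not report is anything about influence or collusion along that chain.

    from collections import deque

    def degrees_of_separation(graph, start, goal):
        # Breadth-first search: returns the shortest chain of
        # acquaintances linking start to goal, or None if unconnected.
        if start == goal:
            return [start]
        seen = {start}
        queue = deque([[start]])
        while queue:
            path = queue.popleft()
            for person in graph.get(path[-1], ()):
                if person == goal:
                    return path + [person]
                if person not in seen:
                    seen.add(person)
                    queue.append(path + [person])
        return None

    # A hypothetical acquaintance graph modelled on the example above.
    graph = {
        'author':        ['poet laureate'],
        'poet laureate': ['author', 'the Queen'],
        'the Queen':     ['poet laureate', 'Churchill'],
        'Churchill':     ['the Queen', 'Stalin', 'Chamberlain'],
        'Chamberlain':   ['Churchill', 'Hitler'],
        'Stalin':        ['Churchill'],
        'Hitler':        ['Chamberlain'],
    }

    chain = degrees_of_separation(graph, 'author', 'Hitler')
    print(' -> '.join(chain))         # author -> ... -> Hitler
    print(len(chain) - 1, 'degrees')  # 5 degrees

The sketch makes the asymmetry obvious: the search succeeds on almost any sufficiently dense graph, so the existence of a short chain tells us nothing beyond the density of the graph itself.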
(I) The classic logical fallacy of post hoc ergo propter hoc. This is the most common strictly logical fallacy to be found in political conspiracy theories, especially those dealing with assassinations and suspicious deaths. Broadly, it takes the shape of claiming that, since event X happened after the death of A, A's death was brought about in order to cause or facilitate the occurrence of X. The First World War happened after the death of the Archduke Franz Ferdinand, and there is clearly a sense in which it happened because of his death: there is a causal chain leading from the death to Austrian outrage, to a series of Austrian demands upon Serbia, culminating in Austria's declaration of war against Serbia, Russia's declaration against Austria, and, via a series of interlinked treaty obligations, most of the nations of Europe ending up at war with one another. Though these effects of the assassination may now appear obvious, one problem for the CT proponent is that hindsight clarifies these matters enormously: such a progression may not have been at all obvious to the people involved in these events at the time. And it is even harder to believe that bringing about such an outcome was in any of their interests. (Austria plainly had an interest in shoring up its authority in the Balkans but not, given its many structural weaknesses, in engaging in a long and destructive war. The outcome, which anyone might have predicted as likely, was the economic ruin and subsequent political dissolution of the entire Austro-Hungarian empire.) Attempting to judge the rationality of a proposed CT as an explanation for some such set of events runs into two problems. Firstly, though an outcome may now seem obvious to us, it may not have appeared so obvious to people at the time, either in its nature or in its cost.

Thus there may well have been people who thought that assassinating Franz Ferdinand in order to trigger a crisis in relations between Austria and Serbia was a sensible policy move, precisely because they did not anticipate a general world war occurring as a result, and may have thought a less expensive conflict--a limited war of independence between Serbia and Austria--a price worth paying for the possible outcome of freeing more of the Balkans from Austrian domination. And secondly, if we cannot attribute hindsight to the actors in such events, neither can we ascribe to them a perfect level of rationality: it is always possible for people engaged in such actions to possess a poor standard of means-end judgement. But, bearing these caveats in mind, one might still wish to propound two broad principles here for distinguishing whether an event is a genuine possible motive for an earlier conspiracy or just an instance of post hoc ergo propter hoc. Firstly, could any possible conspirators, with the knowledge they possessed at the time, have reasonably foreseen such an outcome? And secondly, granted that such an outcome could have been desired, are the proposed conspiratorial events a rational method of bringing about such an outcome? That a proposed CT passes these tests is, of course, no guarantee that we are dealing with a genuine conspiracy; but a failure to pass them is a significant indicator of an unwarranted CT.

4. A case-study of CT thinking--the assassination of President Kennedy

With these diagnostic indicators of poor critical thinking in place, I would now like to apply them to a typical instance of CT (and, to my mind, unwarranted CT) thinking.[6] On 22 November 1963 President John F. Kennedy was assassinated in Dallas, Texas. Two days later, the man accused of his murder, Lee Harvey Oswald, was himself murdered in the basement of the Dallas Police Headquarters. These two events (and perhaps particularly the second, coming as it did so rapidly after the first) led to a number of accusations that Kennedy's death had been the result of a conspiracy of which Oswald may or may not have been a part. Books propounding such theories emerged even before the Warren Commission issued its report on the assassination in August 1964. Writing at this time in his essay `The Paranoid Style in American Politics', the historian Richard Hofstadter could say: "Conspiratorial explanations of Kennedy's assassination have a far wider currency in Europe than they do in the United States." (Hofstadter 1964: 9) Hofstadter's view of the American paranoid style was one of small cults of a right-wing or racist or anti-Catholic or anti-Freemason bent, whose descendants are still to be found in the Ku Klux Klan, the John Birch Society, the Michigan Militia, etc. But within a couple of years of the emergence of the Warren Report and, more importantly, its 26 volumes of evidence, a new style of conspiratorial thinking emerged. While some right-wing conspiratorial theories remained[7], the bulk of the conspiracy theories propounded to explain the assassination adopted a position from the left of centre, accusing or assuming that some conspiracy of right-wing elements and/or some part of the US Government itself had been responsible for the assassination. A complete classification of such CTs is not necessary here[8], but I ought perhaps to point to a philosophically interesting development in the case.
As a result of public pressure resulting from the first wave of CT literature, a congressional committee was established in 1977 to investigate Kennedy's assassination; it instituted a thorough examination of the available evidence and was on the verge of producing a report endorsing the Warren Commission's conclusions when it discovered what was alleged to be a sound recording of the actual assassination. Almost solely on the basis of this evidence--which was subsequently discredited by a scientific panel put together by the Department of Justice--the congressional committee decided that there had probably been a conspiracy, asserting on the basis of very little evidence that the Mafia was the most probable source of this conspiracy. What was significant about this congressional investigation was the effect of its thorough examination of the forensic and photographic evidence in the case. Many of the alleged discrepancies in this evidence, which had formed the basis for the many calls to establish such an investigation, were shown to be erroneous. This did not lead to the refutation of CTs but rather to a new development: the balance of CT claims now shifted from arguing that there existed evidence supporting a conspiratorial explanation to arguing that all or most of the evidence supporting the lone-assassin hypothesis had been faked--a new level of epistemological complexity.

A representative CT of this type was propounded in Oliver Stone's hit 1991 Hollywood film JFK.[9] It asserts that a coalition of interests within the US governmental structure, including senior members of the armed forces, FBI, CIA, Secret Service and various Texas law-enforcement agencies, together with the assistance of members of organised crime, conspired to arrange the assassination of President Kennedy and the subsequent framing of an unwitting or entirely innocent Oswald for the crime. Motives for the assassination vary, but most such CTs now agree on such motives as (a) preventing Kennedy, after his supposed re-election, from reversing US involvement in Vietnam, (b) protecting right-wing industrial interests, especially Texan oil interests, from what were regarded as possible depredations by the Kennedy administration, (c) instigating another and more successful US invasion of Cuba, and (d) halting the judicial assault waged by the Kennedy administration under Attorney General Robert Kennedy against the interests of organised crime.

Such a CT scores highly on Keeley's five characteristic features of Unwarranted Conspiracy Theories outlined above. It runs counter to the official explanation of the assassination, though it has now itself become something of a popular orthodoxy, one apparently subscribed to by a majority of the American population. The alleged intentions behind the conspiracy are indeed nefarious, using the murder of a democratically-elected leader to further the interests of a private cabal. And it does seem to seek to tie together seemingly unrelated events. The most obvious of these is in terms of the assassination's alleged motive: it seeks to link the assassination with the subsequent history of America's involvement in Vietnam. But a number of other connections are made at other levels of explanation. For instance, the deaths of various people connected in one way or another with the assassination are linked together as being in some way related to the continuing cover-up by the conspirators.
Keeley's fourth claim, that the truth behind an event explained by a UCT is typically a well-guarded secret, is, as I pointed out above, much harder to justify now in a climate where most people apparently believe in the existence of such a conspiracy. But Keeley's fifth claim, that the chief tool here is errant data, remains true. The vast body of published evidence on the assassination has been picked over with remarkable care for signs of discrepancy and contradiction, signs which are regarded as providing the strongest evidence for such a conspiracy. What now seems to me an interesting development in these more paranoid UCTs, as I mention above, is the extent to which unerrant data--evidence that fits the official account--is now itself treated as a product of the conspiracy. But how do these Kennedy assassination CTs rate against my own list of what I regard as critical thinking weaknesses?

(A) An inability to weigh evidence properly. Here they score highly. Of particular importance is the inability to judge the reliability or lack thereof of eyewitness testimony, and an unwillingness or inability to discard evidence which does not fit. On the first point, most Kennedy CTs place high reliance on the small number of people who claimed at the time (and the somewhat larger number who claim now--see point (B) below) that they heard more than three shots fired in Dealey Plaza or that they heard shots fired from some other location than the Book Depository, both claims that, if true, would rule out the possibility of Oswald's acting alone. Since the overwhelming number of witnesses whose opinions have been registered did not hear more than three shots, and tended to locate the origin of these shots in the general direction of the Depository (which, in an acoustically misleading arena like Dealey Plaza, is perhaps the best that could be hoped for), the economical explanation is to assume, unless further evidence arises, that the minority here are mistaken. Since the assassination was an unexpected, rapid and emotionally laden event--all key features for weakening the reliability of observation, according to the Principles of Appraising Observations in Norris & King (1983)--it is only to be expected that there would be a significant portion of inconsistent testimony. The wonder here is that there is such a high degree of agreement over the basic facts.

We find a similar misuse of observational principles in conspiratorial interpretations of the subsequent murder of Police Officer Tippit, where the majority of witnesses who clearly identified Oswald as the killer are downplayed in favour of the minority of witnesses--some at a considerable distance and all considerably surprised by the events unfolding in front of them--who gave descriptions of the assailant which did not match Oswald. Experienced police officers are used to eye-witness testimony of sudden and dramatic events varying considerably and, like all researchers faced with a large body of evidence containing discrepancies, must discard some evidence as worthless. Since Oswald was tracked almost continuously from the scene of Tippit's shooting to the site of his own arrest, and since forensic evidence linked the revolver found on Oswald to the shooting, the most economical explanation again is that the majority of witnesses were right in their identification of Oswald and the minority were mistaken. This problem of being unable to discard errant data is central to the creation of CTs since, as Keeley says: "The role of errant data in UCTs is critical.
The typical logic of a UCT goes something like this: begin with errant facts.... The official story all but ignores this data. What can explain the intransigence of the official story tellers in the face of this and other contravening evidence? Could they be so stupid and blind? Of course not; they must be intentionally ignoring it. The best explanation is some kind of conspiracy, an intentional attempt to hide the truth of the matter from the public eye." (Keeley 1999: 199)

Such a view in the Kennedy case ignores the fact that the overwhelming amount of errant data on which CTs have been constructed, far from being hidden, was openly published in the 26 volumes of Warren Commission evidence. This has led to accusations that it was `hidden in plain view', but one can't help feeling that a more efficient conspiracy would have suppressed such inconvenient data in the first place. The standard position--that errant data is likely to be false, that eye-witness testimony and memory are sometimes unreliable, that persisting pieces of physical evidence are preferable, etc.; in short, that Occam's Razor will insist on cutting and throwing away some of the data--is constantly rejected in Kennedy CT literature.

Perhaps the most extravagant example of this, amounting almost to a Hegelian synthesis of assassination conspiracy theories, is Lifton (1980). Seeking to reconcile the major body of testimony that Kennedy was shot from behind with a small body of errant data suggesting that he possessed a wound in the front of his body, the author dedicates over 600 pages to the construction of the most baroque conspiracy theory imaginable. In Lifton's thesis, Kennedy was shot solely from the front, and the conspirators then gained access to his body during its journey back to Washington and were able to doctor it so that at the subsequent post mortem examination it showed signs of being shot only from the rear. Thus the official medical finding that Kennedy was only shot from the rear can be reconciled with the general CT belief that he was shot from the front (too) in a theory that seems to show that everybody is right. Apart from the massive complication of such a plan--clearly going against my point (F)--and its medical implausibility, such a thesis actually reverses Occam's Razor by creating more errant data than there was to start with. For if Kennedy was shot only from the front, we now need some explanation for why the great majority of over 400 witnesses at the scene believed that the shots were coming from behind him! And this challenge is one that is ducked by the great majority of CTs: if minority errant data is to be preferred as reliable, then we require some explanation for the presence of the majority data now being rejected.

But Lifton at least got one thing right. In accounting for the title of his book he writes: "The `best evidence' concept, impressed on all law students, is that when you seek to determine a fact from conflicting data, you must arrange the data according to a hierarchy of reliability. All data are not equal. Some evidence (e.g. physical evidence, or a scientific report) is more inherently error-free, and hence more reliable, than other evidence (e.g. an eye-witness account).
The "best" evidence rules the conclusion, whatever volume of contrary evidence there may be in the lower categories.[10] Unfortunately Lifton takes this to mean that conspirators who were able to decide the nature of the autopsy evidence would thereby lay down a standard for judging or rejecting as incompatible the accompanying eye-witness testimony. But given the high degree of unanimity among eye-witnesses on this occasion, and given the existence of corroborating physical evidence (a rifle and cartridges forensically linked to the assassination were found in the Depository behind Kennedy, the registered owner of the rifle was a Depository employee, etc.), all that the alleged body-tampering could hope to achieve is make the overall body of evidence more suspicious because more contradictory. Only if the body of reliable evidence was more or less balanced between a conspiratorial and non-conspiratorial explanation could this difficulty be avoided. But it is surely over-estimating the powers, predictive and practical, of such a conspiracy that they could hope to guarantee this situation beforehand. (B) An inability to assess evidence corruption and contamination. Though, as I note above, such contamination of eye-witness testimony may occur contemporaneously, it is a particular problem for the more long-standing CTs. In the Kennedy case, many witnesses of the assassination who at the time gave accounts broadly consistent with the explanation have subsequently amended or extended their accounts to include material that isn't so consistent. Witnesses, for instance, who at the time located all the shots as coming from the Book Depository subsequently gave accounts in which they located shots from other directions, most notably the notorious `grassy knoll', or later told of activity on the knoll which they never mentioned in their original statements. (Posner (1993) charts a number of these changes in testimony.) What is interesting about many of these accounts is that mundane explanations for these changes--I later remembered that..., I forgot to mention that...--tend to be eschewed in favour of more conspiratorial explanations. Such witnesses may deny that the signed statements made at the time accurately reflect what they told the authorities, or may say that the person interviewing them wasn't interested in writing down anything that didn't cohere with the official explanation of the assassination, and so on. Such explanations face serious difficulties. For one thing, since many of these statements were taken on the day of the assassination or very shortly afterwards, it would have to be assumed that putative conspirators already knew which facts would cohere with an official explanation and which wouldn't, which may imply an implausible degree of foreknowledge. A more serious problem is that these statements were taken by low-level members of the various investigatory bodies, police, FBI, Secret Service, etc.; to assert that such statements were manipulated by these people entails that they were members of the conspiracy. And this runs up against a practical problem for mounting conspiracies, that the more people who are in a conspiracy, the harder it is going to be to enforce security. 
A more plausible explanation for these changes in testimony might be that witnesses who provided testimony broadly supportive of the official non-conspiratorial explanation subsequently came into contact with some of the enormous quantity of media coverage suggesting less orthodox explanations and, consciously or unconsciously, have adjusted their recollections accordingly. The likelihood of such things happening after a sufficiently thorough exposure to alternative explanations may underlie Norris & King's principle II.1: "An observation statement tends to be believable to the extent that the observer was not exposed, after the event, to further information relevant to describing it. (If the observer was exposed to such information, the statement is believable to the extent that the exposure took place close to the time of the event described.)"[11] Their parenthesised time principle clearly renders a good deal of more recent Kennedy eye-witness testimony dubious after three and a half decades of exposure to vast amounts of further information in the mass media, not helped by `assassination conferences' where eye-witnesses have met and spoken with each other. One outcome of these two points is that, in the unlikely event of some living person being seriously suspected of involvement in the assassination, a criminal trial would be rendered difficult if not impossible. Such are the published discrepancies now within and between witnesses' testimonies that there would be enormous difficulties in attempting to construct a plausibly consistent defence or prosecution narrative on their basis.

(C) Misuse or outright reversal of a principle of charity. Where an event may have either a suspicious or an innocent explanation, and there is no significant evidence to decide between them, CTs invariably opt for the suspicious explanation. In part this is due to a feature deriving from Keeley's point (3) above, about CTs seeking to tie together seemingly unrelated events, but perhaps taken to a new level. Major CTs seek a maximally explanatory hypothesis, one which accounts for all of the events within its domain, and so they leave no room for the out-of-the-ordinary event, the unlikely, the accident, which has no connection whatsoever with the conspiratorial events being hypothesised. The various Kennedy conspiracy narratives contain a large number of these events, dragooned into action on the assumption that no odd event could have an innocent explanation.

There is no better example of this than the Umbrella Man, a character whose forcible inclusion in conspiratorial explanations demonstrates well how a determined attempt to maintain this reversed principle of charity may lead to the most remarkable deformities of rational explanation. When pictorial coverage of the assassination entered the public domain--in newspaper photographs within the next few days, and more prominently in stills from the Zapruder movie film of the events subsequently published in LIFE magazine--it became clear that one of the closest bystanders to the presidential limousine was a man holding a raised umbrella, and this at a time when it was clearly not raining. This odd figure rapidly became the focus of a number of conspiratorial hypotheses. Perhaps the most extreme of these originates with Robert Cutler (1975).
According to Cutler, the Umbrella Man had a weapon concealed within the umbrella enabling him to fire a dart or flechette, perhaps drugged, into the president's neck, possibly for the purpose of immobilising him while the other assassins did their work. The only actual evidence to support this hypothesis is that the front of Kennedy's neck did indeed possess a small punctate wound, described by the medical team treating him as probably a wound of entrance but clearly explainable, in the light of the full body of forensic evidence, as a wound of exit for a bullet fired from above and behind the presidential motorcade. Consistent, in other words, with being the work of Oswald. There is no other supportive evidence for Cutler's hypothesis. (Cutler, of course, explains this in terms of the conspirators being able to control the subsequent autopsy and so conceal any awkward evidence; he thus complies with my principle (G) below.) More importantly, it seems inherently unlikely on other grounds. Since the Umbrella Man was standing on the public sidewalk, right next to a number of ordinary members of the public and in plain view of hundreds of witnesses, many of whom would have been looking at him precisely because he was so close to the president, it seems unlikely that a conspiracy could guarantee that he would get away with his lethal behaviour without being noticed by someone. And the proposed explanation for all this rigmarole, the stunning of the target, is entirely unnecessary: most firearms experts agree that the president was a pretty easy target unstunned.

If Cutler's explanation hasn't found general favour with the conspiracy community, another has, but this too has equally strange effects upon clear reasoning. The first version of this theory has the Umbrella Man signalling the presence of the target--movie-film of the assassination clearly shows that the raised umbrella is being waved or shaken. This hypothesis seems to indicate that the conspiracy had hired assassins who couldn't be relied upon to recognise the President of the United States when they saw him seated in his presidential limousine--the one with the president's flag on--next to the most recognisable first lady in American history. An apparently more plausible hypothesis is that it is the Umbrella Man who gives the signal for the team of assassins to open fire. (A version of this hypothesis can still be seen as late as 1992 in the movie JFK.) What I find remarkable here is that nobody seems to have thought this theory through at all. Firstly, the Umbrella Man is clearly on the sidewalk a few feet from the president while our alleged assassins are located high up in the Book Depository, in neighbouring buildings, or on top of the grassy knoll way to the front of the president. How, then, can he know what they can see from their different positions? How can he tell from his location that they now have clear shots at the target? (Dealey Plaza is full of trees, road signs and other obstructions, not to mention large numbers of police officers and members of the public who might be expected to get in the way of a clear view here.) And secondly, such an explanation actually weakens the efficiency of the alleged assassination conspiracy. (Here my limited boyhood experience of firing an air-rifle with telescopic sights finally comes in handy!) In order to make sense of the Umbrella Man as signaller, something like the following sequence of events must occur.
Each rifleman focuses upon the presidential target through his telescopic sight, tracking the target as it moves at some ten to twelve miles per hour. Given the very narrow field of view of such sights, he cannot see the Umbrella Man. To witness the signal, he must keep taking his eye away from the telescopic sight, refocussing it until he can see the distant figure on the sidewalk, and, when the signal is given, put his eye back to the sight, re-focus again, re-adjust the position of the rifle since the target has continued to move while he was not looking at it, and then fire. This is not an efficient recipe for accurate target-shooting. Oliver Stone eliminates some of these problems in the version he depicts in the movie JFK. Here each of his three snipers is accompanied by a spotter, equipped with walkie-talkie and binoculars. While the sniper focuses on the target, the spotter looks out for the signal from the Umbrella Man and then orally communicates the order to open fire. But now, given what I have already said about the problem with the Umbrella Man's location, it is hard to see what purpose he serves that could not be better served by the spotters. He drops out of the equation. He is, as Wittgenstein says somewhere, a wheel that spins freely because it is not connected to the rest of the machinery. Occam's Razor would cut him from the picture, but Occam is no firm favourite of UCT proponents.

In 1978, when the House Select Committee on Assassinations held public hearings on the Kennedy case, a Mr. Louis de Witt came forward to confess to being the Umbrella Man. He claimed that he came to Dealey Plaza in order to barrack the president as he went past, and that he was carrying a raised umbrella because he had heard that, perhaps for some obscure reason connected with the president's father's stay in London as US Ambassador during the war, the Kennedy family had a thing about umbrellas. De Witt hadn't come forward in the 15 years since the assassination because he had had no idea about the proposed role of the Umbrella Man in the case. This part of his explanation seems to me eminently plausible: those of us with an obsessive interest in current affairs find it hard to grasp just how many people never read the papers or watch TV news. There is something almost endearing about de Witt, an odd character whose moment of public eccentricity seems to have enmired him in decades of conspiratorial hypotheses without his realising it. Needless to say, conspiracy theorists did not accept de Witt's testimony at face value. Some argued that he was a stooge put forward by the authorities to head off investigation into the real Umbrella Man, others that de Witt himself must be lying to conceal a more sinister role in these events, though I know of no evidence to support either of these conclusions.

What this story makes clear is that an unwillingness to dismiss discrepant events as unrelated--an unwillingness to abandon this reversed principle of charity whereby all such events are conspiratorial unless clearly proven otherwise--rapidly leads to remarkable mental gymnastics, to hypotheses that are excessively complex and even internally inconsistent. (The Umbrella Man as signaller makes the assassination harder to perform.) But, such are the ways of human psychology, once such an event has been firmly embedded within a sufficiently complex hypothesis, no amount of contradictory evidence would seem to be able to shift it.
The Umbrella Man has by now been invested with such importance as to become one of the great myths of the assassination, against which mere evidentiary matters can have no effect.

(D) The demonisation of persons and organisations. This weakness takes a number of forms in the Kennedy case, which I shall treat separately.

(i) Guilt by reputation. The move from the fact that some body--the FBI, the CIA, the mafia, the KGB--has a proven record of wrong-doing in the past to the claim that it was capable of wrong-doing in the present case doesn't seem unreasonable. But the stronger claim that past wrong-doing is in some sense evidence for present guilt is much more problematic, particularly when differences between the situations are overlooked. This is especially true of the role of the CIA in Kennedy CTs. Senator Church's 1976 congressional investigation into the activities of US intelligence agencies provided clear evidence that in the period 1960-63 elements of the CIA, probably under the instructions of or at least with the knowledge of the White House, had conspired with Cuban exiles and members of organised crime to attempt the assassination of the Cuban leader Fidel Castro. Evidence also emerged of CIA involvement in the deaths of other foreign leaders--Trujillo in the Dominican Republic, Lumumba in the Congo, etc. These findings were incorporated in Kennedy CTs as evidence to support the probability that the CIA, or at least certain members of it, were also responsible for the death of Kennedy. Once an assassin, always an assassin? Such an argument neglects the fact that the CIA could reasonably believe that they were acting in US interests, possibly lawfully, since they were acting under the guidance or instruction of the White House. This belief is not open to them in the case of killing their own president, a manifestly unlawful act and one hard to square with furthering US interests. (Evidence that Soldier X willingly shoots at the soldiers of other countries when ordered to do so is no evidence that he would shoot at soldiers of his own country, with or without orders. The situations are plainly different.) At best the Church Committee evidence indicated that the CIA had the capacity to organise assassinations, not that it had either the willingness or the reason to assassinate its own leader.

(ii) Guilt by association. This takes the form of impeaching the credibility of any member of a guilty organisation. Since both the FBI and the CIA (not to mention, of course, the KGB or the mafia) had proven track records of serious misbehaviour in this period, it is assumed that all members of these organisations, and all their activities, are equally guilty. Thus the testimony of an FBI agent can be impeached solely on the grounds that he is an FBI agent, and any activity of the CIA can be characterised as nefarious solely because it is being carried out by the CIA. Such a position ignores the fact that such organisations have many thousands of employees and carry out a wide range of mundane duties. It is perfectly possible for a member of such an organisation to be an honest and patriotic citizen whose testimony is as believable as anyone else's. Indeed, given my previous point that for security reasons the smaller the conspiratorial team the more likely it is to be successful, it would seem likely that the great majority of members of such organisations would be innocent of any involvement in such a plot.
(I would hazard a guess that the same holds true of the KGB and the mafia, both organisations with a strong interest in security.)

(iii) Exaggerating the power and nature of organisations. Repeatedly in such CTs we find the assumption that organisations like the CIA or the mafia are all-powerful, all-pervasive, capable of extraordinary foreknowledge and planning.[12] This assumption has difficulty in explaining the many recorded instances of inefficiency or lack of knowledge that these organisations constantly demonstrate. (There is a remarkable belief in conspiratorial circles, combining political and paranormal conspiracies, that the CIA has or had access to a circle of so-called `remote viewers', people with extra-sensory powers who were able through paranormal means to provide it with information about the activities of America's enemies that couldn't be discovered in any other way. Such a belief has trouble in accommodating the fact that the CIA was woefully unprepared for the sudden break-up of the Soviet Union and Warsaw Pact, or the fact that America's intelligence organisations first learned of the start of the Gulf War when Kuwaiti embassy employees looked out of the window and saw Iraqi tanks coming down the road! Sadly, it appears to be true that people calling themselves remote viewers took very substantial fees from the CIA, though whether this tells us more about the gullibility of people in paranoid institutions or their carefree attitude towards spending public money I should not care to say.) The more extreme conspiracy theories may argue that such organisations are only pretending to be inefficient, in order to fool the public about the true level of their efficiency. Such a position is, as Popper would no doubt have pointed out, not open to refutation.

(iv) Demonising individuals. As with organisations, so with people. Once plausible candidates for roles in an assassination conspiracy are identified, they are granted remarkable powers and properties, their wickedness clearly magnified. In Kennedy CTs there is no better example of this than Meyer Lansky, the mafia's `financial wizard'. Lansky was a close associate of America's premier gangster of the 1940s, Charles `Lucky' Luciano. Not actually a gangster himself (and, technically, not actually a member of the mafia either, since Lansky--as a Jew--could not join an exclusively Sicilian brotherhood), Lansky acted as a financial adviser. He organised gambling activities for Luciano and probably played a significant role in mafia involvement in the development of Las Vegas, and in subsequent investments of the Luciano family's money, including those in pre-revolutionary Cuba, after Luciano's deportation to Sicily. So much is agreed. But Lansky in CT writing looms ever larger, as a man of remarkable power and influence, ever ready to use it for malign purposes, a vast and evil spider at the centre of an enormous international web, maintaining his influence with the aid of the huge sums of money which organised crime was reaping from its empire.[13] Thus there is no nefarious deed concerning the assassination or its cover-up with which Lansky cannot be linked. This picture wasn't dented in the least by Robert Lacey's detailed biography of Lansky, published in 1991.
Lacey, drawing upon a considerable body of publicly available evidence--not least the substantial body generated by Lansky's lawsuit to enable him, as a Jew, to emigrate to Israel--was able to show that Lansky, far from being the mob's eminence grise, was little more than a superannuated book-keeper. The arch-manipulator, supposedly empowered by the mafia's millions, led a seedy retirement in poverty and was on record as being unable to afford healthcare for his relatives. The effect of reading Lacey's substantially documented biography is rather like that scene in `The Wizard of Oz' when the curtain is drawn back and the all-powerful wizard is revealed to be a very ordinary little man. The 1990s saw the publication of a remarkable amount of material about the workings of American organised crime, much of it gleaned from FBI and police surveillance during the successful campaign to imprison most of its leaders. This material reveals that mafia bosses tend to be characterised by a very limited vocabulary, a remarkable propensity for brutality and a considerable professional cunning often mixed with truly breath-taking stupidity. That they could organise a large-scale assassination conspiracy, and keep quiet about it for more than thirty-five years, seems even less likely. As I point out below, they would almost certainly not have wanted to.

(E) The canonisation of persons or (more rarely) organisations. In the Kennedy case, this has taken the form of idealising the President himself. In order to make a conspiratorial hypothesis look more plausible under (F) below, it is necessary to make the victim look as much as possible like a significant threat to the interests of the putative conspirators. In this case, Kennedy is depicted as a liberal politician, one who was a threat to established economic interests, one who took a lead in the contemporary campaign to end institutionalised discrimination against black people, and, perhaps most importantly, one who was or became something of a foreign policy dove, supporting less confrontational policies in the Cold War to the extent of being prepared to terminate US involvement in South Vietnam. This canonisation initially derives from the period immediately after the assassination, a period marked by the emergence of a number of works about the Kennedy administration from White House insiders like Theodore Sorensen, Pierre Salinger and the Camelot house historian, Arthur Schlesinger, works which tended to confirm the idealisation of the recently dead president, particularly when implicitly compared with the difficulties faced by the increasingly unpopular Lyndon Johnson.

From the mid-1970s Kennedy's personal character came under considerable criticism, partly resulting from the publication of biographies covering his marriage and sexual life, and the personal lives of the Kennedy family. More important, for our purposes, was the stream of revelations which emerged from the congressional investigations of this time, which indicated the depth of feeling in the Kennedy White House about Cuba; most important here were the Church Committee's revelations that the CIA had conspired with members of organised crime to bring about the assassination of Fidel Castro. These, coming hard on the heels of the revelations of various criminal conspiracies within the Nixon White House, stoked up the production of CTs.
(And provided a new motivation for the Kennedy assassination: that Castro or his sympathisers had found out about these attempts and had Kennedy killed in revenge.) But they also indicated that the Kennedy brothers were much harder cold warriors than had perhaps previously been thought. The changing climate of the 1980s brought a new range of biographies and memoirs--Reeves, Parmet, Wofford, etc.--which situated Kennedy more firmly in the political mainstream. It became clear that he was not by any means an economic or social liberal--on the question of racial segregation he had to be pushed a good deal, since he tended to regard the activities of Martin Luther King and others as obstructing his more important social policies. And Kennedy adopted a much more orthodox stance on the cold war than many had allowed: this was, after all, the candidate who got himself elected in 1960 by managing, in the famous `missile gap' affair, to appear tougher on communism than Richard Nixon, no mean feat. Famously, Kennedy adopted a more moderate policy during the Cuban missile crisis than some of those recommended by his military advisers, but this can be explained more in terms of Kennedy having a better grasp of the pragmatics of the situation than in terms of his being a foreign policy liberal of some sort.

This changing characterisation of Kennedy, this firm re-situating of his administration within the central mainstream of American politics--a mainstream which appears considerably to the right in European terms--has been broadly rejected by proponents of Kennedy assassination CTs (some of whom also reject the critical characterisation of his personal life). The reason for this is that it plainly undercuts any motivation for some part of the American political establishment to have Kennedy removed. It is unlikely that any of Kennedy's reforming policies, economic or social, could seriously have been considered such a threat to establishment interests. It is even more unlikely when one considers that much of Kennedy's legislative programme was seriously bogged down in Congress and was unlikely to be passed in anything but a heavily watered-down form during his term. Much of this legislation was forced through after the assassination by Kennedy's successor, Lyndon Johnson, a much more astute and experienced parliamentarian. The price for this social reform, though, was Johnson's continued adherence to the verities of cold war foreign policy over Vietnam. I leave consideration of Kennedy's Vietnam policy to the next section.

(F) An inability to make rational or proportional means-end judgements. The major problem here for any Kennedy assassination CT is to come up with a motive. Such a motive must not only be of major importance to putative conspirators, it must also rationally justify a risky, expensive--and often astonishingly complicated--illegal conspiracy. Which is to say that such conspirators must see the assassination as the only or best way of bringing about their aim. The alleged motives can be broadly divided into two categories. The first is revenge: Kennedy was assassinated in revenge for the humiliation he inflicted upon Premier Khrushchev over the Cuban missile crisis, or for plotting the assassination of Fidel Castro, or for double-crossing organised crime over alleged agreements made during his election campaign. The problem with each of these explanations is that the penalties likely to be suffered if one is detected far outweigh any rational benefits.
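The lopsidedness of this balance sheet can be made vivid with a toy expected-value comparison (a minimal sketch only; the function and every number in it are invented purely for illustration, the point being the shape of the comparison, not the values):

    # Toy expected-value comparison between a putative assassination
    # conspiracy and a mundane legal alternative. All figures are
    # invented for illustration; 'benefit' and 'penalty' are in
    # arbitrary units of utility.

    def expected_value(p_success, benefit, p_detection, penalty, cost):
        # Chance-weighted gain, minus the chance-weighted penalty of
        # detection, minus the fixed cost of mounting the attempt.
        return p_success * benefit - p_detection * penalty - cost

    conspiracy = expected_value(p_success=0.5, benefit=100,
                                p_detection=0.7, penalty=1000, cost=50)
    legal_campaign = expected_value(p_success=0.6, benefit=100,
                                    p_detection=0.0, penalty=0, cost=10)

    print(conspiracy)      # -700.0: the penalty term swamps any gain
    print(legal_campaign)  # 50.0: the mundane route dominates

Unless the putative conspirators assign a near-zero probability to detection, or an almost limitless value to the outcome, the comparison always comes out this way; which is why the revenge motives canvassed above look so irrational.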
Had Castro's hand been detected behind the assassination--something which Johnson apparently thought all too likely--this would inevitably have swung American public opinion behind a US military invasion of Cuba and the overthrow of Castro's rule. If Khrushchev had been identified as the ultimate source of the assassination, the international crisis would have been even worse, and could well have edged the world considerably closer to nuclear war than it came during the Cuban missile crisis. One can only make sense of such explanations on the basis of an assumption that the key conspirators are seriously irrational in this respect, and this is an assumption that we should not make without some clear evidence to support it.

The second category of explanations for the assassination is instrumental: Kennedy was assassinated in order to further some specific policy or to prevent him from furthering some policy which the conspirators found anathema. Here candidates include: to protect Texas oil-barons' economic interests, to frustrate the Kennedy administration's judicial assault upon organised crime, to bring about a more anti-Castro presidency, and--the one that plays the strongest role in contemporary Kennedy CTs such as Oliver Stone's--to prevent an American withdrawal from Vietnam. A proper response to the suggestion of any of these as a rational motive for the assassination should be to embark upon a brief cost-benefit analysis of the kind just sketched. We have to factor in not only the actual costs of organising such a conspiracy (and, in the case of the more extreme Kennedy CTs, of maintaining it for several decades afterwards to engage in what has been by any standards a pretty inefficient cover-up) but also the potential costs to be faced if the conspiracy is discovered, the assassination fails, etc. Criminals by and large tend to be rather poor at estimating their chances of being caught; murder and armed robbery have very high clear-up rates compared to, say, burglary of unoccupied premises. The continued existence of professional armed robbers would seem to indicate that they underestimate their chances of being caught or don't fully appreciate the comparative benefits of other lines of criminal activity. But though assassination conspirators are by definition criminals, we are to assume here that they are figures in the establishment, professional men in the intelligence, military and political communities, and so likely to be more rational in their outlook than ordinary street criminals. (Though this is a defeasible assumption, since the post-war history of western intelligence agencies has indicated a degree of internal paranoia sometimes bordering on the insane. A substantial part of British intelligence, for instance, spent almost two decades trying to prove that the then head of MI5 was a Soviet agent, a claim that appears to have no credibility at all.) If we assume that the Mafia played such a role in an assassination conspiracy, it is still plausible to believe that they would consider the risks of failure. In fact, we have some evidence to support this belief since, though organised crime is by and large a very brutal institution, in the US--as opposed to the very different conditions prevailing in Italy--it maintains a policy of not attacking dangerous judges or politicians.
When in 1935 the New York gangster Dutch Schultz proposed murdering Thomas Dewey, then a highly effective anti-crime prosecutor in New York and subsequently the Republican presidential candidate in 1944 and 1948, the response of the Mafia leadership was to have Schultz himself murdered rather than risk the troubles that Dewey's assassination would have brought down upon the heads of organised crime. An even more effective prosecutor, Rudolph Giuliani, remained unscathed throughout his career. Against the risks of being caught, we have to balance the costs of trying to achieve one's goal by some other less dramatic and probably more legal path. The plain fact is that there are a large number of legal and effective ways of changing a president's mind or moderating his behaviour. One can organise public campaigns, plant stories in the press, stimulate critical debate in Congress, assess or manipulate public opinion through polls, etc. When the health care industry in the US wanted to defeat the Clinton administration's reform proposals, for instance, it didn't opt for assassination but went instead for a highly successful campaign to turn Congress and substantial parts of public opinion against the proposals, which soon became dead in the water.

On the specific case of American withdrawal from Vietnam, all of the above applies. In the first place, following on from (E) above, it can plausibly be argued that Kennedy had no such intention. He certainly on occasion floated the idea, sounding out people around him, but this is something that politicians do all the time as part of the process of weighing policy options, and it shouldn't be taken as evidence that he had settled on such a policy. But to see Kennedy as seriously considering such an option is to see him as a figure considerably out of the Democratic mainstream. He would certainly have been aware of the effects that an Asian policy can have upon domestic matters; as a young congressman he would have been intimately aware of the effect that the fall of China to communism in 1949 had upon the last Democratic administration, severely weakening Harry Truman's effectiveness. For years afterwards the Democrats were regarded as the people who "lost China", despite the fact that there was nothing they could have done--short of an all-out war, like that occurring in Korea shortly afterwards, which couldn't possibly have been won without the use of nuclear weapons and all that entails. Kennedy's administration had a much stronger presence in South Vietnam, and it can reasonably be asked whether he would have wanted to run the risk of becoming the president who "lost Vietnam". He would also have been aware of the problem that ultimately faced Lyndon Johnson: that one could only maintain a forceful policy of domestic reform by mollifying Congress over matters of foreign policy. The price for Johnson's Great Society reforms was a continued adherence to a policy of involvement in Vietnam, long after Johnson himself--fully aware of this bind--doubted the wisdom of that policy. Kennedy's domestic reforms were already in legislative difficulties; to believe that he was prepared to withdraw from Vietnam, then, is to believe that he was effectively abandoning his domestic programmes. (That Kennedy was alleged to be considering such an action in his second term, if re-elected, doesn't affect this point. He would still have been a lame-duck president, and withdrawal would also have weakened the chances of any possible Democratic successor, something that would certainly have been of interest to other members of his party.)
It thus appears unlikely that Kennedy would have seriously considered withdrawing completely from Vietnam. But if he had, a number of options were available to opponents of such a policy. Firstly, as noted above, they could have encouraged opposition to such a policy in Congress and other important institutions, and among the American public. There was certainly a strongly sympathetic Republican and conservative Democrat presence in Congress to form the foundations of such an opposition, as well as among newspaper publishers and other media outlets. If Kennedy had underestimated the domestic problems that withdrawal would cause him, such a campaign would concentrate his mind upon them. And secondly, opponents could work to change Kennedy's mind. They could do this by controlling the information available to Kennedy and his advisers. In particular, military sources could manipulate the information flowing from Vietnam itself. (That Kennedy thought something like this was happening may be indicated by his insistence on sending civilian advisers to Vietnam to report back to him personally.) This policy worked well in Johnson's time: information about the trivial events in the Gulf of Tonkin in 1964 was manipulated to indicate a serious crisis, which forced Johnson into inserting a heavy military presence into South Vietnam in response. There is no reason to believe that such a policy would not have worked if Kennedy had still been in office. At the very least, it would be rational to adopt such a policy first, to try cheap, legal and probably efficient methods of bringing about one's goal before even contemplating such a dramatic, illegal and high-risk activity as assassination. (I omit here any consideration of the point that members of the American establishment might feel a moral revulsion at the idea of taking such action against their own president. Such a claim may well be true, but the argument from rationality does not require it.)

At bottom what we face here is what we might term Goodenough's Paradox of Conspiracies: the larger or more powerful an alleged conspiracy, the less need it has to conspire. A sufficiently large collection of members of the American political, intelligence and military establishment--the kind of conspiracy being alleged by Oliver Stone et al.--wouldn't need to engage in such nefarious activity, since they would have the kind of organisation, influence and access to information that could enable them to achieve their goal efficiently and legally. The inability noted in (F) to make adequate means-end decisions means that UCT proponents fail to grasp the force of this paradox.

(G) Evidence against a UCT is always evidence for. The tendency of modern CTs has been to move from conspiracies which try to keep their nefarious activities secret to more pro-active conspiracies which go to a good deal of trouble to manufacture evidence either that there was a different conspiracy or that there was no conspiracy at all. This is especially true of Kennedy assassination CTs. The epistemological attitude of Kennedy CTs has changed notably over the years. In the period 1964-76 the central claim of such theories was that the evidence collected by the Warren Commission and made public, when fairly assessed, did not support the official lone-assassin hypothesis but indicated the presence of two or more assassins and therefore a conspiracy. Public pressure in the aftermath of Watergate brought about a congressional investigation of the case.
In its 1979 report the House Select Committee eventually decided, almost solely on the basis of subsequently discredited acoustic evidence, that there had indeed been a conspiracy. But more importantly, the committee's independent panels of experts re-examined the key evidence, photographic, forensic and ballistic, and decided that it supported the Warren Commission's conclusion. This led to a sea-change in CTs from 1980 onwards. Given the preponderance of independently verified `best evidence' supporting the lone-assassin hypothesis, CT proponents began to argue that some or all of this evidence had been faked. This inevitably entailed a much larger conspiracy than had previously been hypothesised, one that not only assassinated the president but was also able to gain access to the evidence of the case afterwards in order to change it, suppress it or manufacture false evidence. They thus fell foul of (F) above. And since such CTs often had to produce a hypothesis supported by much weaker evidence--eye-witness testimony and so on--they tended to fall foul of (A), (B) and (C) as well.

One problem with such CTs was that they tended to disagree with one another over which evidence had been faked. Thus many theorists argued that the photographic and X-ray record of the presidential post mortem had been tampered with to conceal evidence of conspiracy, while Lifton (1980), as we saw, argued that the record was genuine but the body itself had been tampered with. Other theorists, e.g. Fetzer & co., argue that the X-rays indicate a conspiracy while the photographs do not, implying that the photographs have been tampered with. This latter, widespread belief introduces a new contradiction into the case, since it posits a conspiracy of tremendous power and organisation, able to gain access to the most important evidence of the case, yet one which is careless or stupid enough not to make sure that the evidence it leaves behind is fully consistent. (And, of course, it goes against the verdict of the House Committee's independent panel of distinguished forensic scientists and radiographers that the record of the autopsy was genuine, and consistent, both internally and with the hypothesis that Oswald alone was the assassin.)

Of particular interest here is the Zapruder movie film of the assassination. Stills from this film were originally published, in the Warren Report and in the press, to support the official lone-assassin hypothesis. When a bootleg copy of this film surfaced in the mid-1970s it was taken as significant evidence against the official version, and most CTs since then have relied upon one interpretation or another of this film for support. But it is now clear, especially since better copies of the film have become available, that the wounds Kennedy suffers in the film do not match those hypothesised by those CT proponents arguing for the falsity of the autopsy evidence. Some of these proponents therefore now claim to detect signs that the Zapruder film itself has been faked, and there has been much discussion about the chain of possession of this film in the days immediately after the assassination, to see if there is any possibility of its having been in the hands of someone who could have tampered with it. What is happening here is that, epistemologically, these CTs are devouring their own tails. If the evidence that was originally regarded as foundational for proving the existence of a conspiracy is now itself impeached, then this ought to undermine the original conspiracy case.
If no single piece of evidence in the case can be relied upon then we have no reason for believing anything at all, and the abyss of total scepticism yawns. Interestingly, there seems to be a complete lack of what I termed above `meta-evidence', that is, actual evidence that any of this evidence has been faked. Reasons for believing in this forgery hypothesis tend to fall into one of three groups. (i) It is claimed that some sign of forgery can be detected in the evidence itself. Since much of this evidence consists of poor-quality film and photographs taken at the assassination scene, these have turned into blurred Rorschach tests where just about anything can be seen if one squints long and hard enough. In the case of the autopsy X-rays, claims of apparent fakery tend to be made by people untrained in radiography and the specialised medical skill of reading such X-rays. (ii) Forgery is hypothesised to explain some alleged discrepancy between two pieces of evidence. Thus when discrepancies are claimed to exist between the autopsy photographs and the X-rays, it is alleged that one or the other (or both) have been tampered with. (iii) Forgery is hypothesised in order to explain away evidence that is clearly inconsistent with the proposed conspiracy hypothesis. An interesting case of the latter involves the so-called `backyard photos', photographs supposedly depicting Oswald standing in the yard of his house and posing with his rifle, pistol and various pieces of left-wing literature. Oswald himself was confronted with these by police officers after his arrest and claimed then that they had been faked--he had had some employment experience in the photographic trade and claimed to know how easily such pictures could be fabricated. And ever since then CT proponents have made the same claims. But one problem with such claims is that evidence seldom exists in a vacuum; it is interconnected with other evidence. Thus we have the sworn testimony of Oswald's wife that she took the photographs, the evidence of independent photographic experts that the pictures were taken with Oswald's camera, documentary evidence in his own handwriting that Oswald ordered the rifle in the photos and was the sole hirer of the PO box to which it was delivered, eyewitness evidence that Oswald possessed such a rifle and that one of these photos had been seen prior to the assassination, and so on. To achieve any kind of consistency with the forgery hypothesis, all of this evidence must itself be faked or perjured. Thus the forgery hypothesis inevitably ends up impeaching the credibility of such a range of evidence that a conspiracy of enormous proportions and efficiency is entailed, a conspiracy which runs into the problems raised in (F) above. These problems are so severe that the forgery hypothesis must be untenable without the existence of some credible meta-evidence, some proof that acts of forgery took place. Without such meta-evidence, all we have is an unjustifiable attempt to convert evidence against a conspiracy into evidence for it, merely on the grounds that the evidence doesn't fit the proposed CT--which is an example of (A) too.

(H) The fallacy of the spider's web. This form of reasoning has been central to many of the conspiratorial works about the JFK assassination: indeed, Duffy (1988) is entitled The Web! Scott (1977) was perhaps the first full-length work in this tradition.
It concentrates on drawing links between Oswald and the people he came into contact with, and the murky worlds of US intelligence, anti-Castro Cuban groups and organised crime, eventually linking in this fashion the world of Dealey Plaza with that of the Watergate building and the various secret activities of the Nixon administration. Such a project is indeed an interesting one, one which enlightens us considerably about the world of what Scott terms `parapolitics'. It is made especially easy by the fact that Oswald in his short life had at least tangential connections with a whole range of suspicious organisations, including the CIA, the KGB, pro- and anti-Castro Cuban groups, the US Communist Party and other leftist organisations, organised crime figures in New Orleans and Texas, and so on. And considerable webs can be drawn outwards, from Oswald's contacts to their contacts, and so on. As I say, such research is intrinsically interesting, but the fallacy occurs when it is used in support of a conspiracy theory. For all that it generates is suspicion, not evidence. That Oswald knew X or Y is evidence only that he might have had an opportunity to conspire with them, and doesn't support the proposition that he did. The claim is even weaker for people Oswald knew only at second or third or fourth hand. And some of these connections are much less impressive than authors claim: that Oswald knew people who ultimately knew Meyer Lansky becomes much less interesting when, as I noted in (D) above, Lansky is seen as a much more minor figure than the almost omnipotent organised crime kingpin he is often depicted as. Ultimately this fallacy depends upon a kind of confusion between quantity and quality, one which assumes that a sufficient quantity of suspicion inevitably metamorphoses into something like evidence. There is, as the old saying has it, no smoke without fire, and surely such an inordinate quantity of smoke could only have been produced by a fire of some magnitude. But thirty years of research haven't found much in the way of fire, only more smoke. Some of the more outrageous CTs here have been discredited--inasmuch as such CTs can ever be discredited--and the opening of KGB archives in recent years, together with access to living KGB personnel, has shown that Oswald's contacts with that organisation were almost certainly innocent. Not only is there no evidence that Oswald ever worked for the KGB, but those KGB officers who monitored Oswald closely during his two-year stay in the USSR were almost unanimously of the opinion that he was too unbalanced to be an employee of any intelligence organisation. But a problem with suspicion is that it cannot be easily dispelled. Since web-reasoning never makes clear exactly what the nature of Oswald's relationship with his various contacts was, it is that much harder to establish the claim that they were innocent. Ultimately, this can only be done negatively, by demonstrating the sheer unlikeliness of Oswald being able to conspire with anyone. The ample evidence of the contingency of Oswald's presence in the Book Depository on the day of the assassination argues strongly against his being part of a conspiracy to kill the president. Whether in fact he was a part of some other conspiracy, as some authors have argued, is an interesting question but one not directly relevant to assassination CTs.

(I) The classic logical fallacy of post hoc ergo propter hoc.
This applies to all those assassination CTs which seek to establish some motive for Kennedy's death from alleged events occurring afterwards. The most dramatic of these, as featured in Oliver Stone's film, is the argument from America's disastrous military campaign in Vietnam: US military involvement escalated after Kennedy's death; therefore it happened because of Kennedy's death; therefore Kennedy's death was brought about in order to cause an increased American presence in Vietnam. The frailty of this reasoning is obvious. As I pointed out in (F) above, such a view attributes to the proposed conspirators a significant inability to match ends and means rationally. In addition, there is no end to the possible effects that can be proposed here: ultimately, everything that is regarded as immoral about modern America can be traced back to the assassination. As I pointed out in a recent lecture, what motivates this view is "a desire for a justification of a view of America as essentially a benign and divinely inspired force in the world, a desire held in the face of American sin in Vietnam and elsewhere. There are plausible explanations for Vietnam and Watergate in terms of the domination of post-war foreign policy by cold-war simplicities, and the growth of executive power at the expense of legislative controls, and so on. They are, for those not interested in political science, dull explanations. Above all, they do not provide the emotional justification of conspiratorial explanations. To view Vietnam as the natural outcome of foreign policy objectives of the cold-war establishment, of a set of attitudes shared by both Republican and Democrat, above all to view it as the express wish of the American people--opinion polls registered majority support for the war until after the Tet disaster in 1968--is ultimately to view Vietnam as the legitimate and rational outcome of the American system at work. A quasi-religious view of America as `the city on the hill', the place where God will work out his purpose for men, cannot afford to entertain these flaws. Hence the appeal of an evil conspiracy on which these sins can be heaped."[14] Underlying this reasoning, then, is an emotional attachment to a view of America as fundamentally decent, combined with a remarkable ignorance about the real nature of politics. All of the features of America's history after 1963 that can be used as a possible motive for the assassination can be equally or better explained in terms of the ordinary workings of US politics. Indeed many of them, including the commitment to Vietnam and the aggressively murderous attitude towards Castro's Cuba, can be traced to Kennedy's White House and earlier. Though CT proponents often proclaim their commitment to realism and a hard-headed attitude towards such matters, it seems clear that their reliance upon this kind of reasoning is motivated more by emotion than by facts.

5. Conclusions

The accusation is often made that conspiracy theorists, particularly of the more extreme sort, are crazy, or immature, or ignorant. This response to UCTs may be at least partly true but does not make clear how CT thinking is going astray. What I have tried to show is how various weaknesses in arguing, assessing evidence, etc. interact to produce not just CTs but unwarranted CTs. A conspiratorial explanation can be the most reasonable explanation of a set of facts, but where we can identify the kinds of critical thinking problems I have outlined here, a CT becomes increasingly unwarranted.
Apart from these matters logical and epistemological, it seems to me that there is also an interesting psychological component to the generation of UCTs. Human beings possess an innate pattern-seeking mechanism, imposing order and explanation upon the data presented to us. But this mechanism can be too sensitive, and we start to see patterns where there are none, leading to a refusal to recognise the sheer amount of contingency and randomness in the world. Perhaps, as Keeley says, "the problem is a psychological one of not recognizing when to stop searching for hidden causes".[15] Seeing meaning where there is none leads to seeing evidence where there is none: a combination of evidential faults reinforces the view that our original story, our originally perceived pattern, is correct--a pernicious feedback loop which reinforces the belief of the UCT proponent in their own theory. And here criticism cannot help, for the criticism--and indeed the critic--become part of the pattern, part of the problem, part, indeed, of the conspiracy.[16] Conspiracy theories are valuable, like any other type of theory, for there are indeed conspiracies. We want to find a way to preserve all that is useful in the CT as a way of explaining the world while avoiding the UCT which at worst slides into paranoid nonsense. I agree with Keeley that there can be no exact dotted line along which Occam's Razor can be drawn here. Instead, we require a greater knowledge of the thinking processes which underlie CTs and the way in which they can offend against good standards of critical thinking. There is no way to defeat UCTs; the more entrenched they are, the more resistant to disproof they become. Like some malign virus of thinking, they possess the ability to turn their enemies' powers against them, making any supposedly neutral criticism of the CT itself part of the conspiracy. It is this sheer irrefutability that no doubt irritated Popper so much. If we cannot defeat UCTs through refutation then perhaps the best we can do is inoculate against them by a better development of critical thinking skills. These ought not to be developed in isolation--it is a worrying feature of this field that many otherwise critical thinkers become prone to conspiracy theorising when they move outside of their own speciality--but developed as an essential prerequisite for doing well in any field of intellectual endeavour. Keeley concludes that "there is nothing straightforwardly analytic that allows us to distinguish between good and bad conspiracy theories... The best we can do is track the evaluation of given theories over time and come to some consensus as to when belief in the theory entails more scepticism than we can stomach."[17] Discovering whether or to what extent a particular CT adheres to reasonable standards of critical thinking practice gives us a better measure of its likely acceptability than mere gastric response, while offering the possibility of being able to educate at least some people against their appeal, as potential consumers or creators of unwarranted conspiracy theories.

BIBLIOGRAPHY

Blakey, G. Robert & Billings, Richard (1981) Fatal Hour - The Plot to Kill the President, N.Y.: Berkley Publishing
Cutler, Robert (1975) The Umbrella Man, Manchester, Mass.: Cutler Designs
Donovan, Robert J. (1964) The Assassins, N.Y.: Harper Books
Duffy, James R. (1988) The Web, Gloucester: Ian Walton Publishing
Eddowes, Michael (1977) The Oswald File, N.Y.: Ace Books
Fetzer, James (ed.) (1998) Assassination Science, Chicago, IL: Open Court Publishing
Fisher, Alec & Scriven, Michael (1997) Critical Thinking - Its Definition and Assessment, Norwich: Centre for Critical Thinking, U.E.A.
Hofstadter, Richard (1964) The Paranoid Style in American Politics, London: Jonathan Cape
Hume, David (1748) Enquiry Concerning Human Understanding, ed. P.H. Nidditch 1975, Oxford: Oxford University Press
Keeley, Brian L. (1999) `Of Conspiracy Theories', Journal of Philosophy 96, 109-26
Lacey, Robert (1991) Little Man, London: Little Brown
Lifton, David (1980) Best Evidence, London: Macmillan; 2nd ed. 1988, N.Y.: Carroll & Graf
Norris, S.P. & King, R. (1983) Test on Appraising Observations, St John's, Newfoundland: Memorial University of Newfoundland
Norris, S.P. & King, R. (1984) `Observational ability: Determining and extending its presence', Informal Logic 6, 3-9
Oglesby, Carl (1976) The Yankee-Cowboy War, 2nd ed. 1977, N.Y.: Berkley Publishing
Pigden, Charles (1995) `Popper revisited, or What Is Wrong With Conspiracy Theories?', Philosophy of the Social Sciences 25, 3-34
Popkin, Richard H. (1966) The Second Oswald, London: Sphere Books
Popper, Karl (1945) The Open Society and Its Enemies, 5th ed. 1966, London: Routledge
Posner, Gerald (1993) Case Closed, N.Y.: Random House
Scheim, David E. (1983) Contract On America, Silver Spring, MD: Argyle Press
Scott, Peter Dale (1977) Crime and Cover-Up, Berkeley, CA: Westworks
Stone, Jim (1991) Conspiracy of One, Fort Worth, TX: Summit Group
Stone, Oliver & Sklar, Zachary (1992) JFK - The Movie, New York: Applause Books
Thompson, Josiah (1967) Six Seconds in Dallas, 2nd ed. 1976, N.Y.: Berkley Publishing
Wilson, Robert Anton (1989) `Beyond True and False', in Schultz, T. (ed.) The Fringes of Reason, New York: Harmony

______________________

[1] And this even though professional philosophers may themselves engage in conspiracy theorising! See, for instance, Popkin (1966), Thompson (1967) or Fetzer (1998) for examples of philosophers writing in support of conspiracy theories concerning the JFK assassination.
[2] See Donovan (1964) for more on this.
[3] Historians, it seems, still disagree about whether or to what extent Princip's group was being manipulated.
[4] And the most extreme UCT I know manages to combine this with both ufology and satanism CTs, in David Icke's ultimate paranoid fantasy which explains every significant event of the last two millennia in terms of the sinister activities of historical figures who share the blood-line of reptilian aliens who manipulate us for their purposes, using Jews, freemasons, etc. as their fronts. Those interested in Mr. Icke's more specific allegations (which I omit here at least partly out of a healthy regard for Britain's libel laws) are directed to his website, http://www.davidicke.com/.
[5] See Norris & King (1983 and 1984) for full details of and support for these principles.
[6] I don't propose to argue for my position here. Interested readers are pointed in the direction of Posner (1993), a thorough if somewhat contentious anti-conspiratorial work whose fame has perhaps eclipsed the less dogmatic but equally anti-conspiratorial Stone (1991).
[7] One of the first of which, from the charmingly palindromic Revilo P. Oliver, is cited by Hofstadter. Oliver, a member of the John Birch Society, which had excoriated Kennedy as a tool of the Communists throughout his presidency, asserted that it was international Communism which had murdered Kennedy in order to make way for a more efficient tool! Right-wing theories blaming either Fidel Castro or Nikita Khrushchev continued at least into the 1980s: see, for instance, Eddowes (1977).
[8] And probably not possible! The sheer complexity of the assassination CT community and the number of different permutations of alleged assassins have grown enormously, especially over the last twenty years. In particular, the number of avowedly political CTs is hard to determine since they fade into other areas of CT, in particular those dealing with the influence of organised crime and those dealing with an alleged UFO cover-up, not to mention those even more extreme CTs which link the assassination to broader conspiracies of international freemasonry etc.
[9] See not only the movie but also Stone & Sklar (1992), a heavily annotated version of the film's script which also includes a good deal of the published debate about the film, for and against.
[10] Lifton 1980: 132.
[11] Norris & King (1983), quoted in Fisher & Scriven (1997).
[12] For a remarkable instance of the exaggeration of the power of organised crime in the US and its alleged role in Kennedy's death see Scheim (1983) or, perhaps more worryingly, Blakey & Billings (1981). I say `more worryingly' because Blakey was Chief Counsel for the congressional investigation into Kennedy's death which reported in 1979 and so presumably is heavily responsible for the direction that investigation took.
[13] This view of Lansky is widespread throughout the Kennedy literature. See, for instance, Peter Dale Scott's short (1977) work, which goes into Lansky's alleged connections in great detail.
[14] From "(Dis)Solving the Kennedy Assassination", presented to the Conspiracy Culture Conference at King Alfred's College, Winchester, in July 1998.
[15] Keeley 1999: 126.
[16] Anyone who doubts this should try to argue for Oswald as lone assassin on an internet discussion group! It is not just that one is regarded as wrong or naive or ignorant. One soon becomes accused of sinister motives, of being a witting or unwitting agent of the on-going disinformation exercise to conceal the truth. (I understand that much the same is true of discussions in ufology fora.)
[17] Keeley 1999: 126.

From checker at panix.com Sun Dec 11 03:16:01 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 10 Dec 2005 22:16:01 -0500 (EST)
Subject: [Paleopsych] Lamar Waldron, with Thom Hartmann: Ultimate Sacrifice
Message-ID: 

Lamar Waldron, with Thom Hartmann: Ultimate Sacrifice: John and Robert Kennedy, the Plan for a Coup in Cuba, and the Murder of JFK (excerpts)

A BUZZFLASH GUEST CONTRIBUTION by Thom Hartmann and Lamar Waldron

Thom Hartmann, a regular BuzzFlash contributor, is coauthor of the newly released book Ultimate Sacrifice, which explores the theory that Kennedy's death leads back to the mob. Carroll & Graf, the publishers of Ultimate Sacrifice, have granted BuzzFlash permission to post the authors' introduction to the book, including end notes, as an aid to those wishing to further explore specific details. BuzzFlash is not in the business of solving the Kennedy mystery, and in fact we doubt that it will ever be irrefutably solved, even once all related documents become declassified.
This text, however, makes a serious contribution, as our review (http://www.buzzflash.com/reviews/05/11/rev05124.html) has indicated. Ultimate Sacrifice: John and Robert Kennedy, the Plan for a Coup in Cuba, and the Murder of JFK is available as a BuzzFlash premium (http://www.buzzflash.com/premiums/05/11/pre05167.html).

* * *

FOR MORE THAN FOUR DECADES since his death in 1963, John F. Kennedy has captured the imagination of the American people. Myth and conjecture have swirled around JFK, his political legacy, his family, and its multiple tragedies. Admirers and critics have examined every detail of his life and work, gradually lifting one veil after another to shed new light on his presidency, from his maneuvering behind the scenes during the Cuban Missile Crisis to his personal weaknesses. Nonetheless, the secret with the most profound and catastrophic effect on America has remained hidden. Ultimate Sacrifice reveals this secret for the first time, transforming the history of the Kennedy years and providing the missing piece to one of the great puzzles of post-war America: the true circumstances behind JFK's assassination on November 22, 1963. Seventeen years ago, Thom Hartmann and I began writing a book about the battles of President Kennedy and his brother, Attorney General Robert F. Kennedy, against the Mafia and Fidel Castro. Drawing on new information and exclusive interviews with those who worked with the Kennedys, in addition to thousands of recently declassified files, we discovered that John and Robert Kennedy had devised and were executing a secret plan to overthrow Fidel Castro on December 1, 1963. "The Plan for a Coup in Cuba" (as it was titled in a memo for the Joint Chiefs of Staff) would include a "palace coup" to eliminate Castro, allowing a new Cuban "Provisional Government" to step into the power vacuum, and would be supported by a "full-scale invasion" of Cuba by the US military, if necessary.[1]
Even though the Kennedys' coup plan never came to fruition, three powerful Mafia dons?Carlos Marcello, Santo Trafficante, and Johnny Rosselli?learned of the plan and realized that the government would go to any lengths to avoid revealing it to the public. With that knowledge, the three mob bosses were able to assassinate JFK in a way that forced the truth to be buried for over forty years. Marcello, Trafficante, and Rosselli undertook this extraordinary act of vengeance in order to halt the Kennedy administration's unrelenting prosecution of them and their allies. The Kennedy Justice Department had vigorously pursued Marcello, even subjecting him to a brief, nightmarish deportation. Once he returned, Marcello hated the Kennedy brothers with a deep and vengeful passion. The two other Mafia bosses suffered similar pursuit, and eventually Marcello, Trafficante, and Rosselli decided that their only way to avoid prison or deportation was to kill JFK. Our investigation has produced clear evidence that the crime bosses arranged the assassination so that any thorough investigation would expose the Kennedys' C-Day coup plan. They were confident that any such exposure could push America to the brink of war with Cuba and the Soviet Union, meaning that they could assassinate JFK with relative impunity. They did not carry out the act themselves, but used trusted associates and unwitting proxies. The most widely known are Jack Ruby and Lee Harvey Oswald, who were both in contact with associates of Marcello, Trafficante, and Rosselli in the months before the assassination. Reports in government files show that Oswald and Ruby knew about parts of the Kennedys' plan and even discussed it with others. Robert Kennedy told several close associates that Carlos Marcello was behind JFK's death, but he couldn't reveal what he knew to the public or to the Warren Commission without C-Day being uncovered. As this book shows, RFK and other key government officials worried that exposure of the plan could trigger another nuclear confrontation with the Soviets, just a year after the Cuban Missile Crisis. None of the seven governmental committees that investigated aspects of the assassination, including the Warren Commission, were officially told about the Kennedys' C-Day plan.[2] However, over the decades, each successive committee came increasingly close to discovering both the plan and the associates of Marcello who assassinated JFK. We were able to piece together the underlying story by building on the work of those committees, former government investigators, and revelations in four million documents that were declassified in the 1990s. Key to our efforts were new and often exclusive interviews with many Kennedy insiders who worked on the coup plan or dealt with its consequences, some of whom revealed aspects of JFK's assassination and the coup plan for the first time. They include Secretary of State Dean Rusk, Press Secretary Pierre Salinger, and the Kennedys' top Cuban exile aide, Enrique "Harry" Ruiz-Williams. Their inside information allows us to tell the story, even though a 1998 report about the JFK Assassinations Records Review Board confirms that "well over a million CIA records" related to JFK's murder have not yet been released.[3] NBC News' Tom Brokaw confirmed on his September 29, 1998 broadcast that "millions" of pages remain secret and won't be released until the year 2017.[4] By necessity, Ultimate Sacrifice examines this complex story from several angles. 
Part One documents every aspect of the Kennedys' C-Day plan and how it developed, beginning with the Cuban Missile Crisis. Though it is widely believed that JFK agreed not to invade Cuba in order to end the Cuban Missile Crisis in the fall of 1962, Secretary of State Rusk told us that the "no-invasion" pledge was conditional upon Castro's agreement to on-site UN inspections for nuclear weapons of mass destruction (a term that JFK first used). Historians at the National Security Archive confirmed that because Castro refused such inspections, the pledge against invasion never went into effect.[5] Consequently, in the spring of 1963, John and Robert Kennedy started laying the groundwork for a coup against Fidel Castro that would eventually be set for December 1, 1963. Robert Kennedy put the invasion under the control of the Defense Department because of the CIA's handling of 1961's Bay of Pigs disaster. The "Plan for a Coup in Cuba," as written by JFK's Secretary of the Army Cyrus Vance with the help of the State Department and the CIA, called for the coup leader to "neutralize" Cuban leader "Fidel Castro and . . . [his brother] Raul" in a "palace coup." Then, the coup leader would "declare martial law" and "proclaim a Provisional Government" that would include previously "selected Cuban exile leaders" who would enter from their bases in Latin America.[6] Then, at the invitation of the new government, after "publicly announcing US intent to support the Provisional Government, the US would initiate overt logistical and air support to the insurgents" including destroying "those air defenses which might endanger the air movement of US troops into the area." After the "initial air attacks" would come "the rapid, incremental introduction of balanced forces, to include full-scale invasion" if necessary. The first US military forces into Cuba would be a multiracial group of "US military-trained free Cubans," all veterans of the Bay of Pigs.[7] Upon presidential authorization, the US would "recognize [the] Provisional Government . . . warn [the] Soviets not to intervene" and "assist the Provisional Government in preparing for . . . free elections."[8] This "palace coup" would be led by one of Castro's inner circle, himself a well-known revolutionary hero.[9] This man, the coup leader, would cause Castro's death, but without taking the credit or blame for doing so. The coup leader would be part of the new Provisional Government in Cuba, along with a select group of Cuban exiles--approved by the Kennedys--who ranged from conservative to progressive.[10] The identity of the coup leader is known to the authors, and has been confirmed by Kennedy associates and declassified documents. However, US national security laws may prevent the direct disclosure of past US intelligence assets even long after their deaths, so we will not directly name the coup leader in this book. Since we have no desire to violate national security laws or endanger US intelligence assets, we will only disclose official information that has been declassified or is available in the historical record. We have uncovered historical accounts of Cuban leaders that have been long overlooked by the public or are in newly released government files.
For example, a formerly secret cable sent to the CIA director on December 10, 1963--just nine days after the original date for the C-Day coup--reports "Che Guevara was alleged to be under house arrest for plotting to overthrow Castro," according to "a Western diplomat."[11] Newly declassified documents and other research cast Che's growing disenchantment with Fidel Castro in a new light. These revelations include Che's secret meetings with three people close to the Kennedys, followed by yet another house arrest after a C-Day exile leader was captured in Cuba. The Kennedys did not see C-Day as an assassination operation, but rather as an effort to help Cubans overthrow a Cuban dictator. A June 1963 CIA memo from one of Robert Kennedy's Cuban subcommittees of the National Security Council explains the Kennedy policy as "Cubans inside Cuba and outside Cuba, working" together to free their own country.[12] Nor was C-Day an attempt to install another US-backed dictator in Cuba, like the corrupt Batista regime that had been overthrown by Castro and many others on January 1, 1959. The Kennedys' goal in 1963 was simply a free and democratic Cuba. As several Kennedy associates told us, the only man who knew everything about C-Day was Robert Kennedy, the plan's guiding force.[13] Secretary of the Army Cyrus Vance was one of the few military leaders who knew the full scope of C-Day while the plan was active. The others were generals the Kennedys especially trusted, including Chairman of the Joint Chiefs of Staff Maxwell Taylor and General Joseph Carroll, head of the Defense Intelligence Agency (DIA). High CIA officials involved in C-Day included CIA Director John McCone, Deputy Director for Plans Richard Helms, Desmond FitzGerald, and key field operatives like David Morales and David Atlee Phillips. Most high US officials didn't know about C-Day prior to JFK's assassination. There is no evidence that Lyndon Johnson was told anything about C-Day prior to JFK's death. Likewise, no evidence exists showing that Secretary of Defense Robert McNamara knew about C-Day before JFK's assassination. Dean Rusk told us he did not learn about the actual C-Day plan until soon after JFK's death.[14] There is no evidence that Edward Kennedy was told about the plan. Documents and sources indicate that FBI Director J. Edgar Hoover had no active role in C-Day, although he may have learned a great deal about it from field reports. The Secret Service was even less informed about C-Day, which no doubt hindered its actions when serious threats seemingly related to Cuba surfaced against JFK in the weeks before C-Day. However, officials ranging from Dean Rusk to hawkish Air Force Chief of Staff General Curtis LeMay were needed for the planning of C-Day, so the Kennedys used a shrewd technique that let those officials participate in planning for C-Day while keeping them in the dark about the plan itself. Rusk, LeMay, and others were simply told that all the planning was needed "just in case" a coup happened in Cuba. Officials like Rusk and LeMay were generally aware of other CIA efforts against Castro in the fall of 1963, such as the CIA's AMTRUNK operation, which looked for disaffected Cuban military officers. Some US officials also knew about a CIA asset named Rolando Cubela, a disgruntled mid-level Cuban official whom the CIA code-named AMLASH. However, unlike AMWORLD--the CIA's portion of C-Day--neither of those operations reached high in the Cuban government or was close to producing results in the fall of 1963.
The Kennedys' "just in case" technique allowed extensive planning to be done for all facets of the military invasion and the post-coup Provisional Government without revealing C-Day or the coup leader's identity to most of those doing the planning. If the C-Day coup had actually occurred, Rusk and the other officials not privy to the full plan would nonetheless have been fully prepared for its aftermath, with plans they had already approved and helped create.[15] While such tightly compartmentalized secrecy kept C-Day from becoming widely known within the government and protected C-Day from public exposure, it also contributed to JFK's death. In 1963, the public would have been shocked to learn that two months before JFK was shot in Dallas, US officials under the direction of Robert Kennedy began making contingency plans to deal with the "assassination of American officials."[16] In the event of an assassination (expected to happen only outside the US), these contingency plans would have mandated certain security measures, and, as this book documents, such principles would be applied to and responsible for much of the secrecy surrounding the JFK assassination. Robert Kennedy and the others making the contingency plans were concerned only about possible retaliation by Castro for C-Day. They failed to consider the threat from others the Attorney General had targeted, especially Mafia bosses Carlos Marcello, Santo Trafficante, and Johnny Rosselli. The Kennedys and key aides had gone to great lengths to keep the Mafia out of C-Day. The CIA's earlier efforts with the Mafia to assassinate Castro?which began in 1959 under Vice President Richard Nixon?had complicated the Kennedys' intense prosecution of the Mafia. Without telling the Kennedys, the CIA was continuing to work with the Mafia on plots against Castro in the fall of 1963, which helped to allow associates of Marcello, Trafficante, and Rosselli to infiltrate the plans for C-Day. In Part II, we will show how?and why?mob bosses Carlos Marcello, Santo Trafficante, and Johnny Rosselli worked together to penetrate the Kennedys' C-Day plan and assassinate JFK. In 1963, Carlos Marcello was America's most ruthless and secretive Mafia boss, completely free of FBI wiretaps. From his New Orleans headquarters, he ruled a territory that included Louisiana, Mississippi, and parts of Texas and Alabama.[17] Marcello's Mafia family was the oldest in North America, able to stage major "hits" without needing the approval of the national Mafia organization, and his associates had a long history of targeting government officials who got in their way.[18] The Kennedys had pursued Marcello since 1959, even before JFK was elected president. Recently declassified FBI documents confirm that just a few years before his own death, Carlos Marcello confessed on three occasions to informants that he had had JFK killed.[19] Tampa godfather Santo Trafficante was Marcello's closest Mafia ally. Trafficante's territory included much of Florida, as well as parts of Alabama, and his organization provided a major conduit for the French Connection heroin trade, whose primary routes included New York City, Texas, New Orleans, Georgia's Fort Benning, Montreal, Chicago, and Mexico City. 
The Internet magazine Salon noted that Trafficante "had been driven out of the lucrative Havana casino business by Castro and" that he "had been recruited in the CIA" plots with the Mafia to kill Castro months before JFK became president.[20] Like Marcello, Trafficante later confessed his involvement in JFK's assassination.[21] Johnny Rosselli, according to his biographers, also claimed to know what had really happened in Dallas, and he sometimes worked with both Trafficante and Marcello. Rosselli was the Chicago Mafia's point man in Hollywood and Las Vegas, and his close friends included Frank Sinatra and Marilyn Monroe. Internal CIA reports admit that the agency recruited Rosselli and Trafficante for its own plots to assassinate Castro prior to JFK's election in 1960. Unknown to the Kennedys, Rosselli was continuing in that role in the fall of 1963.[22] Jack Ruby met with Rosselli just weeks before JFK's assassination, had met much earlier with Santo Trafficante, and had numerous ties to Carlos Marcello, according to government investigators.[23] Ultimate Sacrifice reveals new information from Pierre Salinger--a member of the Kennedys' first organized crime investigation team--that just weeks before Jack Ruby shot Oswald, Ruby received a large payoff in Chicago from someone working for a close ally of Marcello and Trafficante.[24] Ruby also made surprising comments that wound up in the Warren Commission's files but not in its report. Just weeks after Ruby's arrest for shooting Oswald in 1963, an FBI document quotes Ruby as talking about "an invasion of Cuba" that "was being sponsored by the United States Government."[25] Ultimate Sacrifice shows how Carlos Marcello, Santo Trafficante, and Johnny Rosselli were able to keep their roles in JFK's death from being exposed because they had infiltrated C-Day. Long-secret government files confirm that ten men who worked for the mob bosses had learned about C-Day. Five of those ten actually worked on C-Day, giving the Mafia chieftains a pipeline directly into C-Day and the plans for keeping it secret. Less than a dozen trusted associates of the mob bosses were knowingly involved in the hit on JFK. Though Mafia hits against officials are rare, the Mafia families of Carlos Marcello, Santo Trafficante, and Johnny Rosselli had killed officials who threatened their survival. Nine years earlier, Santo Trafficante's organization had helped to assassinate the newly elected Attorney General of Alabama because he was preparing to shut down Trafficante's operations in notoriously corrupt Phenix City.[26] In 1957, associates of Marcello and Rosselli assassinated the president of Guatemala, a murder that was quickly blamed on a seemingly lone Communist patsy who, like Lee Harvey Oswald, was then killed before he could stand trial. Just nine months before JFK's murder in November 1963, Rosselli's Chicago Mafia family had successfully assassinated a Chicago city official, using an associate of Jack Ruby.[27] The House Select Committee on Assassinations (HSCA) found in 1979 that Marcello and Trafficante had had the means and the motive to assassinate JFK. Before the HSCA could question Rosselli, he "was kidnapped, murdered, dismembered, and sunk" in the ocean in an oil drum that later surfaced.[28] But the CIA didn't tell the HSCA about AMWORLD or other aspects of C-Day, so the HSCA couldn't uncover exactly how Marcello and Trafficante did it or Rosselli's role in working with them.
Newly declassified files, many unavailable to the HSCA, show that Marcello, Trafficante, and Rosselli penetrated C-Day and used parts of it as cover to assassinate JFK. By using the secrecy surrounding C-Day, the mob bosses could target JFK not only in Dallas but also in two earlier attempts, one of which is revealed in this book for the first time. They first attempted to kill JFK in Chicago on November 1, 1963, and then in Tampa on November 18, before succeeding in Dallas on November 22. Since Chicago was home to Rosselli's Mafia family, Tampa was Trafficante's headquarters, and Dallas was in Marcello's territory, the risk was shared among the three bosses. While the Chicago attempt--thwarted when JFK canceled his motorcade at the last minute--was briefly noted by Congressional investigators in the 1970s, the attempt to assassinate JFK during his long Tampa motorcade has never been disclosed in any book or government report. It was withheld from the Warren Commission and all later investigations, even though the Tampa plot was uncovered by authorities and revealed to JFK before he began his motorcade--which he continued, despite the danger. With C-Day set to begin the following week, JFK planned to give a speech in Miami just hours after his trip to Tampa, a speech that included a message written to the C-Day coup leader in Cuba that promised him JFK's personal support.[29] Canceling the Tampa motorcade simply wasn't an option for JFK or Bobby, even though the motorcade would reportedly be the longest of JFK's presidency, slowly making its way past teeming crowds and many unsecured buildings. Our interviews with officials from Florida law enforcement and the Secret Service, supported by newspaper files and declassified CIA and FBI documents, reveal that the Tampa attempt to kill JFK shares a dozen striking parallels with what happened in Dallas four days later. They include a young male suspect who was a former defector with links to both the Fair Play for Cuba Committee and Russia, just like Lee Harvey Oswald. As in Dallas, JFK's Tampa motorcade also included a hard left turn in front of a tall red-brick building with many unguarded windows--a key site that officials feared might be used by snipers to target JFK. John and Robert Kennedy kept the Tampa assassination attempt secret at the time, and Robert Kennedy kept it secret until his death in 1968. The Secret Service, FBI, CIA, and other agencies have similarly maintained silence about it, as well as keeping secret other information about the assassination that might have exposed the Kennedys' C-Day coup plan. In November 1994, the authors first informed the JFK Assassination Records Review Board about the Tampa assassination attempt. The Review Board had been created by Congress in 1992 and appointed by President Clinton soon after, to release all the JFK records. But just weeks after we told the Board about the Tampa attempt, the Secret Service destroyed its records for that time period. That does not implicate the Secret Service or the FBI or the CIA (as an organization) in JFK's assassination. As the book shows, officials were forced into such cover-ups because the Mafia bosses had tied the potentially destabilizing C-Day plan to their attempts to assassinate JFK in Chicago, Tampa, and finally Dallas. Within hours of JFK's assassination, Robert Kennedy suspected that someone linked to Marcello and Trafficante, and to C-Day, was involved in his brother's death.
The afternoon of JFK's death, Robert Kennedy revealed his suspicion to Pulitzer Prize-winning reporter Haynes Johnson, who was meeting with C-Day exile leader Enrique "Harry" Ruiz-Williams.[30] Evan Thomas, author of a biography of Robert Kennedy and a Newsweek editor, said "Robert Kennedy had a fear that he had somehow gotten his own brother killed" and that his "attempts to prosecute the mob and to kill Castro had backfired in some terrible way."[31] It has been publicly known only since 1992 that Robert Kennedy told a few close advisers that New Orleans mob boss Marcello was behind JFK's assassination, as we confirmed with Kennedy aide Richard Goodwin. Salon received additional confirmation of Mafia involvement from Robert Kennedy's former press secretary, Frank Mankiewicz, who conducted a secret investigation of JFK's death for Robert.[32] Goodwin and Mankiewicz are just two of over a dozen associates of Robert Kennedy who either heard his belief in a conspiracy in his brother's death or who believe in a conspiracy themselves. Among them are Justice Department prosecutors Ronald Goldfarb, Robert Blakey, and Walter Sheridan, as well as Robert's first biographer, Jack Newfield. Others include JFK's CIA Director John McCone; the President's personal physician at his autopsy, Admiral George Burkley; and JFK aides Dave Powers, Kenneth O'Donnell, and Arthur Schlesinger, Jr.[33] This book adds to that list Pierre Salinger and Robert's top Cuban exile aide "Harry" Ruiz-Williams, plus another Kennedy aide who worked on C-Day. Most of those associates of Robert Kennedy point to a conspiracy involving Carlos Marcello or his close allies. History has proven the Mafia bosses correct in their calculation that C-Day was such a powerful weapon. JFK's death threw the whole US government into turmoil, but the intelligence agencies were especially frantic: Their numerous and extensive anti-Castro plots were so secret that they needed to be kept not only from the Congress and the public, but also from the Warren Commission. Although many Warren Commission findings were discredited by later government investigators, Evan Thomas recently told ABC News that the commission achieved its real purpose. He said that after JFK's assassination, "the most important thing the United States government wanted to do was reassure the public that there was not some plot, not some Russian attack, not some Cuban attack." As a result, Thomas concluded, "the number one goal throughout the upper levels of the government was to calm that fear, and bring a sense of reassurance that this really was the work of a lone gunman."[34] President Lyndon Johnson and the Warren Commission were also under tremendous time pressure: With Johnson facing an election in less than a year, the Commission had to assemble a staff, review and take testimony, and issue its final report just ten months after JFK's death. "There was a cover-up," Evan Thomas confirmed to ABC News, explaining that in the Warren Commission's "haste to reassure everybody, they created an environment that was sure to come around and bite them." He emphasized that Earl Warren, Lyndon B. Johnson, J. Edgar Hoover, and others were not covering up a plot to kill JFK, as some have speculated.
Instead, they covered up "for their own internal bureaucratic reasons--because Hoover wanted to keep his job, and because Bobby Kennedy didn't want to be embarrassed, or the CIA didn't want to have the public know they were trying to kill somebody," like Fidel Castro.[35] It was not until 2004 that Joseph Califano, assistant to Secretary of the Army Cyrus Vance in 1963, briefly hinted at the sensitive operation that Robert Kennedy had managed and had withheld from the Warren Commission. Califano wrote: "No one on the Warren Commission . . . talked to me or (so far as I know) anyone else involved in the covert attacks on Castro. . . . The Commission was not informed of any of the efforts of Desmond FitzGerald, the CIA and Robert Kennedy to eliminate Castro and stage a coup" in the fall of 1963.[36] Since Robert Kennedy knew more about C-Day than anyone else, his death in 1968 helped to ensure that C-Day stayed secret from all later government investigations into the assassination. The anti-Castro operations of the 1960s that were hidden from the Warren Commission only started to be uncovered by the investigations spawned by Watergate in the 1970s: the Senate Watergate Committee (which took secret testimony from Johnny Rosselli), the Rockefeller Commission, the Pike Committee, and the Church Committee.[37] More details about those CIA plots were uncovered by the House Select Committee on Assassinations in the late 1970s, though many of its discoveries weren't declassified until the late 1990s by the Assassination Records Review Board (ARRB). C-Day, far more sensitive and secret than any of those anti-Castro plots, was never officially disclosed to any of those seven government committees. The military nature of C-Day also helps to explain why it has escaped the efforts of historians and Congressional investigators for forty years. The C-Day coup plan approved by Joint Chiefs Chairman General Maxwell Taylor was understandably classified TOP SECRET when it was created in 1963. But twenty-six years later, the Joint Chiefs reviewed the coup plan documents and decided that they should still remain TOP SECRET.[38] The documents might have remained officially secret for additional decades, or forever, if not for the JFK Assassination Records Review Board, created by Congress in the wake of the furor surrounding the film JFK. After efforts by the authors and others, the Review Board finally located and declassified some of the C-Day files just a few years ago. However, someone who worked with the Review Board confirmed to a highly respected Congressional watchdog group, OMB Watch, that "well over one million CIA records" related to JFK's assassination have not yet been released.[39] The C-Day documents that have been released show just the tip of the iceberg, often filled with the names of CIA assets and operations whose files have never been released, even to Congressional investigators. Part Three of Ultimate Sacrifice shows how C-Day affected history and continues to impact American lives. It provides a new perspective on LBJ's operations against Cuba, and how they impacted the war in Vietnam. Ultimate Sacrifice casts Watergate in a whole new light since it involved a dozen people linked to various aspects of C-Day.
On a more personal level, Ultimate Sacrifice also solves the tragedy of Abraham Bolden, the first black Presidential Secret Service agent, who was framed by the Mafia and sent to prison when he tried to tell the Warren Commission about the Chicago and Tampa assassination attempts against JFK. His career and life ruined, Bolden has spent the last forty years seeking a pardon.[40] Now, new information from the CIA and other sources shows that the man behind Bolden's framing was an associate of Rosselli and Trafficante, someone linked to JFK's assassination who had penetrated C-Day while working for the Mafia. JFK made the ultimate sacrifice in his quest to bring democracy to Cuba using C-Day. Instead of staying safely in the White House, he put his own life on the line, first in Tampa and finally in Dallas. It has long been known that JFK talked about his own assassination the morning before he was shot. He commented to an aide about how easy it would be for someone to shoot him from a building with a high-powered rifle. Just hours earlier, JFK had demonstrated to his wife Jackie how easily someone could have shot him with a pistol.[41] We now know the reason for JFK's comments, since he knew that assailants from Chicago and Tampa were still at large, and that he himself was getting ready to stage a coup against Castro the following week. John Kennedy once said: "A man does what he must--in spite of personal consequences, in spite of obstacles and dangers." He didn't just mouth the slogan that Americans should be willing to "pay any price" and "bear any burden"--he paid the highest price, making the ultimate sacrifice a leader can make for his country. JFK had always been obsessed with courage, from PT-109 to Profiles in Courage to his steely resolve during the Cuban Missile Crisis.[42] So it's not surprising that he died as he had lived, demonstrating the courage that had obsessed him all his life, and making the ultimate sacrifice for his country. Until 1988, we had no more interest in the JFK assassination than the average person--but the twenty-fifth anniversary of the JFK assassination spawned numerous books and articles, many of which focused on evidence that a conspiracy was involved in JFK's death. The only question seemed to be: Which conspiracy? Candidates included anti-Castro forces, elements of the CIA, and the Mafia. We started to look more closely at what had already been published about the assassination. We felt that a book focused solely on Bobby Kennedy's battles against the Mafia and against Castro in 1963 might also yield some interesting perspectives on the JFK assassination. We expected the research to require reading a dozen books, looking at a few hundred documents, and trying to interview some Kennedy associates--something that might take a year at most. That was seventeen years, dozens of sources, hundreds of books, and hundreds of thousands of documents ago. We started by looking at the work of the six government commissions (the Review Board had not yet been created) and focused on areas that previous writers hadn't been able to fully explore. When we compiled all that data into a massive database, we realized that their findings weren't mutually exclusive at all--in fact, when their data was grouped together, it filled in gaps and told a coherent story. Putting all their data together didn't make the conspiracy bigger, as one might have expected.
It actually made it smaller, since it became clear--for example--that one conspirator could be a Cuban exile and a CIA asset and also work for the Mafia. However, we were stymied because much key information was still classified, much of it involving anti-Castro operations and associates of godfathers such as Carlos Marcello. We needed to find someone who knew the information and would talk, or some type of document the government couldn't classify top secret--a newspaper, for instance. Our first break came the day we discovered an article in the Washington Post dated October 17, 1989, about the tragic death of Pepe San Roman, the Cuban exile who had led the Kennedys' ill-fated Bay of Pigs invasion. One sentence caught our attention: It said that in 1963, Pepe's brother had been "sent by Robert Kennedy to Central American countries to seek aid for a second invasion" of Cuba.[43] We were puzzled. A "second invasion" of Cuba in 1963? Surely it must be wrong. None of the history books or government committees had ever mentioned a US invasion of Cuba planned for 1963. But a check of newspaper files from the summer and fall of 1963 uncovered a few articles confirming that there had been activity by Kennedy-backed Cuban exiles in Central America at that time. In January 1990, we arranged to interview JFK's Secretary of State, Dean Rusk. When we asked him about the "second invasion" of Cuba in 1963, he confirmed that indeed there were such plans. They weren't the same as the CIA-Mafia plots, which he only learned about later. Nor were they the CIA's assassination plot with a mid-level Cuban official named Rolando Cubela. Rusk described the "second invasion" as a "coup" and said that it wasn't going to be just some Cuban exiles in boats like the Bay of Pigs, but would involve the US military. Rusk indicated that the "second invasion" plans were active at the time JFK died in November 1963 and that the plan was personally controlled by Bobby Kennedy, but that he, Rusk, hadn't learned about it until just after JFK's death. We theorized that there might be some connection between JFK's assassination and the second invasion of Cuba. We asked ourselves why Bobby would cover up crucial information about his own brother's murder--especially if he thought Marcello was behind it. What could be more important than exposing his brother's killers? Well, during the Cold War, one thing that would be more important than the death of a president would be the deaths of millions of Americans in a nuclear exchange with the Soviets. Revealing such a plan after JFK's death, just a year after the tense nuclear standoff of the Cuban Missile Crisis, could have easily sparked a serious and dangerous confrontation with the Soviets. That fear could explain why so much about JFK's assassination had been covered up for so long. At the time, this was a very novel hypothesis, but we agreed that it made sense in light of what we had uncovered so far. Slowly, over the next few years, we found scattered pieces of evidence. For example, at the National Security Archive in Washington, we found a partially censored memo from one of Bobby Kennedy's secretive subcommittees of the National Security Council that discussed "Contingency Plans" in case Fidel Castro retaliated against the US by attempting the "assassination of American officials."
The memo was written just ten days before JFK's assassination, and talked about "the likelihood of a step-up in Castro-incited subversion and violence" in response to some US action.[44] The document had been declassified a year after the HSCA had finished its work, and had never been seen by any of the government commissions that had investigated the assassination.

We were shocked when Dave Powers, head of the John F. Kennedy Presidential Library in Boston and a close aide to JFK, vividly described seeing the shots from the "grassy knoll." Powers said he and fellow JFK aide Kenneth O'Donnell clearly saw the shots, since they were in the limo right behind JFK. Powers said they felt they were "riding into an ambush"--explaining for the first time why the driver of JFK's limo slowed after the first shot. Powers also described how he was pressured to change his story for the Warren Commission.[45] We quickly found confirmation of Powers's account of the shots in the autobiography of former House Speaker Tip O'Neill (and later, from the testimony of two Secret Service agents in the motorcade with Powers and O'Donnell).[46]

Months after talking with Powers, we made another startling discovery: a planned attempt to kill JFK during his Tampa motorcade on November 18, 1963. It was mentioned in only two small Florida newspaper articles, each in just one edition of the newspaper and then only after JFK was killed in Dallas. Nothing appeared at the time of the threat, even though authorities had uncovered the plot prior to JFK's motorcade. It was clear that someone had suppressed the story.

We decided to pursue Cuban exile and Bay of Pigs veteran Enrique Ruiz-Williams, who had been interviewed by former FBI agent William Turner in 1973. Williams had told Turner that he had been working on the plan with high CIA officials in Washington--something rare for Cuban exiles--on November 22, 1963. The timing was right, since Rusk had told us that the coup/invasion plan was active when JFK died. A former Kennedy aide confirmed Williams's connection to Bobby and the CIA to William Turner.

We eventually found Harry Williams, and in a most unlikely place: the snowy mountains of Colorado, about as far from the tropical climate of his native Cuba as one could imagine. Thoughtful and highly intelligent, he quickly grasped that we had done our homework and already knew many of the pieces of the puzzle--just not how they all fit together. Then, in the twilight of his life, he wanted to see the truth come out, as long as the spotlight was kept away from him. By the end of our second interview on that first trip, Harry had given us a detailed overview of the Kennedys' secret plan to overthrow Castro on December 1, 1963, and how it was connected to JFK's assassination. We finally understood how associates of Marcello, Trafficante, and Rosselli had learned of the plan and used parts of it against JFK--forcing Bobby Kennedy and key government officials into a much larger cover-up, to protect national security. After getting the overview of C-Day from Harry--and more details from the Kennedy associates he led us to--we were able to make sense of previously released documents that had baffled investigators for decades.

In 1993 we gave a short presentation of our discoveries at a historical conference in Dallas that included top historians, journalists, and former government investigators.
Some of those experts were able not only to get additional documents released by the Review Board, but also to provide us with additional information that they had previously uncovered. In 1994, a brief summary of our findings was featured on the History Channel and in Vanity Fair. In November 1994, we gave the Review Board written testimony about our discovery of the Tampa assassination attempt and the Kennedys' C-Day "Plan for a Coup in Cuba in the Fall of 1963" (the quote is from our actual submission). Three years later, in 1997, the Review Board located and released a trove of documents confirming what Harry had told us about C-Day, including the first declassified documents from fall 1963 entitled "Plan for a Coup in Cuba." It was only in 1998, after the Review Board had finished its work and submitted its final report to the president and Congress, that we learned that the Secret Service had destroyed records covering the Tampa attempt just weeks after we first revealed it to the Review Board.

It took us fifteen years to uncover the full story, bringing together all these files and obscure articles in one place--and that was only because we were able to build on decades of work by dedicated historians, journalists, and government investigators. We also had the help of almost two dozen people who had worked with John or Robert Kennedy, who told us what files to look for and gave us the framework for C-Day, especially Harry Williams.

Now we can tell the full story in much more detail, quoting directly from hundreds of government documents from the National Archives. These files, many quoted for the first time, verify everything Kennedy insiders had told us, long before most of those files were released. The files support what we said publicly over ten years ago, to the Review Board, to the History Channel, and in Vanity Fair. Some of the very records that prove C-Day's existence also show connections between C-Day and JFK's assassination, and how C-Day was penetrated by the associates of Mafia bosses Carlos Marcello, Santo Trafficante, and Johnny Rosselli. The secrecy surrounding the Kennedys' fall 1963 coup plan--and the Mafia's penetration of it--created most of the continuing controversies about the JFK assassination. Was Lee Harvey Oswald an innocent patsy, an active participant in the conspiracy to kill JFK, or a participant in a US intelligence operation that went awry? As we lay out the evidence about C-Day, and how the Mafia used it to kill JFK, we will answer that and other questions that have long baffled historians, investigators, and the public.

All the secrecy that shrouded C-Day in 1963, and in the decades since, has had a tremendous impact on American life and politics. While much of the ensuing cover-up of C-Day and its links to JFK's assassination had a legitimate basis in national security, we also document which agencies covered up critical intelligence failures that allowed JFK's assassination to happen. Since C-Day was never exposed, and its lessons never learned, its legacy has continued to harm US relations and intelligence. Ultimate Sacrifice shows how the ongoing secrecy surrounding C-Day and the JFK assassination has continued to cost American lives.

* * *

NOTES

About sources, quotes, and interviews: All government documents cited in these endnotes have been declassified and are available at the National Archives facility in College Park, Md., near Washington, D.C.
Information about many of them, and full copies of a few, are available at the National Archives and Records Administration Web site. Regarding interviews conducted by the authors for this book, for brevity we have used "we" to refer to interviews conducted by the authors, even if only one of us was present for a particular interview. Within quotes in the book, we have sometimes standardized names (such as "Harry Williams") for clarity.

1. Army copy of Department of State document, 1963, Record Number 198-10004-10072, Califano Papers, declassified 7-24-97; CIA memo, AMWORLD 11-22-63, #84804, declassified 1993.

2. The last government committee, the Assassinations Records Review Board, was finally unofficially informed of the coup plan by one of the authors, via written testimony sent on 11-9-94 for the Review Board's 11-18-94 public hearing in Dallas, as noted in the Board's FY 1995 Report. The earlier committees were the Warren Commission, the Watergate Committee, the Rockefeller Commission, the Pike Committee (and its predecessor, the Nedzi Committee), the Church Committee, and the House Select Committee on Assassinations.

3. "A Presumption of Disclosure: Lessons from the John F. Kennedy Assassination Records Review Board," by OMB Watch, available at ombwatch.com.

4. NBC Nightly News with Tom Brokaw, 9-29-98.

5. John F. Kennedy address at Rice University, 9-12-62, from Public Papers of the Presidents of the United States, v. 1, 1962, pp. 669-670.

6. Army document, summary of plan dated 9-26-63, Califano Papers, Record Number 198-10004-10001, declassified 10-7-97.

7. Army copy of Department of State document, 1963, Record Number 198-10004-10072, Califano Papers, declassified 7-24-97.

8. Army document, summary of plan dated 9-26-63, Califano Papers, Record Number 198-10004-10001, declassified 10-7-97.

9. Interview with Harry Williams 7-24-93; interview with confidential C-Day Defense Dept. source 7-6-92; classified message to Director from JMWAVE, CIA/DCD document ID withheld to protect US intelligence asset but declassified 3-94.

10. The following is just one of many: Joint Chiefs of Staff document, dated 12-4-63 with 11-30-63 report from Cyrus Vance, Record Number 202-10002-101116, declassified 10-7-97.

11. CIA cable to Director, 12-10-63, CIA 104-10076-10252, declassified 8-95; David Corn, Blond Ghost: Ted Shackley and the CIA's Crusades (New York: Simon & Schuster, 1994), p. 110.

12. House Select Committee on Assassinations, vol. X, p. 77.

13. Interview with Harry Williams 2-24-92; interview with confidential Kennedy C-Day aide source 3-17-92; interview with confidential C-Day Defense Dept. source 7-6-92.

14. Interview with Dean Rusk 1-8-90.

15. Foreign Relations of the United States, Volume XI, Department of State, #370, 10-8-63; 12-6-63 CIA document, from JMWAVE to Director, released during the 1993 CIA Historical Review Program.

16. From the John F. Kennedy Presidential Library, NLK 78-473, declassified 5-6-80.

17. John H. Davis, Mafia Kingfish: Carlos Marcello and the Assassination of John F. Kennedy (New York: McGraw-Hill, 1989), pp. 49, 64, many others.

18. Ibid.

19. FBI DL 183A-1f035-Sub L 3.6.86 and FBI Dallas 175-109 3.3.89, cited by A. J. Weberman; CR 137A-5467-69, 6-9-88, cited by Brad O'Leary and L. E. Seymour, Triangle of Death (Nashville: WND Books, 2003).

20. David Talbot, "The man who solved the Kennedy assassination," Salon.com, 11-22-03.

21. Jack Newfield, "I want Kennedy killed," Penthouse 5-92; Frank Ragano and Selwyn Raab, Mob Lawyer (New York: Scribners, 1994), pp. 346-54, 361; "Truth or Fiction?" St. Petersburg Times, 4-18-94; Charles Rappleye and Ed Becker, All American Mafioso: The Johnny Rosselli Story (New York: Barricade, 1995).
22. William Scott Malone, "The Secret Life of Jack Ruby," New Times 1-23-78; Bradley Ayers, The War that Never Was: An Insider's Account of CIA Covert Operations Against Cuba (Canoga Park, Calif.: Major Books, 1979), pp. 59, 129; the CIA Inspector General's report on the CIA-Mafia plots.

23. Malone, op. cit.; HSCA Final Report and volumes, many passages.

24. Phone interviews with Pierre Salinger 4-3-98, 4-10-98; interview with confidential source 4-14-98.

25. Warren Commission Exhibit #2818. (In mid-December 1963, after JFK's death, when LBJ put C-Day on hold, Ruby placed the date for the invasion in May 1964.)

26. Atlanta Journal-Constitution 5-19-02, pp. C-1, C-6; John Sugg, "Time to Pull the Sharks' Teeth," Creative Loafing weekly newspaper, Atlanta edition, 12-11-03, p. 27.

27. G. Robert Blakey and Richard N. Billings, The Plot to Kill the President (New York: Times Books, 1981), p. 288.

28. Charles Rappleye and Ed Becker, All American Mafioso (New York: Barricade, 1995), p. 315.

29. Church Committee Report, Vol. V, officially The Investigation of the Assassination of President John F. Kennedy: Performance of the Intelligence Agencies, pp. 19-21; 8-30-77 CIA document, "Breckinridge Task Force" report, commenting on Church Committee Report, Vol. V, document ID 1993.07.27.18:36:29:430590, declassified 1993; Thomas G. Paterson, Contesting Castro: The United States and the Triumph of the Cuban Revolution (New York: Oxford University Press, 1994), p. 261, citing Bundy memo, "Meeting with the President," Dec. 19, 1963; Arthur Schlesinger, Jr., Robert Kennedy and His Times (New York: Ballantine, 1979), p. 598; Gus Russo, Live by the Sword: The Secret War against Castro and the Death of JFK (Baltimore: Bancroft Press, 1998), p. 278.

30. Haynes Johnson, "One Day's Events Shattered America's Hopes and Certainties," Washington Post 11-20-83.

31. ABCNEWS.com, 11-20-03, "A Brother's Pain," interview with Evan Thomas.

32. Phone interviews with Pierre Salinger 4-3-98, 4-10-98; interview with confidential source 4-14-98.

33. Re: Arthur M. Schlesinger, Jr., Parade magazine 6-7-98, citing Jack Newfield; re McCone: Schlesinger, op. cit., p. 664; Blakey, op. cit., many passages; re Sheridan: John H. Davis, The Kennedy Contract (New York: Harper Paperbacks, 1993), p. 154, and Evan Thomas, Robert Kennedy, p. 338; re O'Donnell: William Novak, Man of the House: The Life and Political Memoirs of Speaker Tip O'Neill (New York: Random House, 1987), p. 178; Jack Newfield, "I want Kennedy killed," Penthouse 5-92; re Burkley: Gus Russo, Live by the Sword (Baltimore: Bancroft Press, 1998), p. 49; Ronald Goldfarb, Perfect Villains, Imperfect Heroes: Robert F. Kennedy's War against Organized Crime (New York: Random House), pp. 258-299.

34. ABCNEWS.com, op. cit.

35. Ibid.

36. Joseph A. Califano, Jr., Inside: A Public and Private Life (New York: Public Affairs, 2004), p. 125.

37. In addition, the predecessor of the Pike Committee--the Nedzi Committee--got close to aspects of JFK's assassination and C-Day when it investigated CIA activities during Watergate.

38. The following document was "systematically reviewed by JCS on 19 Oct 1989 Classification continued"--Joint Chiefs of Staff document, dated 12-4-63 with 11-30-63 report from Cyrus Vance, 80 total pages, Record Number 202-10002-101116, declassified 10-7-97.
"A Presumption of Disclosure: Lessons from the John F. Kennedy Assassination Records Review Board," by OMB Watch, available at ombwatch.com. 40. Interview with ex-Secret Service Agent Abraham Bolden 4-15-98; House Select Committee on Assassinations Report 231, 232, 636, New York Times 12-6-67; Abraham Bolden file at the Assassination Archives and Research Center. 41. Michael R. Beschloss, The Crisis Years: Kennedy and Khrushchev, 1960-1963 (New York: Edward Burlingame Books, 1991) pp. 670, 671. 42. John Mitchell?the commander of JFK's PT boat unit?would become attorney general under Nixon, before his conviction due to a scandal related to C-Day. 43. Myra MacPherson, "The Last Casualty of the Bay of Pigs," Washington Post 10-17-89. 44. From the John F. Kennedy Presidential Library, NLK 78-473, declassified 5-6-80; article by Tad Szulc in the Boston Globe 5-28- 76 and a slightly different version of the same article in The New Republic 6-5-76 45. Interview with Dave Powers 6-5-91 at the John F. Kennedy Presidential Library. 46. Novak, op. cit., p. 178. * * * Thom Hartmann and Lamar Waldron A BUZZFLASH GUEST CONTRIBUTION Thom Hartmann is a progressive talk radio host and writes, among other things, the http://www.buzzflash.com/hartmann/default.htm "Independent Thinker Book of the Month" reviews for BuzzFlash. Lamar Waldron is an Atlanta-based writer and historical researcher. From shovland at mindspring.com Sun Dec 11 15:27:55 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 11 Dec 2005 07:27:55 -0800 Subject: [Paleopsych] Jerry Goodenough: Critical Thinking about ConspiracyTheories In-Reply-To: Message-ID: The reason people see conspiracies is because they are actually there. This gentlemen is one of a cadre of intellectual prostitutes who willingly takes money to help conceal the truth. -----Original Message----- From: paleopsych-bounces at paleopsych.org [mailto:paleopsych-bounces at paleopsych.org]On Behalf Of Premise Checker Sent: Saturday, December 10, 2005 7:16 PM To: paleopsych at paleopsych.org Subject: [Paleopsych] Jerry Goodenough: Critical Thinking about ConspiracyTheories Jerry Goodenough: Critical Thinking about Conspiracy Theories http://www.uea.ac.uk/~j097/CONSP01.htm [This is a very good analysis, esp. when it comes to noting that many, many conspiracies posit too many conspirators. As far as the specific analysis of the Kennedy assassination goes, the author makes a very good point about the Mafia being incompetent. I'll send along in a moment excerpts from a new book, "Ultimate Sacrifice," that makes a new case that the Mafia did in fact orchestrate the assassination. According to the book, the Mafia got wind of a CIA plot to murder Castro and threatened to reveal it, thereby causing an international crisis. The Warren Commission, accordingly covered things up, a cover-up which continues. [Still, the charge of incompetence remains. I reinsert my own theory that the assassination was an assisted suicide. JFK knew he had not long to live but did not want to go down in history like Millard Fillmore, whose only achievement was to not install a bath tub in the White House. Just being assassinated would not be enough, so he got the conspirators to leave enough bogus and inconsistent evidence that researchers would never stop spinning theories, all of them imperfect for failure to reconcile the evidence. [The Enlightenment died in six seconds on the Dealey Plaza.] Jerry Goodenough is Professor of Philosophy at the University of East Anglia, Norwich, UK 1. 
Conspiracy theories play a major part in popular thinking about the way the world, especially the political world, operates. And yet they have received curiously little attention from philosophers and others with a professional interest in reasoning.[1] Though this situation is now starting to change, it is the purpose of this paper to approach the topic from the viewpoint of critical thinking: to ask if there are particular absences or deformities of critical thinking skills which are symptomatic of conspiracy theorising, and whether better teaching of reasoning may guard against them.

That conspiracy thinking is widespread can be seen from any cursory examination of a bookshop or magazine stand. There are not only large amounts of blatant conspiracy work, often dealing with American political assassinations and other events or with the alleged presence of extraterrestrial spacecraft, but also large amounts of writing where a certain degree of conspiracy thinking is more or less implicit. Thus many `alternative' works of medicine, history, archaeology, technology, etc. often depend upon claims, explicit or otherwise, that an establishment or orthodoxy conspires to suppress alternative views. Orthodox medicine in cahoots with the multinational drug companies conspires to suppress the claims of homeopathy, orthodox archaeologists through malice or blindness conspire to suppress the truth about the construction of the Pyramids, and so on. It certainly seems to the jaundiced observer that there is more of this stuff about than ever before.

However, conspiracy theorising is now coming to the attention of philosophers. That it has taken this long may be because, as Brian Keeley says in a recent paper, `most academics simply find the conspiracy theories of popular culture to be silly and without merit.' (1999: 109n) But I agree with Keeley's further remark that `it is incumbent upon philosophers to provide analysis of the errors involved with common delusions, if that is indeed what they are.'

If a kind of academic snobbishness underlies our previous refusal to get involved here, there may be another reason. Conspiracy theorising, in political philosophy at least, has been identified with irrationality of the worst sort--here the locus classicus may be some dismissive remarks made by Karl Popper in The Open Society and its Enemies (Popper 1996, Vol. 2: 94-9). Pigden (1993) shows convincingly that Popper's remarks cannot be taken to support a rational presumption against conspiracy theories in history and politics. But certainly such a presumption exists, particularly amongst political commentators. It tends to manifest itself in a noisy preference for what is termed the `cock-up' theory of history--an unfortunate term that tends to assume that history is composed entirely of errors, accidents and unforeseen consequences. If such a dismal state of affairs were indeed the case, then there would seem to be no point in anybody trying to do anything. The cock-up theory, then, is agreeable to all forms of quietism. But we have no reason to believe that there is such a coherent theory, and even less reason to believe that every event must fall neatly into one or other category here; indeed, this insistence on black-and-white reasoning is, as we shall see, one of the features of conspiracy theorising itself! And what makes the self-satisfied `cock-up' stance even less acceptable is that it ignores the fact that conspiracies are a very real part of our world.
No serious historian denies that a somewhat amateurish conspiracy lay behind the assassination of Abraham Lincoln, or that a more professional but sadly less successful conspiracy attempted to assassinate Adolf Hitler in the summer of 1944. Yet such is the presumption behind the cock-up stance that the existence or frequency of genuine conspiracies is often significantly downplayed. (How many people, taking at face value the cock-up theorists' claim that conspiracies are a real rarity in the modern history of democracies, do not know that a mere 13 years before President Kennedy's assassination a serious terrorist conspiracy to murder Harry S. Truman led to a fatal gunfight on the streets of Washington?[2] The cock-up presumption seems to generate a kind of amnesia here.)

We require, then, some view of events that allows for the accidental and the planned, the deliberate and the contingent: history as a tapestry of conspiracies and cock-ups and much intentional action that is neither. Pigden (op. cit.) satisfactorily demonstrates the unlikelihood of there being any adequate a priori exclusion principle here, in the face of the reality of at least some real conspiracies. Keeley's paper attempts a more rigorous definition of the phenomenon, hoping to separate what he terms Unwarranted Conspiracy Theories (UCTs) from rational or warranted conspiratorial explanations:

It is thought that this class of explanation [UCTs] can be distinguished analytically from those theories which deserve our assent. The idea is that we can do with conspiracy theories what David Hume (1748) did with miracles: show that there is a class of explanations to which we should not assent, by definition. (Keeley: 111)

and it is part of his conclusion that `this task is not as simple as we might have heretofore imagined.' (ibid.) Keeley concludes that `much of the intuitive "problem" with conspiracy theories is a problem with the theorists themselves, and not a feature of the theories they produce' (ibid.: 126), and it is this point I want to take up in this paper. What sort of thinking goes on in arriving at UCTs, and what sort of things go wrong? If we say that conspiracy theorists are irrational, do we mean only that they are illogical in their reasoning? Or are there particular critical thinking skills missing or being misused?

2. Definitions

Keeley's use of the term Unwarranted Conspiracy Theory should not mislead us into thinking that all conspiracy theories fall into one or other category here. Warrant is a matter of degree, and so is conspiracy. There are cases where a conspiratorial explanation is plainly rational; take, for instance, the aforementioned July Bomb Plot to kill Hitler, where there is an abundance of historical evidence about the conspirators and their aims. There are cases where such an explanation is clearly irrational: I shall argue later in the paper that this is most probably the case for the assassination of President Kennedy. And there are cases where some conspiratorial explanation may be warranted but it is hard to know how far the warrant should extend.

Take, for instance, the murder of the Archduke Franz Ferdinand in Sarajevo in 1914. There was plainly a conspiracy to bring this about: some minutes before Gavrilo Princip shot the archduke, a co-conspirator was arrested after throwing a bomb (which failed to explode) at the archduke's car. Princip and his fellow students were Serbian nationalists, acting together to demonstrate against the presence of Habsburg influence in the Balkans.
But there remains the possibility that they had been infiltrated and manipulated by Yugoslav intelligence elements seeking to provoke a crisis with Austria-Hungary. And there are more extreme claims that the ultimate manipulators here were agents of a world-wide conspiracy, of international Jewry or freemasonry seeking to bring about war. We are fully warranted in adopting the first conspiratorial explanation, but perhaps only partially warranted in thinking there is anything in the second claim[3], while the extreme claims seem to me to be as unwarranted as anything could be.

What we require, then, is some definition which will mark off the kind of features which ought to lead us to suspect the warrant of any particular conspiratorial explanation. Keeley lays out a series of these, which I shall list and comment upon. But first he offers his definition of conspiracy theories in general:

A conspiracy theory is a proposed explanation of some historical event (or events) in terms of the significant causal agency of a relatively small group of persons--the conspirators--acting in secret... a conspiracy theory deserves the appellation "theory" because it proffers an explanation of the event in question. It proposes reasons why the event occurred... [it] need not propose that the conspirators are all powerful, only that they have played some pivotal role in bringing about the event... indeed, it is because the conspirators are not omnipotent that they must act in secret, for if they acted in public, others would move to obstruct them... [and] the group of conspirators must be small, although the upper bounds are necessarily vague. (116)

Keeley's definition here differs significantly from the kind of conspiracy at which Popper was aiming in The Open Society: crude Marxist explanations of events in terms of capitalist manipulation. For one can assume that in capitalist societies capitalists are very nearly all-powerful and not generally hindered by the necessity for secrecy.

A greater problem for Keeley's definition, though, is that it seems to include much of the work of central government. Indeed, it seems to define exactly the operations of cabinet government--more so in countries like Britain with no great tradition of governmental openness than in many other democracies. What is clearly lacking here is some additional feature: that the conspirators be acting against the law or against the public interest, or both. This doesn't entirely free government from accusations of conspiracy--does a secret cabinet decision to upgrade a country's nuclear armaments, which appears prima facie within the bounds of the law of that country but may breach international laws and agreements, count? Is it lawful? In the public interest?

A further difficulty with some kind of illegality constraint is that it might tend to rule out what we might otherwise clearly recognise as conspiracy theories. Take, for instance, the widely held belief amongst ufologists that the US government (and others) has acted to conceal the existence on earth of extra-terrestrial creatures, crashed flying saucers at Roswell, and so on. It doesn't seem obvious that governments would be acting illegally in this case--national security legislation is often open to very wide interpretation--and it could be argued that they are acting in the public interest, to avoid panic and so on. (Unless, of course, as some ufologists seem to believe, the government is conspiring with the aliens in order to organise the slavery of the human race!)
So we have here what would appear to be a conspiracy theory, and one which has some of the features of Keeley's UCTs, but which is excluded by the illegality constraint. Perhaps the best we can do here is to assert that conspiracy theories are necessarily somewhat vague in this regard; I'll return to this point later.

If this gives us a rough idea of what counts as a conspiracy theory, we can then build upon it. Keeley goes on to list five features which he regards as characteristic of Unwarranted Conspiracy Theories:

(1) `A UCT is an explanation that runs counter to some received, official, or "obvious" account.' (116-7) This is nothing like a sufficient condition, for the history of even democratic governments is full of post facto surprises that cause us to revise previous official explanations. For instance, for many years the official explanation for Britain's military success in the Second World War was made in terms of superior generalship, better troops, occasional good luck, and so on. The revelation in the 1970s of the successful Enigma programme to break German service codes necessitated wholesale revision of military histories of this period. This was an entirely beneficial outcome, but others were more dubious. The growth of nuclear power in Britain in the 1950s was officially explained in terms of the benefit of cheaper and less polluting sources of electricity. It was only much later that it became clear that these claims were exaggerated and that the true motivation for the construction of these reactors was to provide fissile material for Britain's independent nuclear weapons. Whether such behaviour was either legal or in the public interest is an interesting thought.

(1A) `Central to any UCT is an official story that the conspiracy theory must undermine and cast doubt upon. Furthermore, the presence of a "cover story" is often seen as the most damning piece of evidence for any given conspiracy.' This is an interesting epistemological point to which I shall return.

(2) `The true intentions behind the conspiracy are invariably nefarious.' I agree with this as a general feature, particularly of non-governmental conspiracies, though as pointed out above it is possible for governmental conspiracies to be motivated or justified in terms of preventing public alarm, which may be seen as an essentially beneficial aim.

(3) `UCTs typically seek to tie together seemingly unrelated events.' This is certainly true of the more extreme conspiracy theory, one which seeks a grand unified explanation of everything. We have here a progression from the individual CT, seeking to explain one event, to the more general. Carl Oglesby (1976), for instance, seeks to reinterpret many of the key events in post-war American history in terms of a more or less secret war between opposing factions within American capital, an explanation which sees Watergate and the removal of Richard Nixon from office as one side's revenge for the assassination of John Kennedy. At the extreme we have those theories which seek to explain all the key events of western history in terms of a single secret motivating force, something like international freemasonry or the great Jewish conspiracy.[4] It may be taken as a useful rule of thumb here that the greater the explanatory range of the CT, the more likely it is to be untrue. (A point to which Popper himself would be sympathetic!)

Finally, one might want to query here Keeley's point about seemingly unrelated events.
Many CTs seem to have their origin in a desire to relate events that one might feel ought to go together. Thus many Americans, on hearing of the assassination of Robert Kennedy (itself coming very shortly after that of Martin Luther King), thought these events obviously related in some way, and sought to generate theories linking them in terms of some malevolent force bent on eliminating apparently liberal influences in American politics. They seem prima facie more likely to be related than, say, the deaths of the Kennedy brothers and those of John Lennon or Elvis Presley: any CT linking these does indeed fulfil Keeley's (3).

(4) `...the truths behind events explained by conspiracy theories are typically well-guarded secrets, even if the ultimate perpetrators are sometimes well-known public figures.' This is certainly the original belief of proponents of UCTs, but it does lead to a somewhat paradoxical situation whereby the alleged secret can become something of an orthodoxy. Thus opinion polls seem to indicate that something in excess of 80% of Americans believe that a conspiracy led to the death of President Kennedy, though it seems wildly unlikely that they all believe in the same conspiracy. It becomes increasingly hard to believe in a well-guarded secret that has been so thoroughly aired in 35 years of books, magazine articles and even Hollywood movies. Pretty much the same percentage of Americans seem to believe in the presence on earth of extra-terrestrials, though whether this tells us more about Americans or about opinion polls is hard to say. But these facts, if facts they be, would tend to undercut the `benevolent government' UCTs. For there is really no point in `them' keeping the truth from us to avoid panic if most of us already believe this `truth'. The revelation of cast-iron evidence of a conspiracy to kill Kennedy or of the reality of alien visits to Earth would be unlikely to generate more than a ripple of public interest, these events having been so thoroughly rehearsed.

(5) `The chief tool of the conspiracy theorist is what I shall call errant data', by which Keeley means data which is unaccounted for by official explanations, or data which, if true, would tend to contradict official explanations.

These are the marks of the UCT. As Keeley goes on to say (118), `there is no criterion or set of criteria that provide a priori grounds for distinguishing warranted conspiracy theories from UCTs.' One might perhaps like to insist here that UCTs ought to be false, and this is why we are not warranted in believing them, but it is in the nature of many CTs that they cannot be falsified. The best we may do is show why the warrant for believing them is so poor. And one way of approaching this is by way of examining where the thinking that leads to UCTs goes awry.

3. Where CT thinking goes wrong

It is my belief that one reason we should not accept UCTs is that they are irrational. But by this I do not necessarily mean that they are illogical in the sense that they commit logical fallacies or use invalid argument forms--though this does indeed sometimes happen--but rather that they misuse or fail to use a range of critical thinking skills and principles of reasoning. In this section I want to provide a list of what I regard as the key weaknesses of CT thinking, and then in the next section I will examine a case study of (what I regard as) a UCT and show how these weaknesses operate. My list of points is not necessarily in order of importance.
(A) An inability to weigh evidence properly. Different sorts of evidence are generally worthy of different amounts of weight. Of crucial importance here is eye-witness testimony. Considerable psychological research has been done into the strengths and weaknesses of such testimony, and this has been distilled into one of the key critical thinking texts, Norris & King's (1983) Test on Appraising Observations, whose Manual provides a detailed set of principles for judging the believability of observation statements. I suspect that no single factor contributes more, especially to assassination and UFO UCTs, than a failure to absorb and apply these principles.

(B) An inability to assess evidence corruption and contamination. This is a particular problem with eyewitness testimony about an event that is subsequently the subject of considerable media coverage. And it is not helped by conventions or media events which bring such witnesses together to discuss their experiences--it is not for nothing that most court systems insist that witnesses do not discuss their testimony with each other or other people until after it has been given in court. There is a particular problem with American UCTs, since the mass media there are not governed by sub judice constraints, and so conspiratorial theories can be widely aired in advance of any court proceedings. Again, Norris & King's principles (particularly IV.10 & 12) should warn against this.[5] But we do not need considerable delay for such corruption to occur: it may happen as part of the original act of perception. For instance, in reading accounts where a group of witnesses claim to have identified some phenomenon in the sky as a spaceship or other unknown form of craft, I often wonder if this judgement occurred to all of them simultaneously, or if a claim by one witness that this was a spaceship could not act to corrupt the judgmental powers of other witnesses, so that they started to see this phenomenon `as' a spacecraft in preference to some more mundane explanation.

(C) Misuse or outright reversal of a principle of charity: wherever the evidence is insufficient to decide between a mundane explanation and a suspicious one, UCTs tend to pick the latter. The critical thinker should never be prejudiced against occupying a position of principled neutrality when the evidence is more or less equally balanced between two competing hypotheses. And I would argue that there is much to be said for operating some principle of charity here, of always picking the less suspicious hypothesis of two equally supported by the evidence. My suspicion is that in the long run this would lead to a generally more economical belief structure--that reversing the principle of charity ultimately tends to blunt Occam's Razor--but I cannot hope to prove this.

(D) The demonisation of persons and organisations. This may be regarded as either following from or being a special case of (C). Broadly, this amounts to moving from the accepted fact that X once lied to the belief that nothing X says is trustworthy, or taking the fact that X once performed some misdeed as particular evidence of guilt on other occasions. In the former case, adopting (D) would demonise us all, since we have all lied on some occasion or other. This is especially problematic for UCTs involving government organisations or personnel, since all governments reserve the right to lie or mislead if they feel it is in the national interest to do so.
But proof that any agency lied about one event ought not to be taken as significant proof that they lied on some other occasion. It goes against the character of the witness, as lawyers are wont to say, but then no sensible person should believe that governments are perfectly truthful.

The second case is more difficult. It is a standard feature of Anglo-Saxon jurisprudence that the fact that X has a previous conviction should not be given in evidence against him, nor revealed to the jury until after a verdict is arrived at. The reasoning here is that generally evidence of X's previous guilt is not specific evidence for his guilt on the present occasion; it is possible for it to be the case that X was guilty then and is innocent now, and so the court should not be prejudiced against him. But there is an exception to this, at least in English law, where there are significant individual features shared between X's previous proven modus operandi and that of the present offence under consideration; evidence of a consistent pattern may be introduced into court. But, the rigid standards of courtroom proof aside, it is not unreasonable for the police to suspect X on the basis of his earlier conviction. This may not be fair to X (if he is trying to go straight) but it is epistemologically reasonable. The trouble for UCTs, as we shall see, is that most governments have a long record of previous convictions, and the true UC theorist may regard this not just as grounds for a reasonable suspicion but as itself evidence of present guilt.

(E) The canonisation of persons or (more rarely) organisations. This may be regarded as the mirror-image of (D). Here those who are regarded as the victims of some set of events being explained conspiratorially tend to be presented, for the purpose of justifying the explanation, as being without sin, or as being more heroic or more threatening to some alleged set of private interests than the evidence might reasonably support.

(F) An inability to make rational or proportional means-end judgements. This is perhaps the greatest affront to Occam's Razor that one finds in UCTs. Such theories are often propounded with the explanation that some group of conspirators have been acting in furtherance of some aim or in order to prevent some action taking place. But one ought to ask whether such a group of conspirators were in a position to further their aim in some easier or less expensive or less risky fashion. Our assumption here is not the principle of charity mentioned in (C) above, that our alleged conspirators are too nice or moral to resort to nefarious activities. We should assume only that our conspirators are rational people capable of working out the best means to a particular end. This is a defeasible assumption--stupidity is not totally unknown in the political world--but it is nevertheless an assumption that ought to guide us unless we have evidence to the contrary. A difficulty that should be mentioned here is that of establishing the end at which the conspiracy is aimed, made more difficult for conspiracies that never subsequently announce these things. For the state of affairs brought about by the conspirators may, despite their best efforts, not be that at which they aimed. If this is what happens, then making a rational means-end judgement about the actual result of the conspiracy may be a very different matter from doing the same thing for the intended results.

(G) Evidence against a UCT is always evidence for.
This is perhaps the point that would most have irritated Karl Popper, with his insistence that valid theories must always be capable of falsification. But it is an essential feature of UCTs: they do not just argue that, on the evidence available, a different conclusion should be drawn from that officially sanctioned or popular. Rather, the claim is that the evidence supporting the official verdict is suspect, fraudulent, faked or coerced. And this belief is used to support the nature of the conspiracy, which must be one powerful or competent enough to fake all this evidence. What we have here is a difference between critically assessing evidence--something I support under (A) above--and the universal acid of hypercritical doubt. For if we start with the position that any piece of evidence may be false, then it is open to us to support any hypothesis whatsoever. Holocaust revisionists would have us believe that vast amounts of evidence supporting the hypothesis of a German plot to exterminate Europe's Jews are fake. As Robert Anton Wilson (1989: 172) says, `a conspiracy that can deceive us about 6,000,000 deaths can deceive us about anything, and that it takes a great leap of faith for Holocaust Revisionists to believe that World War II happened at all.' Quite so.

What is needed here is what I might term meta-evidence: evidence about the evidence. My claim would be that the only way to keep Occam's Razor shiny here is to insist on two different levels of critical analysis of evidence. Evidence may be rejected if it doesn't fit a plausible hypothesis--this is what everyone must do in cases where there is apparently contradictory evidence, and there can be no prima facie guidelines for rejection here apart from overall epistemological economy. But evidence may only be impeached--accused of being deliberately faked, forged, coerced, etc.--if we have further evidence of this forgery: that a piece of evidence does not fit our present hypothesis is not by itself any warrant for believing that the evidence is fake.

(H) We should put no trust in what I here term the fallacy of the spider's web. That A knows B and that B knows C is no evidence at all that A has even heard of C. But all too often UCTs proceed in this fashion, weaving together a web of conspirators on the basis of who knows whom. But personal acquaintance is not necessarily a transitive relation. The falsity of this belief in the epistemological importance of webs of relationships can be demonstrated with reference to the show-business party game known sometimes as `Six Degrees of Kevin Bacon'. The object of the game is to select the name of an actor or actress and then link them to the film actor Kevin Bacon through no more than six shared appearances. (E.g. A appeared with B in film X, B appeared with C in film Y, C appeared with D in film Z, and D appears in Kevin Bacon's latest movie: thus we link A to Bacon in four moves.) The plain fact is that most of us know many people, and important people in public office tend to have dealings with a huge number of people, so just about anybody in the world can be linked to somebody else in a reasonably small number of such links.

I can demonstrate the truth of this proposition with reference to my own case, that of a dull and unworldly person who doesn't get out much. For I am separated by only two degrees from Her Majesty The Queen (for I once very briefly met the then Poet Laureate, who must himself have met the Queen if only at his inauguration), which means I am separated by only three degrees from all the many important political figures that the Queen herself has met, including names like Churchill and De Gaulle. Which further means that only four degrees separate me from Josef Stalin (met by Churchill at Yalta) and just five degrees from Adolf Hitler (who never met Churchill but did meet prewar Conservative politicians like Chamberlain and Halifax, who were known to Churchill). Given the increasing amounts of travel and communication that have taken place in this century, it should be possible to connect me with just about anybody in the world in the requisite six stages. But so what? Connections like these offer the possibility of communication and influence, but offer no evidence for its actuality.
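[A short computational sketch makes the ubiquity of such chains vivid. This is an editorial illustration, not anything from Goodenough's paper: the population size, the acquaintance count, and the random-graph model are all invented assumptions. The sketch builds a random "acquaintance" network and uses breadth-first search to measure the shortest chain linking two arbitrary people:

    # Toy small-world sketch (all parameters are illustrative assumptions):
    # n people, each acquainted with roughly k random others; BFS then
    # finds the shortest acquaintance chain between two chosen people.
    import random
    from collections import deque

    def build_graph(n=2000, k=30, seed=42):
        """Random symmetric acquaintance graph."""
        rng = random.Random(seed)
        graph = {person: set() for person in range(n)}
        for person in range(n):
            for other in rng.sample(range(n), k):
                if other != person:
                    graph[person].add(other)
                    graph[other].add(person)  # knowing someone is mutual
        return graph

    def separation(graph, start, target):
        """Degrees of separation between two people, via breadth-first search."""
        if start == target:
            return 0
        seen = {start}
        queue = deque([(start, 0)])
        while queue:
            person, dist = queue.popleft()
            for other in graph[person]:
                if other == target:
                    return dist + 1
                if other not in seen:
                    seen.add(other)
                    queue.append((other, dist + 1))
        return None  # no chain exists at all

    graph = build_graph()
    rng = random.Random(0)
    for _ in range(5):
        a, b = rng.randrange(2000), rng.randrange(2000)
        print(a, "->", b, ":", separation(graph, a, b), "steps")

With only 2,000 people each knowing about 30 others, nearly every pair turns out to be joined in two or three steps, and scaling the population into the millions barely lengthens the chains, since the set of reachable people grows roughly multiplicatively with each step. That ubiquity is exactly the point being made here: a short chain between two alleged conspirators is what we should expect between any two people whatsoever, and so it is evidence of nothing.]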
(I) The classic logical fallacy of post hoc ergo propter hoc. This is the most common strictly logical fallacy to be found in political conspiracy theories, especially those dealing with assassinations and suspicious deaths. Broadly, it takes the shape of claiming that since event X happened after the death of A, A's death was brought about in order to cause or facilitate the occurrence of X. The First World War happened after the death of the Archduke Franz Ferdinand, and there is clearly a sense in which it happened because of his death: there is a causal chain leading from the death to Austrian outrage, to a series of Austrian demands upon Serbia, culminating in Austria's declaration of war against Serbia, Russia's declaration against Austria, and, via a series of interlinked treaty obligations, most of the nations of Europe ending up at war with one another. Though these effects of the assassination may now appear obvious, one problem for the CT proponent is that hindsight clarifies these matters enormously: such a progression may not have been at all obvious to the people involved in these events at the time. And it is even harder to believe that bringing about such an outcome was in any of their interests. (Austria plainly had an interest in shoring up its authority in the Balkans but not, given its many structural weaknesses, in engaging in a long and destructive war. The outcome, which anyone might have predicted as likely, was the economic ruin and subsequent political dissolution of the entire Austro-Hungarian empire.)

Attempting to judge the rationality of a proposed CT here as an explanation for some such set of events runs into two problems. Firstly, though an outcome may now seem obvious to us, it may not have appeared so obvious to people at the time, either in its nature or in its expensiveness. Thus there may well have been people who thought that assassinating Franz Ferdinand in order to trigger a crisis in relations between Austria and Serbia was a sensible policy move, precisely because they did not anticipate a general world war occurring as a result, and may have thought a less expensive conflict--a limited war of independence between Serbia and Austria--worth the possible outcome of freeing more of the Balkans from Austrian domination. And secondly, if we cannot attribute hindsight to the actors in such events, neither can we ascribe to them a perfect level of rationality: it is always possible for people engaged in such actions to possess a poor standard of means-end judgement.
But, bearing these caveats in mind, one might still wish to propound two broad principles for distinguishing whether an event is a genuine possible motive for an earlier conspiracy or just an instance of post hoc ergo propter hoc. Firstly, could any possible conspirators, with the knowledge they possessed at the time, have reasonably foreseen such an outcome? And secondly, granted that such an outcome could have been desired, are the proposed conspiratorial events a rational method of bringing about such an outcome? That a proposed CT passes these tests is, of course, no guarantee that we are dealing with a genuine conspiracy; but a failure to pass them is a significant indicator of an unwarranted CT.

4. A case-study of CT thinking--the assassination of President Kennedy

With these diagnostic indicators of poor critical thinking in place, I would now like to apply them to a typical instance of CT (and, to my mind, unwarranted CT) thinking.[6] On 22 November 1963 President John F. Kennedy was assassinated in Dallas, Texas. Two days later, the man accused of his murder, Lee Harvey Oswald, was himself murdered in the basement of the Dallas Police Headquarters. These two events (and perhaps particularly the second, coming as it did so rapidly after the first) led to a number of accusations that Kennedy's death had been the result of a conspiracy of which Oswald may or may not have been a part. Books propounding such theories emerged even before the Warren Commission issued its report on the assassination in August 1964. Writing at this time in his essay `The Paranoid Style in American Politics', the political scientist Richard Hofstadter could say: "Conspiratorial explanations of Kennedy's assassination have a far wider currency in Europe than they do in the United States." (Hofstadter 1964: 9)

Hofstadter's view of the American paranoid style was one of small cults of a right-wing or racist or anti-Catholic or anti-Freemason bent, whose descendants are still to be found in the Ku Klux Klan, the John Birch Society, the Michigan Militia, etc. But within a couple of years of the emergence of the Warren Report and, more importantly, its 26 volumes of evidence, a new style of conspiratorial thinking emerged. While some right-wing conspiratorial theories remained[7], the bulk of the conspiracy theories propounded to explain the assassination adopted a position from the left of centre, accusing or assuming that some conspiracy of right-wing elements and/or some part of the US Government itself had been responsible for the assassination. A complete classification of such CTs is not necessary here[8], but I ought perhaps to point to a philosophically interesting development in the case.

As a result of public pressure resulting from the first wave of CT literature, a congressional committee was established in 1977 to investigate Kennedy's assassination; it instituted a thorough examination of the available evidence and was on the verge of producing a report endorsing the Warren Commission's conclusions when it discovered what was alleged to be a sound recording of the actual assassination. Almost solely on the basis of this evidence--which was subsequently discredited by a scientific panel put together by the Department of Justice--the congressional committee decided that there had probably been a conspiracy, asserting on the basis of very little evidence that the Mafia was the most probable source of this conspiracy.
What was significant about this congressional investigation was the effect of its thorough examination of the forensic and photographic evidence in the case. Many of the alleged discrepancies in this evidence, which had formed the basis for the many calls to establish such an investigation, were shown to be erroneous. This did not lead to the refutation of CTs but rather to a new development: the balance of CT claims now shifted from arguing that there existed evidence supporting a conspiratorial explanation to arguing that all or most of the evidence supporting the lone-assassin hypothesis had been faked--a new level of epistemological complexity.

A representative CT of this type was propounded in Oliver Stone's hit 1991 Hollywood film JFK.[9] It asserts that a coalition of interests within the US governmental structure, including senior members of the armed forces, FBI, CIA, Secret Service and various Texas law-enforcement agencies, together with the assistance of members of organised crime, conspired to arrange the assassination of President Kennedy and the subsequent framing of an unwitting or entirely innocent Oswald for the crime. Motives for the assassination vary, but most such CTs now agree on such motives as (a) preventing Kennedy after his supposed re-election from reversing US involvement in Vietnam, (b) protecting right-wing industrial interests, especially Texan oil interests, from what were regarded as possible depredations by the Kennedy administration, (c) instigating another and more successful US invasion of Cuba, and (d) halting the judicial assault waged by the Kennedy administration under Attorney General Robert Kennedy against the interests of organised crime.

Such a CT scores highly on Keeley's five characteristic features of Unwarranted Conspiracy Theories outlined above. It runs counter to the official explanation of the assassination, though it has now itself become something of a popular orthodoxy, one apparently subscribed to by a majority of the American population. The alleged intentions behind the conspiracy are indeed nefarious, using the murder of a democratically elected leader to further the interests of a private cabal. And it does seek to tie together seemingly unrelated events. The most obvious of these is in terms of the assassination's alleged motive: it seeks to link the assassination with the subsequent history of America's involvement in Vietnam. But a number of other connections are made at other levels of explanation. For instance, the deaths of various people connected in one way or another with the assassination are linked together as being in some way related to the continuing cover-up by the conspirators.

Keeley's fourth claim, that the truth behind an event being explained by a UCT is typically a well-guarded secret, is, as I pointed out above, much harder to justify now in a climate where most people apparently believe in the existence of such a conspiracy. But Keeley's fifth claim, that the chief tool here is errant data, remains true. The vast body of published evidence on the assassination has been picked over with remarkable care for signs of discrepancy and contradiction, signs which are regarded as providing the strongest evidence for such a conspiracy. What now seems to me an interesting development in these more paranoid UCTs, as I mention above, is the extent to which unerrant data is now regarded as a major feature of such conspiracy theories.
But how do these Kennedy assassination CTs rate against my own list of what I regard as critical thinking weaknesses?

(A) An inability to weigh evidence properly. Here they score highly. Of particular importance is the inability to judge the reliability or lack thereof of eyewitness testimony, and an unwillingness or inability to discard evidence which does not fit. On the first point, most Kennedy CTs place high reliance on the small number of people who claimed at the time (and the somewhat larger number who claim now--see point (B) below) that they heard more than three shots fired in Dealey Plaza, or that they heard shots fired from some other location than the Book Depository, both claims that if true would rule out the possibility of Oswald's acting alone. Since the overwhelming number of witnesses whose opinions have been registered did not hear more than three shots, and tended to locate the origin of these shots in the general direction of the Depository (which, in an acoustically misleading arena like Dealey Plaza, is perhaps the best that could be hoped for), the economical explanation is to assume, unless further evidence arises, that the minority here are mistaken. (A toy calculation following this discussion shows just how heavily such a majority ought to weigh.) Since the assassination was an unexpected, rapid and emotionally laden event--all key features for weakening the reliability of observation, according to the Principles of Appraising Observations in Norris & King (1983)--it is only to be expected that there would be a significant portion of inconsistent testimony. The wonder here is that there is such a high degree of agreement over the basic facts.

We find a similar misuse of observational principles in conspiratorial interpretations of the subsequent murder of Police Officer Tippit, where the majority of witnesses who clearly identified Oswald as the killer are downplayed in favour of the minority of witnesses--some at a considerable distance, and all considerably surprised by the events unfolding in front of them--who gave descriptions of the assailant which did not match Oswald. Experienced police officers are used to eye-witness testimony of sudden and dramatic events varying considerably and, like all researchers faced with a large body of evidence containing discrepancies, must discard some evidence as worthless. Since Oswald was tracked almost continuously from the scene of Tippit's shooting to the site of his own arrest, and since forensic evidence linked the revolver found on Oswald to the shooting, the most economical explanation again is that the majority of witnesses were right in their identification of Oswald and the minority were mistaken.

This problem of being unable to discard errant data is central to the creation of CTs since, as Keeley says:

The role of errant data in UCTs is critical. The typical logic of a UCT goes something like this: begin with errant facts.... The official story all but ignores this data. What can explain the intransigence of the official story tellers in the face of this and other contravening evidence? Could they be so stupid and blind? Of course not; they must be intentionally ignoring it. The best explanation is some kind of conspiracy, an intentional attempt to hide the truth of the matter from the public eye. (Keeley 1999: 199)

Such a view in the Kennedy case ignores the fact that the overwhelming amount of errant data on which CTs have been constructed, far from being hidden, was openly published in the 26 volumes of Warren Commission evidence.
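[The toy calculation promised above. The numbers here are invented for illustration and are not drawn from Goodenough or from the assassination record: suppose each witness independently reports the shots correctly with probability 0.75, and suppose the reports split 40 to 10 in favour of "three shots, from the Depository". On even prior odds, a simple likelihood-ratio calculation shows how decisively the majority ought to dominate under those assumptions:

    # Toy Bayesian weighing of majority vs. minority testimony.
    # Assumptions (invented for illustration): witnesses err independently,
    # each report is correct with probability P_CORRECT, and prior odds
    # between the two hypotheses are even.
    P_CORRECT = 0.75              # assumed per-witness reliability
    MAJORITY, MINORITY = 40, 10   # hypothetical split of witness reports

    # Probability of observing this split if the majority is right vs. wrong:
    if_right = (P_CORRECT ** MAJORITY) * ((1 - P_CORRECT) ** MINORITY)
    if_wrong = ((1 - P_CORRECT) ** MAJORITY) * (P_CORRECT ** MINORITY)

    print("Posterior odds that the majority is right:", if_right / if_wrong)
    # = (0.75 / 0.25) ** (40 - 10) = 3 ** 30, roughly 2 x 10**14 to 1

The fragile assumption is independence: the contamination discussed under (B) below erodes exactly that, which is why the calculation flatters statements gathered on the day far more than recollections aired decades later. But on anything like these numbers, preferring the minority report is not cautious scepticism; it is a reversal of the arithmetic.]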
This has led to accusations that it was `hidden in plain view', but one can't help feeling that a more efficient conspiracy would have suppressed such inconvenient data in the first place. The standard position--that errant data is likely to be false, that eye-witness testimony and memory are sometimes unreliable, that persisting pieces of physical evidence are preferable, in short that Occam's Razor will insist on cutting and throwing away some of the data--is constantly rejected in Kennedy CT literature. Perhaps the most extravagant example of this, amounting almost to a Hegelian synthesis of assassination conspiracy theories, is Lifton (1980). Seeking to reconcile the major body of testimony that Kennedy was shot from behind with a small body of errant data suggesting that he had a wound in the front of his body, the author dedicates over 600 pages to the construction of the most baroque conspiracy theory imaginable. In Lifton's thesis, Kennedy was shot solely from the front, and the conspirators then gained access to his body during its journey back to Washington and were able to doctor it so that at the subsequent post mortem examination it showed signs of being shot only from the rear. Thus the official medical finding that Kennedy was only shot from the rear can be reconciled with the general CT belief that he was shot from the front (too) in a theory that seems to show that everybody is right. Apart from the massive complication of such a plan--clearly going against my point (F)--and its medical implausibility, such a thesis actually reverses Occam's Razor by creating more errant data than there was to start with. For if Kennedy was shot only from the front, we now need some explanation for why the great majority of over 400 witnesses at the scene believed that the shots were coming from behind him! And this challenge is one that is ducked by the great majority of CTs: if minority errant data is to be preferred as reliable, then we require some explanation for the presence of the majority data now being rejected. But Lifton at least got one thing right. In accounting for the title of his book he writes:

The "best evidence" concept, impressed on all law students, is that when you seek to determine a fact from conflicting data, you must arrange the data according to a hierarchy of reliability. All data are not equal. Some evidence (e.g. physical evidence, or a scientific report) is more inherently error-free, and hence more reliable, than other evidence (e.g. an eye-witness account). The "best" evidence rules the conclusion, whatever volume of contrary evidence there may be in the lower categories.[10]

Unfortunately Lifton takes this to mean that conspirators who were able to decide the nature of the autopsy evidence would thereby lay down a standard for judging or rejecting as incompatible the accompanying eye-witness testimony. But given the high degree of unanimity among eye-witnesses on this occasion, and given the existence of corroborating physical evidence (a rifle and cartridges forensically linked to the assassination were found in the Depository behind Kennedy, the registered owner of the rifle was a Depository employee, etc.), all that the alleged body-tampering could hope to achieve is to make the overall body of evidence more suspicious because more contradictory. Only if the body of reliable evidence was more or less balanced between a conspiratorial and non-conspiratorial explanation could this difficulty be avoided.
But it is surely over-estimating the powers, predictive and practical, of such a conspiracy to suppose that they could hope to guarantee this situation beforehand.

(B) An inability to assess evidence corruption and contamination. Though, as I note above, such contamination of eye-witness testimony may occur contemporaneously, it is a particular problem for the more long-standing CTs. In the Kennedy case, many witnesses of the assassination who at the time gave accounts broadly consistent with the official explanation have subsequently amended or extended their accounts to include material that isn't so consistent. Witnesses, for instance, who at the time located all the shots as coming from the Book Depository subsequently gave accounts in which they located shots from other directions, most notably the notorious `grassy knoll', or later told of activity on the knoll which they never mentioned in their original statements. (Posner (1993) charts a number of these changes in testimony.) What is interesting about many of these accounts is that mundane explanations for these changes--I later remembered that..., I forgot to mention that...--tend to be eschewed in favour of more conspiratorial explanations. Such witnesses may deny that the signed statements made at the time accurately reflect what they told the authorities, or may say that the person interviewing them wasn't interested in writing down anything that didn't cohere with the official explanation of the assassination, and so on. Such explanations face serious difficulties. For one thing, since many of these statements were taken on the day of the assassination or very shortly afterwards, it would have to be assumed that putative conspirators already knew which facts would cohere with an official explanation and which wouldn't, which may imply an implausible degree of foreknowledge. A more serious problem is that these statements were taken by low-level members of the various investigatory bodies--police, FBI, Secret Service, etc.; to assert that such statements were manipulated by these people entails that they were members of the conspiracy. And this runs up against a practical problem for mounting conspiracies: the more people who are in a conspiracy, the harder it is going to be to enforce security. A more plausible explanation for these changes in testimony might be that witnesses who provided testimony broadly supportive of the official non-conspiratorial explanation subsequently came into contact with some of the enormous quantity of media coverage suggesting less orthodox explanations and, consciously or unconsciously, have adjusted their recollections accordingly. The likelihood of such things happening after a sufficiently thorough exposure to alternative explanations may underlie Norris & King's principle II.1:

An observation statement tends to be believable to the extent that the observer was not exposed, after the event, to further information relevant to describing it. (If the observer was exposed to such information, the statement is believable to the extent that the exposure took place close to the time of the event described.)[11]

Their parenthesised time principle clearly renders a good deal of more recent Kennedy eye-witness testimony dubious after three and a half decades of exposure to vast amounts of further information in the mass media, not helped by `assassination conferences' where eye-witnesses have met and spoken with each other.
One outcome of these two points is that, in the unlikely event of some living person being seriously suspected of involvement in the assassination, a criminal trial would be rendered difficult if not impossible. Such are the published discrepancies now within and between witnesses' testimonies that there would be enormous difficulties in attempting to construct a plausibly consistent defence or prosecution narrative on their basis.

(C) Misuse or outright reversal of a principle of charity. Where an event may have either a suspicious or an innocent explanation, and there is no significant evidence to decide between them, CTs invariably opt for the suspicious explanation. In part this is due to a feature deriving from Keeley's point (3) above, about CTs seeking to tie together seemingly unrelated events, but perhaps taken to a new level. Major CTs seek a maximally explanatory hypothesis, one which accounts for all of the events within its domain, and so they leave no room for the out-of-the-ordinary event, the unlikely, the accident, which has no connection whatsoever with the conspiratorial events being hypothesised. The various Kennedy conspiracy narratives contain a large number of these events, dragooned into action on the assumption that no odd event could have an innocent explanation. There is no better example of this than the Umbrella Man, a character whose forcible inclusion in conspiratorial explanations demonstrates well how a determined attempt to maintain this reversed principle of charity may lead to the most remarkable deformities of rational explanation. When pictorial coverage of the assassination entered the public domain, in newspaper photographs within the next few days, and more prominently in stills from the Zapruder film of the events subsequently published in LIFE magazine, it became clear that one of the closest bystanders to the presidential limousine was a man holding a raised umbrella, and this at a time when it was clearly not raining. This odd figure rapidly became the focus of a number of conspiratorial hypotheses. Perhaps the most extreme of these originates with Robert Cutler (1975). According to Cutler, the Umbrella Man had a weapon concealed within the umbrella, enabling him to fire a dart or flechette, perhaps drugged, into the president's neck, possibly for the purpose of immobilising him while the other assassins did their work. The only actual evidence to support this hypothesis is that the front of Kennedy's neck did indeed possess a small punctate wound, described by the medical team treating him as probably a wound of entrance but clearly explainable in the light of the full body of forensic evidence as a wound of exit for a bullet fired from above and behind the presidential motorcade. Consistent, in other words, with being the work of Oswald. There is no other supportive evidence for Cutler's hypothesis. (Cutler, of course, explains this in terms of the conspirators being able to control the subsequent autopsy and so conceal any awkward evidence; he thus complies with my principle (G) below.) More importantly, it seems inherently unlikely on other grounds. Since the Umbrella Man was standing on the public sidewalk, right next to a number of ordinary members of the public and in plain view of hundreds of witnesses, many of whom would have been looking at him precisely because he was so close to the president, it seems unlikely that a conspiracy could guarantee that he could get away with his lethal behaviour without being noticed by someone.
And the proposed explanation for all this rigmarole, the stunning of the target, is entirely unnecessary: most firearms experts agree that the president was a pretty easy target unstunned. If Cutler's explanation hasn't found general favour with the conspiracy community, another has, but this too has equally strange effects upon clear reasoning. The first version of this theory has the Umbrella Man signalling the presence of the target--movie-film of the assassination clearly shows that the raised umbrella is being waved or shaken. This hypothesis seems to indicate that the conspiracy had hired assassins who couldn't be relied upon to recognise the President of the United States when they saw him seated in his presidential limousine--the one with the president's flag on--next to the most recognisable first lady in American history. An apparently more plausible hypothesis is that it is the Umbrella Man who gives the signal for the team of assassins to open fire. (A version of this hypothesis can still be seen as late as 1992 in the movie JFK.) What I find remarkable here is that nobody seems to have thought this theory through at all. Firstly, the Umbrella Man is clearly on the sidewalk a few feet from the president while our alleged assassins are located high up in the Book Depository, in neighbouring buildings, or on top of the grassy knoll way to the front of the president. How, then, can he know what they can see from their different positions? How can he tell from his location that they now have clear shots at the target? (Dealey Plaza is full of trees, road signs and other obstructions, not to mention large numbers of police officers and members of the public who might be expected to get in the way of a clear view here.) And secondly, such an explanation actually weakens the efficiency of the alleged assassination conspiracy. (Here my limited boyhood experience of firing an air-rifle with telescopic sights finally comes in handy!) In order to make sense of the Umbrella Man as signaller, something like the following sequence of events must occur. Each rifleman focuses upon the presidential target through his telescopic sight, tracking the target as it moves at some ten to twelve miles per hour. Given the very narrow field of view of such sights, he cannot see the Umbrella Man. To witness the signal, he must keep taking his eye away from the telescopic sight, refocusing it until he can see the distant figure on the sidewalk, and when the signal is given, put his eye back to the sight, refocus again, readjust the position of the rifle since the target has continued to move while he was not looking at it, and then fire. This is not an efficient recipe for accurate target-shooting. Oliver Stone eliminates some of these problems in the version he depicts in the movie JFK. Here each of his three snipers is accompanied by a spotter, equipped with walkie-talkie and binoculars. While the sniper focuses on the target, the spotter looks out for the signal from the Umbrella Man and then orally communicates the order to open fire. But now, given what I have already said about the problem with the Umbrella Man's location, it is hard to see what purpose he serves that could not be better served by the spotters. He drops out of the equation. He is, as Wittgenstein says somewhere, a wheel that spins freely because it is not connected to the rest of the machinery. Occam's Razor would cut him from the picture, but Occam is no firm favourite of UCT proponents.
In 1978, when the House Select Committee on Assassinations held public hearings on the Kennedy case, a Mr. Louie Steven Witt came forward to confess to being the Umbrella Man. He claimed that he came to Dealey Plaza in order to barrack the president as he went past, and that he was carrying a raised umbrella because he had heard that, perhaps for some obscure reason connected with the president's father's stay in London as US Ambassador during the war, the Kennedy family had a thing about umbrellas. Witt hadn't come forward in the 15 years since the assassination because he had had no idea of the role proposed for the Umbrella Man in the case. This part of his explanation seems to me to be eminently plausible: those of us with an obsessive interest in current affairs find it hard to grasp just how many people never read the papers or watch TV news. There is something almost endearing about Witt, an odd character whose moment of public eccentricity seems to have mired him in decades of conspiratorial hypotheses without his realising it. Needless to say, conspiracy theorists did not accept Witt's testimony at face value. Some argued that he was a stooge put forward by the authorities to head off investigation into the real Umbrella Man, others that Witt himself must be lying to conceal a more sinister role in these events, though I know of no evidence to support either of these conclusions. What this story makes clear is that an unwillingness to abandon discrepant events as unrelated--an unwillingness to abandon this reverse principle of charity whereby all such events are conspiratorial unless clearly proven otherwise--rapidly leads to remarkable mental gymnastics, to hypotheses that are excessively complex and even internally inconsistent. (The Umbrella Man as signaller makes the assassination harder to perform.) But, such are the ways of human psychology, once such an event has been firmly embedded within a sufficiently complex hypothesis, no amount of contradictory evidence would seem to be able to shift it. The Umbrella Man has by now been invested with such importance as to become one of the great myths of the assassination, against which mere evidentiary matters can have no effect.

(D) The demonisation of persons and organisations. This weakness takes a number of forms in the Kennedy case, which I shall treat separately.

(i) Guilt by reputation. The move from the fact that some body--the FBI, the CIA, the mafia, the KGB--has a proven record of wrong-doing in the past to the claim that they were capable of wrong-doing in the present case doesn't seem unreasonable. But the stronger claim that past wrong-doing is in some sense evidence for present guilt is much more problematic, particularly when differences between the situations are overlooked. This is especially true of the role of the CIA in Kennedy CTs. Senator Church's 1976 congressional investigation into the activities of US intelligence agencies provided clear evidence that in the period 1960-63 elements of the CIA, probably under the instructions of or at least with the knowledge of the White House, had conspired with Cuban exiles and members of organised crime to attempt the assassination of Cuban leader Fidel Castro. Evidence also emerged of CIA involvement in the deaths of other foreign leaders--Trujillo in the Dominican Republic, Lumumba in the Congo, etc.
These findings were incorporated in Kennedy CTs as evidence to support the probability that the CIA, or at least certain members of it, were also responsible for the death of Kennedy. Once an assassin, always an assassin? Such an argument neglects the fact that the CIA could reasonably believe that they were acting in US interests, possibly lawfully since they were acting under the guidance or instruction of the White House. This belief is not open to them in the case of killing their own president, a manifestly unlawful act and one hard to square with forwarding US interests. (Evidence that Soldier X willingly shoots at the soldiers of other countries when ordered to do so is no evidence that he would shoot at soldiers of his own country, with or without orders. The situations are plainly different.) At best the Church Committee evidence indicated that the CIA had the capacity to organise assassinations, not that it had either the willingness or the reason to assassinate its own leader.

(ii) Guilt by association. This takes the form of impeaching the credibility of any member of a guilty organisation. Since both the FBI and the CIA (not to mention, of course, the KGB or the mafia) had proven track records of serious misbehaviour in this period, it is assumed that all members of these organisations, and all their activities, are equally guilty. Thus the testimony of an FBI agent can be impeached solely on the grounds that he is an FBI agent, and any activity of the CIA can be characterised as nefarious solely because it is being carried out by the CIA. Such a position ignores the fact that such organisations have many thousands of employees and carry out a wide range of mundane duties. It is perfectly possible for a member of such an organisation to be an honest and patriotic citizen whose testimony is as believable as anyone else's. Indeed, given my previous point that for security reasons the smaller the conspiratorial team the more likely it is to be successful, it would seem likely that the great majority of members of such organisations would be innocent of any involvement in such a plot. (I would hazard a guess that the same holds true of the KGB and the mafia, both organisations with a strong interest in security.)

(iii) Exaggerating the power and nature of organisations. Repeatedly in such CTs we find the assumption that organisations like the CIA or the mafia are all-powerful, all-pervasive, capable of extraordinary foreknowledge and planning.[12] This assumption has difficulty in explaining the many recorded instances of inefficiency or lack of knowledge that these organisations constantly demonstrate. (There is a remarkable belief in conspiratorial circles, combining political and paranormal conspiracies, that the CIA has or had access to a circle of so-called `remote viewers', people with extra-sensory powers who were able through paranormal means to provide them with information about the activities of America's enemies that couldn't be discovered in any other way. Such a belief has trouble accommodating the fact that the CIA was woefully unprepared for the sudden break-up of the Soviet Union and Warsaw Pact, or the fact that America's intelligence organisations first learned of the start of the Gulf War when Kuwaiti embassy employees looked out of the window and saw Iraqi tanks coming down the road!
Sadly, it appears to be true that people calling themselves remote viewers took very substantial fees from the CIA, though whether this tells us more about the gullibility of people in paranoid institutions or their carefree attitude towards spending public money I should not care to say.) The more extreme conspiracy theories may argue that such organisations are only pretending to be inefficient, in order to fool the public about the true level of their efficiency. Such a position is, as Popper would no doubt have pointed out, not open to refutation.

(iv) Demonising individuals. As with organisations, so with people. Once plausible candidates for roles in an assassination conspiracy are identified, they are granted remarkable powers and properties, their wickedness clearly magnified. In Kennedy CTs there is no better example of this than Meyer Lansky, the mafia's `financial wizard'. Lansky was a close associate of America's premier gangster of the 1940s, Charles `Lucky' Luciano. Not actually a gangster himself (and, technically, not a member of the mafia either, since Lansky--as a Jew--could not join an exclusively Sicilian brotherhood), Lansky acted as a financial adviser. He organised gambling activities for Luciano and probably played a significant role in the mafia involvement in the development of Las Vegas, and in subsequent investments of the Luciano family's money, including those in pre-revolutionary Cuba, after Luciano's deportation to Sicily. So much is agreed. But Lansky in CT writing looms ever larger, as a man of remarkable power and influence, ever ready to use it for malign purposes, a vast and evil spider at the centre of an enormous international web, maintaining his influence with the aid of the huge sums of money which organised crime was reaping from its empire.[13] Thus there is no nefarious deed concerning the assassination or its cover-up with which Lansky cannot be linked. This picture wasn't dented in the least by Robert Lacey's detailed biography of Lansky published in 1991. Lacey, drawing upon a considerable body of publicly available evidence--not least the substantial body generated by Lansky's lawsuit to enable him, as a Jew, to emigrate to Israel--was able to show that Lansky, far from being the mob's eminence grise, was little more than a superannuated book-keeper. The arch manipulator, supposedly empowered by the mafia's millions, led a seedy retirement in poverty and was on record as being unable to afford healthcare for his relatives. The effect of reading Lacey's substantially documented biography is rather like that scene in `The Wizard of Oz' when the curtain is drawn back and the all-powerful wizard is revealed to be a very ordinary little man. The 1990s saw the publication of a remarkable amount of material about the workings of American organised crime, much of it gleaned from FBI and police surveillance during the successful campaign to imprison most of its leaders. This material reveals that mafia bosses tend to be characterised by a very limited vocabulary, a remarkable propensity for brutality and a considerable professional cunning often mixed with truly breath-taking stupidity. That they could organise a large-scale assassination conspiracy, and keep quiet about it for more than thirty-five years, seems even less likely. As I point out below, they would almost certainly not have wanted to.

(E) The canonisation of persons or (more rarely) organisations. In the Kennedy case, this has taken the form of idealising the President himself.
In order to make a conspiratorial hypothesis look more plausible under (F) below, it is necessary to make the victim look as much as possible like a significant threat to the interests of the putative conspirators. In this case, Kennedy is depicted as a liberal politician, one who was a threat to established economic interests, one who took a lead in the contemporary campaign to end institutionalised discrimination against black people, and, perhaps most importantly, one who was or became something of a foreign policy dove, supporting less confrontational policies in the Cold War to the extent of being prepared to terminate US involvement in South Vietnam. This canonisation initially derives from the period immediately after the assassination, a period marked by the emergence of a number of works about the Kennedy administration from White House insiders like Theodore Sorensen, Pierre Salinger and the Camelot house historian, Arthur Schlesinger, works which tended to confirm the idealisation of the recently dead president, particularly when implicitly compared with the difficulties faced by the increasingly unpopular Lyndon Johnson. From the mid-1970s Kennedy's personal character came under considerable criticism, partly resulting from the publication of biographies covering his marriage and sexual life, and the personal lives of the Kennedy family. More important, for our purposes, was the stream of revelations which emerged from the congressional investigations of this time indicating the depth of feeling in the Kennedy White House about Cuba; most important here were the Church Committee's revelations that the CIA had conspired with members of organised crime to bring about the assassination of Fidel Castro. These, coming hard on the heels of the revelations of various criminal conspiracies within the Nixon White House, stoked up the production of CTs. (And provided a new motivation for the Kennedy assassination: that Castro or his sympathisers had found out about these attempts and had Kennedy killed in revenge.) But they also indicated that the Kennedy brothers were much harder cold warriors than had perhaps previously been thought. The changing climate of the 1980s brought a new range of biographies and memoirs--Reeves, Parmet, Wofford, etc.--which situated Kennedy more firmly in the political mainstream. It became clear that he was not by any means an economic or social liberal--on the question of racial segregation he had to be pushed hard, since he tended to regard the activities of Martin Luther King and others as obstructing his more important social policies. And Kennedy adopted a much more orthodox stance on the cold war than many had allowed: this was, after all, the candidate who got himself elected in 1960 by managing in the famous `missile gap' affair to appear tougher on communism than Richard Nixon, no mean feat. Famously, Kennedy adopted a more moderate policy during the Cuban missile crisis than some of those recommended by his military advisers, but this can be explained more in terms of Kennedy having a better grasp of the pragmatics of the situation than in terms of his being a foreign policy liberal of some sort.
This changing characterisation of Kennedy, this firm re-situating of his administration within the central mainstream of American politics--a mainstream which appears considerably to the right in European terms--has been broadly rejected by proponents of Kennedy assassination CTs (some of whom also reject the critical characterisation of his personal life). The reason for this is that it plainly undercuts any motivation for some part of the American political establishment to have Kennedy removed. It is unlikely that any of Kennedy's reforming policies, economic or social, could seriously have been considered such a threat to establishment interests. It is even more unlikely when one considers that much of Kennedy's legislative programme was seriously bogged down in Congress and was unlikely to be passed in anything but a heavily watered-down form during his term. Much of this legislation was forced through after the assassination by Kennedy's successor, Lyndon Johnson, a much more astute and experienced parliamentarian. The price for this social reform, though, was Johnson's continued adherence to the verities of cold war foreign policy over Vietnam. I leave consideration of Kennedy's Vietnam policy to the next section.

(F) An inability to make rational or proportional means-end judgements. The major problem here for any Kennedy assassination CT is to come up with a motive. Such a motive must not only be of major importance to putative conspirators, it must also rationally justify a risky, expensive--and often astonishingly complicated--illegal conspiracy. Which is to say that such conspirators must see the assassination as the only or best way of bringing about their aim. The alleged motives can be broadly divided into two categories. Firstly, revenge. Kennedy was assassinated in revenge for the humiliation he inflicted upon Premier Khrushchev over the Cuban missile crisis, or for plotting the assassination of Fidel Castro, or for double-crossing organised crime over alleged agreements made during his election campaign. The problem with each of these explanations is that the penalties likely to be suffered if one is detected far outweigh any rational benefits. Had Castro's hand been detected behind the assassination--something which Johnson apparently thought all too likely--this would inevitably have swung American public opinion behind a US military invasion of Cuba and overthrow of Castro's rule. If Khrushchev had been identified as the ultimate source of the assassination, the international crisis would have been even worse, and could well have edged the world considerably closer to nuclear war than it came during the Cuban missile crisis. One can only make sense of such explanations on the basis of an assumption that the key conspirators are seriously irrational in this respect, and this is an assumption that we should not make without some clear evidence to support it. The second category of explanations for the assassination is instrumental: Kennedy was assassinated in order to further some specific policy or to prevent him from furthering some policy which the conspirators found anathema. Here candidates include: to protect Texas oil-barons' economic interests, to frustrate the Kennedy administration's judicial assault upon organised crime, to bring about a more anti-Castro presidency, and--the one that plays the strongest role in contemporary Kennedy CTs such as Oliver Stone's--to prevent an American withdrawal from Vietnam.
A proper response to the suggestion of any of these as a rational motive for the assassination should be to embark upon a brief cost-benefit analysis. We have to factor in not only the actual costs of organising such a conspiracy (and, in the case of the more extreme Kennedy CTs, of maintaining it for several decades afterwards to engage in what has been by any standards a pretty inefficient cover-up) but also the potential costs to be faced if the conspiracy is discovered, the assassination fails, etc. Criminals by and large tend to be rather poor at estimating their chances of being caught; murder and armed robbery have very high clear-up rates compared to, say, burglary of unoccupied premises. The continued existence of professional armed robbers would seem to indicate that they underestimate their chances of being caught or don't fully appreciate the comparative benefits of other lines of criminal activity. But though assassination conspirators are by definition criminals, we are to assume here that they are figures in the establishment, professional men in the intelligence, military and political communities, and so likely to be more rational in their outlook than ordinary street criminals. (Though this is a defeasible assumption, since the post-war history of western intelligence agencies has indicated a degree of internal paranoia sometimes bordering on the insane. A substantial part of British intelligence, for instance, spent almost two decades trying to prove that the then head of MI5 was a Soviet agent, a claim that appears to have no credibility at all.) If we assume that the Mafia played such a role in an assassination conspiracy, it is still plausible to believe that they would consider the risks of failure. In fact, we have some evidence to support this belief since, though organised crime is by and large a very brutal institution, in the US--as opposed to the very different conditions prevailing in Italy--it maintains a policy of not attacking dangerous judges or politicians. When in 1935 the gangster Dutch Schultz proposed murdering Thomas Dewey, then a highly effective anti-crime prosecutor in New York and subsequently the Republican presidential candidate in 1944 and 1948, the response of the mob's leadership was to have Schultz murdered rather than risk the troubles that Dewey's assassination would have brought down upon the heads of organised crime. An even more effective prosecutor, Rudolph Giuliani, remained unscathed throughout his career. Against the risks of being caught, we have to balance the costs of trying to achieve one's goal by some other less dramatic and probably more legal path. The plain fact is that there are a large number of legal and effective ways of changing a president's mind or moderating his behaviour. One can organise public campaigns, plant stories in the press, stimulate critical debate in Congress, assess or manipulate public opinion through polls, etc. When the health care industry in the US wanted to defeat the Clinton administration's reform proposals, for instance, they didn't opt for assassination but went instead for a highly successful campaign to turn Congress and substantial parts of public opinion against the proposals, which soon became dead in the water. On the specific case of American withdrawal from Vietnam, all of the above applies. In the first place, following on from (E) above, it can be plausibly argued that Kennedy had no such intention.
He certainly on occasion floated the idea, sounding out people around him, but this is something that politicians do all the time as part of the process of weighing policy options, and it shouldn't be taken as evidence of a settled intention. But to see Kennedy as seriously considering such an option is to see him as a figure considerably out of the Democratic mainstream. He would certainly have been aware of the effects that an Asian policy can have upon domestic matters; as a young congressman he would have been intimately aware of the effect that the fall of China to communism in 1949 had upon the last Democratic administration, severely weakening Harry Truman's effectiveness. For years afterwards the Democrats were regarded as the people who "lost China" despite the fact that there was nothing they could have done--short of an all-out war, like that occurring in Korea shortly afterwards, which couldn't possibly be won without the use of nuclear weapons and all that entails. Kennedy's administration had a much stronger presence in South Vietnam and it can reasonably be asked whether he would have wanted to run the risk of becoming the president who "lost Vietnam". He would also have been aware of the problem that ultimately faced Lyndon Johnson, that one could only maintain a forceful policy of domestic reform by mollifying Congress over matters of foreign policy. The price for Johnson's Great Society reforms was a continued adherence to a policy of involvement in Vietnam, long after Johnson himself--fully aware of this bind--doubted the wisdom of this policy. Kennedy's domestic reforms were already in legislative difficulties; to believe that he was prepared to withdraw from Vietnam, then, is to believe that he was effectively abandoning his domestic programmes. (That Kennedy was alleged to be considering such an action in his second term, if re-elected, doesn't affect this point. He would still have been a lame-duck president, and would also have weakened the chances of any possible Democratic successor, something that would certainly have been of interest to other members of his party.) It thus appears unlikely that Kennedy would have seriously considered withdrawing completely from Vietnam. But if he had, a number of options were available to opponents of such a policy. Firstly, as noted above, they could have encouraged opposition to such a policy in Congress and other important institutions, and among the American public. There was certainly a strongly sympathetic Republican and conservative Democrat presence in Congress to form the foundations of such an opposition, as well as among newspaper publishers and other media outlets. If Kennedy had underestimated the domestic problems that withdrawal would cause him, such a campaign would concentrate his mind upon them. And secondly, opponents could work to change Kennedy's mind. They could do this by controlling the information available to Kennedy and his advisers. In particular, military sources could manipulate the information flowing from Vietnam itself. (That Kennedy thought something like this was happening may be indicated by his insistence on sending civilian advisers to Vietnam to report back to him personally.) This policy worked well in Johnson's time: information about the trivial events in the Gulf of Tonkin in 1964 was manipulated to indicate a serious crisis, which forced Johnson into inserting a heavy military presence into South Vietnam in response.
There is no reason to believe that such a policy would not have worked if Kennedy had still been in office. At the very least, it would be rational to adopt such a policy first, to try cheap, legal and probably efficient methods of bringing about one's goal before even contemplating such a dramatic, illegal and high-risk activity as assassination. (I omit here any consideration of the point that members of the American establishment might feel a moral revulsion at the idea of taking such action against their own president. Such a claim may well be true, but the argument from rationality does not require it.) At bottom what we face here is what we might term Goodenough's Paradox of Conspiracies: the larger or more powerful an alleged conspiracy, the less need it has for conspiring. A sufficiently large collection of members of the American political, intelligence and military establishment--the kind of conspiracy being alleged by Oliver Stone et al.--wouldn't need to engage in such nefarious activity since they would have the kind of organisation, influence, access to information, etc. that could enable them to achieve their goal efficiently and legally. The inability noted in (F) to make adequate means-end decisions means that UCT proponents fail to grasp the force of this paradox.

(G) Evidence against a UCT is always evidence for. The tendency of modern CTs has been to move from conspiracies which try to keep their nefarious activities secret to more pro-active conspiracies which go to a good deal of trouble to manufacture evidence either that there was a different conspiracy or that there was no conspiracy at all. This is especially true of Kennedy assassination CTs, whose epistemological attitude has changed notably over the years. In the period 1964-76 the central claim of such theories was that the evidence collected by the Warren Commission and made public, when fairly assessed, did not support the official lone assassin hypothesis but indicated the presence of two or more assassins and therefore a conspiracy. Public pressure in the aftermath of Watergate brought about a congressional investigation of the case. In its 1979 report the House Select Committee eventually decided, almost solely on the basis of subsequently discredited acoustic evidence, that there had indeed been a conspiracy. But more importantly, the committee's independent panels of experts re-examined the key evidence, photographic, forensic and ballistic, and decided that it supported the Warren Commission's conclusion. This led to a sea-change in CTs from 1980 onwards. Given the preponderance of independently verified `best evidence' supporting the lone assassin hypothesis, CT proponents began to argue that some or all of this evidence had been faked. This inevitably entailed a much larger conspiracy than had previously been hypothesised, one that not only assassinated the president but was also able to gain access to the evidence of the case afterwards in order to change it, suppress it or manufacture false evidence. They thus fell foul of (F) above. Since such CTs then had to rest their case on much weaker evidence--eye-witness testimony and so on--they tended to fall foul of (A), (B) and (C) as well. One problem with such CTs was that they tended to disagree with one another over which evidence had been faked.
Thus many theorists argued that the photographic and X-ray record of the presidential post mortem had been tampered with to conceal evidence of conspiracy, while Lifton (1980), as we saw, argued that the record was genuine but the body itself had been tampered with. Other theorists, e.g. Fetzer & co., argue that the X-rays indicate a conspiracy while the photographs do not, implying that the photographs have been tampered with. This latter, widespread belief introduces a new contradiction into the case, since it posits a conspiracy of tremendous power and organisation, able to gain access to the most important evidence of the case, yet one which is careless or stupid enough not to make sure that the evidence it leaves behind is fully consistent. (And, of course, it goes against the verdict of the House Committee's independent panel of distinguished forensic scientists and radiographers that the record of the autopsy was genuine, and consistent, both internally and with the hypothesis that Oswald alone was the assassin.) Of particular interest here is the Zapruder movie film of the assassination. Stills from this film were originally published, in the Warren Report and in the press, to support the official lone assassin hypothesis. When a bootleg copy of this film surfaced in the mid-1970s it was taken as significant evidence against the official version, and most CTs since then have relied upon one interpretation or another of this film for support. But now that it is clear, especially since better copies of the film are now available, that the wounds Kennedy suffers in the film do not match those hypothesised by the CT proponents arguing for the falsity of the autopsy evidence, some of these proponents now claim to detect signs that the Zapruder film itself has been faked, and there has been much discussion about the chain of possession of this film in the days immediately after the assassination to see if there is any possibility of its being in the hands of someone who could have tampered with it. What is happening here is that epistemologically these CTs are devouring their own tails. If the evidence that was originally regarded as foundational for proving the existence of a conspiracy is now itself impeached, then this ought to undermine the original conspiracy case. If no single piece of evidence in the case can be relied upon then we have no reason for believing anything at all, and the abyss of total scepticism yawns. Interestingly, there seems to be a complete lack of what I termed above `meta-evidence', that is, actual evidence that any of this evidence has been faked. Reasons for believing in this forgery hypothesis tend to fall into one of three groups. (i) It is claimed that some sign of forgery can be detected in the evidence itself. Since much of this evidence consists of poor quality film and photographs taken at the assassination scene, these have turned into blurred Rorschach tests where just about anything can be seen if one squints long and hard enough. In the case of the autopsy X-rays, claims of apparent fakery tend to be made by people untrained in radiography and the specialised medical skill of reading such X-rays. (ii) Forgery is hypothesised to explain some alleged discrepancy between two pieces of evidence. Thus when differences are alleged to exist between the autopsy photographs and the X-rays, it is alleged that one or the other (or both) have been tampered with.
(iii) Forgery is hypothesised in order to explain away evidence that is clearly inconsistent with the proposed conspiracy hypothesis. An interesting case of the latter involves the so-called `backyard photos', photographs supposedly depicting Oswald standing in the yard of his house and posing with his rifle, pistol and various pieces of left-wing literature. For Oswald himself was confronted with these by police officers after his arrest and claimed then that they had been faked--he had had some employment experience in the photographic trade and professed to know how easily such pictures could be fabricated. And ever since then CT proponents have made the same claims. But one problem with such claims is that evidence seldom exists in a vacuum, but is interconnected with other evidence. Thus we have the sworn testimony of Oswald's wife that she took the photographs, the evidence of independent photographic experts that the pictures were taken with Oswald's camera, documentary evidence in his own handwriting that Oswald ordered the rifle in the photos and was the sole hirer of the PO box to which it was delivered, eyewitness evidence that Oswald possessed such a rifle and that one of these photos had been seen prior to the assassination, and so on. To achieve any kind of consistency with the forgery hypothesis all of this evidence must itself be faked or perjured. Thus the forgery hypothesis inevitably ends up impeaching the credibility of such a range of evidence that a conspiracy of enormous proportions and efficiency is entailed, a conspiracy which runs into the problems raised in (F) above. These problems are so severe that the forgery hypothesis must be untenable without the existence of some credible meta-evidence, some proof that acts of forgery took place. Without such meta-evidence, all we have is an unjustifiable attempt to convert evidence against a conspiracy into evidence for, merely on the grounds that the evidence doesn't fit the proposed CT, which is an example of (A) too.

(H) The fallacy of the spider's web. This form of reasoning has been central to many of the conspiratorial works about the JFK assassination: indeed, Duffy (1988) is entitled The Web! Scott (1977) was perhaps the first full-length work in this tradition. It concentrates on drawing links between Oswald and the people he came into contact with, and the murky worlds of US intelligence, anti-Castro Cuban groups and organised crime, eventually linking in this fashion the world of Dealey Plaza with that of the Watergate building and the various secret activities of the Nixon administration. Such a project is indeed an interesting one, one which enlightens us considerably about the world of what Scott terms `parapolitics'. It is made especially easy by the fact that Oswald in his short life had at least tangential connections with a whole range of suspicious organisations, including the CIA, the KGB, pro- and anti-Castro Cuban groups, the US Communist Party and other leftist organisations, organised crime figures in New Orleans and Texas, and so on. And considerable webs can be drawn outwards, from Oswald's contacts to their contacts, and so on. As I say, such research is intrinsically interesting, but the fallacy occurs when it is used in support of a conspiracy theory. For all that it generates is suspicion, not evidence. That Oswald knew X or Y is evidence only that he might have had an opportunity to conspire with them, and doesn't support the proposition that he did.
The claim is even weaker for people whom Oswald knew only at second or third or fourth hand. And some of these connections are much less impressive than authors claim: that Oswald knew people who ultimately knew Meyer Lansky becomes much less interesting when, as I noted in (D) above, Lansky is seen as a much more minor figure than the almost omnipotent organised crime kingpin he is often depicted as. Ultimately this fallacy depends upon a kind of confusion between quantity and quality, one that seems to believe that a sufficient quantity of suspicion inevitably metamorphoses into something like evidence. There is, as the old saying has it, no smoke without fire, and surely such an inordinate quantity of smoke could only have been produced by a fire of some magnitude. But thirty years of research haven't found much in the way of fire, only more smoke. Some of the more outrageous CTs here have been discredited--inasmuch as such CTs can ever be discredited--and the opening of KGB archives in recent years and access to living KGB personnel has shown that Oswald's contacts with that organisation were almost certainly innocent. Not only is there no evidence that Oswald ever worked for the KGB, but those KGB officers who monitored Oswald closely during his stay in the USSR were almost unanimously of the opinion that he was too unbalanced to be an employee of any intelligence organisation. But a problem with suspicion is that it cannot be easily dispelled. Since web-reasoning never makes clear exactly what the nature of Oswald's relationship with his various contacts was, it is that much harder to establish the claim that they were innocent. Ultimately, this can only be done negatively, by demonstrating the sheer unlikeliness of Oswald being able to conspire with anyone. The ample evidence of the sheer contingency of Oswald's presence in the Book Depository on the day of the assassination argues strongly against his being part of a conspiracy to kill the president. Whether in fact he was a part of some other conspiracy, as some authors have argued, is an interesting question but one not directly relevant to assassination CTs.

(I) The classic logical fallacy of post hoc ergo propter hoc. This applies to all those assassination CTs which seek to establish some motive for Kennedy's death from some alleged events occurring afterwards. The most dramatic of these, as featured in Oliver Stone's film, is the argument from America's disastrous military campaign in Vietnam. US military involvement escalated after Kennedy's death, therefore it happened because of Kennedy's death, therefore Kennedy's death was brought about in order to cause an increased American presence in Vietnam. The frailty of this reasoning is obvious. As I pointed out in (F) above, such a view attributes to the proposed conspirators a significant inability to match ends and means rationally. In addition there is no end to the possible effects that can be proposed here. Ultimately everything that is regarded as immoral about modern America can be traced back to the assassination. As I pointed out in a recent lecture, what motivates this view is:

a desire for a justification of a view of America as essentially a benign and divinely inspired force in the world, a desire held in the face of American sin in Vietnam and elsewhere.
There are plausible explanations for Vietnam and Watergate in terms of the domination of post-war foreign policy by cold-war simplicities, and the growth of executive power at the expense of legislative controls, and so on. They are, for those not interested in political science, dull explanations. Above all, they do not provide the emotional justification of conspiratorial explanations. To view Vietnam as the natural outcome of foreign policy objectives of the cold-war establishment, of a set of attitudes shared by both Republican and Democrat, above all to view it as the express wish of the American people--opinion polls registered majority support for the war until after the Tet disaster in 1968--is ultimately to view Vietnam as the legitimate and rational outcome of the American system at work. A quasi-religious view of America as `the city on the hill', the place where God will work out his purpose for men, cannot afford to entertain these flaws. Hence the appeal of an evil conspiracy on which these sins can be heaped.[14]

Underlying this reasoning, then, is an emotional attachment to a view of America as fundamentally decent combined with a remarkable ignorance about the real nature of politics. All of the features of America's history after 1963 that can be used as a possible motive for the assassination can be equally or better explained in terms of the ordinary workings of US politics. Indeed many of them, including the commitment to Vietnam and the aggressively murderous attitude towards Castro's Cuba, can be traced to Kennedy's White House and earlier. Though CT proponents often proclaim their commitment to realism and a hard-headed attitude towards matters, it seems clear that their reliance upon this kind of reasoning is motivated more by emotion than by facts.

5. Conclusions

The accusation is often made that conspiracy theorists, particularly of the more extreme sort, are crazy, or immature, or ignorant. This response to UCTs may be at least partly true but does not make clear how CT thinking is going astray. What I have tried to show is how various weaknesses in arguing, assessing evidence, etc. interact to produce not just CTs but unwarranted CTs. A conspiratorial explanation can be the most reasonable explanation of a set of facts, but where we can identify the kinds of critical thinking problems I have outlined here, a CT becomes increasingly unwarranted. Apart from these matters logical and epistemological, it seems to me that there is also an interesting psychological component to the generation of UCTs. Human beings possess an innate pattern-seeking mechanism, imposing order and explanation upon the data presented to us. But this mechanism can be too sensitive, and we start to see patterns where there are none, leading to a refusal to recognise the sheer amount of contingency and randomness in the world. Perhaps, as Keeley says, "the problem is a psychological one of not recognizing when to stop searching for hidden causes".[15] Seeing meaning where there is none leads to seeing evidence where there is none: a combination of evidential faults reinforces the view that our original story, our originally perceived pattern, is correct--a pernicious feedback loop which reinforces the belief of the UCT proponent in their own theory.
And here criticism cannot help, for the criticism--and indeed the critic--become part of the pattern, part of the problem, part, indeed, of the conspiracy.[16] Conspiracy theories are valuable, like any other type of theory, for there are indeed conspiracies. We want to find a way to preserve all that is useful in the CT as a way of explaining the world while avoiding the UCT which at worst slides into paranoid nonsense. I agree with Keeley that there can be no exact dotted line along which Occam's Razor can be drawn here. Instead, we require a greater knowledge of the thinking processes which underlie CTs and the way in which they can offend against good standards of critical thinking. There is no way to defeat UCTs; the more entrenched they are, the more resistant to disproof they become. Like some malign virus of thinking, they possess the ability to turn their enemies' powers against them, making any supposedly neutral criticism of the CT itself part of the conspiracy. It is this sheer irrefutability that no doubt irritated Popper so much. If we cannot defeat UCTs through refutation then perhaps the best we can do is inoculate against them by a better development of critical thinking skills. These ought not to be developed in isolation--it is a worrying feature of this field that many otherwise critical thinkers become prone to conspiracy theorising when they move outside of their own speciality--but developed as an essential prerequisite for doing well in any field of intellectual endeavour. Keeley concludes that

there is nothing straightforwardly analytic that allows us to distinguish between good and bad conspiracy theories... The best we can do is track the evaluation of given theories over time and come to some consensus as to when belief in the theory entails more scepticism than we can stomach.[17]

Discovering whether or to what extent a particular CT adheres to reasonable standards of critical thinking practice gives us a better measure of its likely acceptability than mere gastric response, while offering the possibility of being able to educate at least some people against their appeal, as potential consumers or creators of unwarranted conspiracy theories.

BIBLIOGRAPHY

Blakey, G. Robert & Billings, Richard (1981) Fatal Hour - The Plot to Kill the President, N.Y.: Berkley Publishing
Cutler, Robert (1975) The Umbrella Man, Manchester, Mass.: Cutler Designs
Donovan, Robert J. (1964) The Assassins, N.Y.: Harper Books
Duffy, James R. (1988) The Web, Gloucester: Ian Walton Publishing
Eddowes, Michael (1977) The Oswald File, N.Y.: Ace Books
Fetzer, James (ed.) (1997) Assassination Science, Chicago, IL: Open Court Publishing
Fisher, Alec & Scriven, Michael (1997) Critical Thinking - Its Definition and Assessment, Norwich: Centre for Critical Thinking, U.E.A.
Hofstadter, Richard P. (1964) The Paranoid Style in American Politics, London: Jonathan Cape
Hume, David (1748) Enquiry Concerning Human Understanding, ed. by P.H. Nidditch 1975, Oxford: Oxford University Press
Keeley, Brian L. (1999) `Of Conspiracy Theories', Journal of Philosophy 96, 109-26
Lacey, Robert (1991) Little Man, London: Little Brown
Lifton, David (1980) Best Evidence, London: Macmillan; 2nd ed. 1988, N.Y.: Carroll & Graf
Norris, S.P. & King, R. (1983) Test on Appraising Observations, St John's, Newfoundland: Memorial University of Newfoundland
Norris, S.P. & King, R. (1984) `Observational ability: Determining and extending its presence', Informal Logic 6, 3-9
Oglesby, Carl (1976) The Yankee-Cowboy War, 2nd ed. 1977, N.Y.: Berkley Publishing
1977, N.Y.: Berkley Publishing Pigden, Charles (1993) `Popper revisited, or What Is Wrong With Conspiracy Theories?', Philosophy of the Social Sciences 25, 3-34. Popkin, Richard H. (1966) The Second Oswald , London: Sphere Books Popper, Karl (1945) The Open Society and its Enemies, 5th ed. 1966, London, Routledge. Posner, Gerald (1993) Case Closed, N.Y.: Random House Scheim, David E. (1983) Contract On America, Silver Spring, Maryland: Argyle Press Scott, Peter Dale (1977) Crime and Cover-Up, Berkeley, Cal: Westworks Stone, Jim (1991) Conspiracy of One , Fort Worth TX: Summit Group Stone, Oliver & Sklar, Zachary (1992) JFK - The Movie, New York: Applause Books. Thompson, Josiah(1967) Six Seconds in Dallas , 2nd ed. 1976, N.Y.: Berkeley Publishing Wilson, Robert Anton (1989) `Beyond True and False', in Schultz, T. (ed.) The Fringes of Reason, New York: Harmony. ______________________ [1] And this even though professional philosophers may themselves engage in conspiracy theorising! See, for instance, Popkin (1966), Thompson (1966) or Fetzer (1998) for examples of philosophers writing in support of conspiracy theories concerning the JFK assassination. [2] See Donovan 1964 for more on this. [3] Historians, it seems, still disagree about whether or to what extent Princips' group was being manipulated. [4] And the most extreme UCT I know manages to combine this with both ufology and satanism CTs, in David Icke's ultimate paranoid fantasy which explains every significant event of the last two millennia in terms of the sinister activities of historical figures who share the blood-line of reptilian aliens who manipulate us for their purposes, using Jews, freemasons, etc. as their fronts. Those interested in Mr. Icke's more specific allegations (which I omit here at least partly out of a healthy regard for Britain's libel laws) are directed to his website, http://www.davidicke.com/. [5] See Norris & King 1983 & 1984 for full details of and support for these principles. [6] I don't propose to argue for my position here. Interested readers are pointed in the direction of Posner (1994), a thorough if somewhat contentious anti-conspiratorial work whose fame has perhaps eclipsed the less dogmatic but equally anti-conspiratorial Stone (1990). [7] One of the first of which, from the charmingly palindromic Revilo P. Oliver, is cited by Hofstadter. Oliver, a member of the John Birch Society, which had excoriated Kennedy as a tool of the Communists throughout his presidency, asserted that it was international Communism which had murdered Kennedy in order to make way for a more efficient tool! Right-wind theories blaming either Fidel Castro or Nikita Khrushchev continued at least into the 1980s: see, for instance, Eddowes (1977). [8] And probably not possible! The sheer complexity of the assassination CT community and the number of different permutations of alleged assassins has frown enormously, especially over the last twenty years. In particular, the number of avowedly political CTs is hard to determine since they fade into other areas of CT, in particular those dealing with the influence of organised crime and those dealing with an alleged UFO cover-up, not to mention those even more extreme CTs which link the assassination to broader conspiracies of international freemasonry etc.. [9] See not only the movie but also Stone & Sklar (1992), a heavily annotated version of the film's script which also includes a good deal of the published debate about the film, for and against. 
[10] Lifton 1980: 132.

[11] Norris & King (1983), quoted in Fisher & Scriven (1997).

[12] For a remarkable instance of the exaggeration of the power of organised crime in the US and its alleged role in Kennedy's death, see Scheim (1983) or, perhaps more worryingly, Blakey & Billings (1981). I say `more worryingly' because Blakey was Chief Counsel for the congressional investigation into Kennedy's death which reported in 1980, and so presumably is heavily responsible for the direction that investigation took.

[13] This view of Lansky is widespread throughout the Kennedy literature. See, for instance, Peter Dale Scott's short (1977), which goes into Lansky's alleged connections in great detail.

[14] From "(Dis)Solving the Kennedy Assassination", presented to the Conspiracy Culture Conference at King Alfred's College, Winchester, in July 1998.

[15] Keeley 1999: 126.

[16] Anyone who doubts this should try to argue for Oswald as lone assassin on an Internet discussion group! It is not just that one is regarded as wrong or naive or ignorant. One soon becomes accused of sinister motives, of being a witting or unwitting agent of the on-going disinformation exercise to conceal the truth. (I understand that much the same is true of discussions in ufology fora.)

[17] Keeley 1999: 126.

_______________________________________________
paleopsych mailing list
paleopsych at paleopsych.org
http://lists.paleopsych.org/mailman/listinfo/paleopsych

From ljohnson at solution-consulting.com Sun Dec 11 18:09:46 2005
From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.)
Date: Sun, 11 Dec 2005 11:09:46 -0700
Subject: [Paleopsych] NYT Mag: Laptop That Will Save the World, The
In-Reply-To:
References:
Message-ID: <439C6B6A.2090703@solution-consulting.com>

This is truly visionary, a technical breakthrough that can change the world. Thanks for sharing it, Frank. Romney's commitment to buy a half million is likewise visionary, putting power into the hands of the children.

Lynn

Premise Checker wrote:

> Laptop That Will Save the World, The
> http://select.nytimes.com/preview/2005/12/11/magazine/1124989448443.html
>
> [How far does anyone predict that the educational achievement gap will be closed internationally?]
>
> By MICHAEL CROWLEY
>
> Here in America, high-speed wireless Internet has become a commonplace home amenity, and teenagers with Sidekicks can browse the Web on a beach. For many people in developing nations, however, the mere thought of owning a computer remains pure fantasy.
>
> But maybe not for long. This year, Nicholas Negroponte, chairman of the Massachusetts Institute of Technology's Media Lab, unveiled a prototype of a $100 laptop. With millions of dollars in financing from the likes of [3]Rupert Murdoch's News Corporation and Google, Negroponte and his colleagues have designed an extremely durable, compact, no-frills laptop, which they'd like to see in the hands of millions of children worldwide by 2008.
>
> So how can any worthwhile computer cost less than a pair of good headphones? Through a series of cost-cutting tricks. The laptops will run on free "open source" software, use cheaper "flash" memory instead of a hard disk and most likely employ new LCD technology to drop the monitor's cost to just $35. Each laptop will also come with a hand crank, making it usable even in electricity-free rural areas.
>
> Of course, the real computing mother lode is the Internet, to which few developing-world users have access. But the M.I.T.
> laptops will offer wireless peer-to-peer connections that create a local network. As long as there's an Internet signal somewhere in the network area - and making sure that's the case, even in rural areas, poses a mighty challenge - everyone can get online and use a built-in Web browser. Theoretically, even children in a small African village could have "access to more or less all libraries of the world," Negroponte says. (That's probably not very useful to children who can't read or understand foreign languages.) His team is already in talks with several foreign governments, including those of Egypt, Brazil and Thailand, about bulk orders. Gov. Mitt Romney of Massachusetts has also proposed a bill to buy 500,000 of the computers for his state's children.
>
> References
>
> 3. http://topics.nytimes.com/top/reference/timestopics/people/m/rupert_murdoch/index.html?inline=nyt-per
>
> _______________________________________________
> paleopsych mailing list
> paleopsych at paleopsych.org
> http://lists.paleopsych.org/mailman/listinfo/paleopsych

From checker at panix.com Sun Dec 11 22:10:14 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 11 Dec 2005 17:10:14 -0500 (EST)
Subject: [Paleopsych] NYT Mag: The 5th Annual Year in Ideas
Message-ID:

The 5th Annual Year in Ideas
New York Times Magazine, 5.12.11
http://www.nytimes.com/2005/12/11/magazine/11ideas1-1.html

[There are many things here for everyone, esp. transhumanists and sociobiologists. Here's the introduction. Then the list of ideas, then the articles, then the respective URLs. ENJOY! Read as many as you wish and at your leisure.

[I am sending this out to most of those on my address list. I may have sent this to some that really didn't want to get it. If so, I apologize. I may expunge from my address book those who have not communicated with me for some time. If you are on my general list, however, I will never expunge you unless I am specifically asked to do so. I appreciate that I may bring up matters that are so controversial that some of you may prefer just to read my messages and not get yourselves associated with me.

[Some of you may have consigned my messages unread to the garbage bin, but I have no way of telling, since I don't use a feature (and may not be able to use it on my Pine account anyhow) that will let me know when a message has been read. This feature sometimes just silently reports back to the sender. If someone tries this on me in Pine, I can find out about the attempt, but there is no button to press giving my consent. (I found this out when I bounced a message to another address and got a request to inform the sender that I had opened it. I acceded to the request and hope that you, whoever you are--I've forgotten who--are not very confused, as the message was sent many months ago! But I can be extremely tardy in responding. Bug me if you really want a reply to anything whatsoever and haven't gotten one. I do not ignore my critics intentionally. Indeed, I thrive on criticism that goes beyond name-calling.)

[With other software, I would have to open the full raw text and inspect each message I get individually. Other versions of this feature require the consent of the recipient before sending back a message that it has been read, but I think you can read it even if you elect not to tell the recipient you are doing so.

[Okay, so now ENJOY!]

This issue marks the fifth anniversary of what is becoming a venerable tradition at the magazine: The Year in Ideas.
As always, we seek to gain some perspective on what has transpired since January by compiling a digest of the most noteworthy ideas of the past 12 months. Like the biographer Lytton Strachey surveying the Victorian Age, we row out over the great ocean of accomplishment and lower into it a little bucket, which brings up to the light characteristic specimens from the various depths of the intellectual sea - ideas from politics and science, medicine and law, popcorn studies and camel racing. Once we have thrown back all the innovations that don't meet our exacting standards, we find ourselves with the following alphabetical catch: 78 notions, big and small, grand and petty, serious and silly, ingenious and... well, whatever you call it when you tattoo an advertisement on your forehead for money. These are the ideas that, for better and worse, helped make 2005 what it was. You'll find entries that address momentous developments in Iraq ("The Totally Religious, Absolutely Democratic Constitution") as well as less conspicuous, more ghoulish occurrences in Pittsburgh ("Zombie Dogs"). There are ideas that may inspire ("The Laptop That Will Save the World"), that may turn your stomach ("In Vitro Meat"), that may arouse partisan passions ("Republican Elitism") and that may solve age-old mysteries ("Why Popcorn Doesn't Pop"). Some mysteries, of course, still remain. For instance, we do not yet have an entirely satisfying explanation for how Mark Cuban, the outspoken Internet mogul and N.B.A. owner, came to be connected with three of the year's most notable ideas ("Collapsing the Distribution Window," "Scientific Free-Throw Distraction" and "Splogs"). That was just one surprising discovery we made in the course of assembling the issue. In the pages that follow, we're sure you'll make your own.
List of ideas, then the articles, then the respective URLs:

Accredited Bliss
Anti-Paparazzi Flash, The
Anti-Rape Condom, The
Branding Nations
Cartoon Empathy
Celebrity Teeth
Cobblestones are Good for You
Collapsing the Distribution Window
Consensual Interruptions
Conservative Blogs are More Effective
Dialing Under the Influence
Do-It-Yourself Cartography
Dolphin Culture
Econophysics
Embryo Adoption
Ergomorphic Footwear
Fair Employment Mark, The
False-Memory Diet, The
Fleeting Relationship, The
Folksonomy
Forehead Billboards
Gastronomic Reversals
Genetic Theory of Harry Potter, The
Global Savings Glut, The
His-and-Her TV, The
Hollywood-Style Documentary, The
Hypomanic American, The
Fertile Red States
In Vitro Meat
Juvenile Cynics
Laptop That Will Save the World, The
Localized Food Aid
Making Global Warming Work for You
Medical Maggots
Microblindness
Monkey Pay-Per-View
National Smiles
Open-Source Reporting
Parking Meters That Don't Give You a Break
Playoff Paradigm, The
Pleistocene Rewilding
Porn Suffix, The
Preventing Suicide Bombing
Readable Medicine Bottle, The
Republican Elitism
Robot Jockeys
Runaway Alarm Clock, The
Scientific Free-Throw Distraction
Seeing With Your Ears
Self-Fulfilling Trade Rumor, The
Serialized Pop Song, The
Sitcom Loyalty Oath, The
Solar Sailing
Sonic Gunman Locator, The
Splogs
Stash Rocket, The
Stoic Redheads
Stream-of-Consciousness Newspaper, The
Subadolescent Queen Bees
Suburban Loft, The
Synesthetic Cookbook, The
Taxonomy Auctions
"The Crawl" Makes You Stupid
Toothbrush That Sings, The
Totally Religious, Absolutely Democratic Constitution, The
Touch Screens That Touch Back
Trial-Transcript Dramaturgy
Trust Spray
Two-Dimensional Food
Uneavesdroppable Phone Conversation, The
Urine-Powered Battery, The
Video Podcasts
Why Popcorn Doesn't Pop
Worldwide Flat Taxes
Yawn Contagion
Yoo Presidency, The
Zero-Emissions S.U.V., The
Zombie Dogs

Accredited Bliss By CHARLES WILSON

If you think financing a motion picture is difficult, consider for a moment the fund-raising bench mark that the filmmaker David Lynch set this year for his new David Lynch Foundation for Consciousness-Based Education and World Peace: $7 billion. The director of "Mulholland Drive" hopes to finance seven "universities of peace," with endowments of $1 billion each, where students would practice Transcendental Meditation. Developed by Maharishi Mahesh Yogi in the late 1950's, T.M. is a technique whereby individuals repeat a mantra to themselves during two 20-minute sessions per day. Lynch began practicing it 32 years ago as a student. T.M. rid him of his deep anger, he says, and enlivened his creative process. "When you dive within," Lynch says, "you experience an unbounded ocean of bliss consciousness." Lynch says he believes that undergraduates today - 3 of 10 of whom say they suffer from depression or an anxiety disorder - need to find that unbounded ocean even more than he did in 1973. To that end, he has recently offered to help underwrite for-credit "peace studies" classes, which would include T.M. instruction, at a number of universities. Pending approval, American University will offer one of these classes next year.
Researchers there will also begin studying the technique's effects on student grades, I.Q.'s and mental health. Drawing on the work of John Hagelin, a quantum physicist and T.M. practitioner, Lynch harbors broader hopes that the seven universities of peace could enable the square root of 1 percent of the world's population - about 8,000 people (1 percent of roughly 6.5 billion is 65 million, whose square root is a bit over 8,000) - to simultaneously do an advanced version of the T.M. technique called "yogic flying." Lynch and Hagelin say that a mass meditation of this size could have a palliative effect upon the "unified field" of consciousness that connects all human beings and thereby bring about the conditions for world peace.

Anti-Paparazzi Flash, The By ALEXANDRA BERZON

If you have ever felt sorry for celebrities hounded by cameras as they go about their daily business - be that pumping gas or entering a flashy nightclub - you can rest easy. A group of researchers at Georgia Tech has designed what could become an effective celebrity protection device: an instrument that detects the presence of a digital camera's lens and then shoots light directly at the camera when a photographer tries to take a picture. The result? A blurry picture of a beam of light. Try selling that to Us Weekly. The Georgia Tech team was initially inspired by the campus visit of a Hewlett-Packard representative, who spoke about the company's efforts to design cameras that can be turned off by remote control. Gregory Abowd, an associate professor, recalls that after the talk, the team members thought, There's got to be a better way to do that, a way that doesn't require the cooperation of the camera. The key was recognizing that most digital cameras contain a "retroreflective" surface behind the lens; when a light shines on this surface, it sends the light back to its source. The Georgia Tech lab prototype uses a modified video camera to detect the presence of the retroreflector and a projector to shoot out a targeted three-inch beam of light at the offending camera. (A rough sketch of this detect-and-dazzle loop follows below.)

Anti-Rape Condom, The By CHRISTOPHER SHEA

The vagina dentata - a vagina with literal or figurative teeth - is a potent trope in South Asian mythology, urban legend, Freudian rumination and speculative fiction (the novel "Snow Crash," by Neal Stephenson, for example). But it took a step toward reality this August with the unveiling of the Rapex, a female "condom" lined with rows of plastic spikes on its inner surface. The Rapex is the brainchild of Sonette Ehlers, a retired blood technician in South Africa who was moved to act by the country's staggering rape rate, which is among the highest in the world. The device is designed to be inserted any time a woman feels she is in danger of sexual assault. Its spikes are fashioned to end an assault immediately by affixing the Rapex to the assaulter's penis, but also to cause only superficial damage. The Rapex would create physical evidence of the attack as well and, as Ehlers described the likely course of events to reporters at a news conference, send the offender to a hospital, where he would be promptly arrested.

Branding Nations By CLAY RISEN

If the British consultant Simon Anholt had his way, sitting at the cabinet table with the secretary of defense and the attorney general would be a secretary of branding. Indeed, he foresees a day when the most important part of foreign policy isn't defense or trade but image - and when countries would protect and promote their images through coordinated branding departments.
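[A minimal sketch of the detect-and-dazzle loop from the Anti-Paparazzi Flash item above. Everything here is an invented stand-in for the Georgia Tech hardware - the Detector and Projector classes and their methods are assumptions for illustration, not the lab's actual code:]

    import time

    class Detector:
        # Stand-in for the modified video camera. A real detector would
        # compare frames lit and unlit by its own light source and keep
        # the bright spots that bounce light straight back at it
        # (retroreflections from a camera lens).
        def find_retroreflections(self):
            return []   # list of (x, y) positions of suspected lenses

    class Projector:
        # Stand-in for the targeted light source.
        def shine_beam(self, x, y):
            print("dazzling suspected lens at", x, y)

    def antipaparazzi_loop(detector, projector, period=0.05):
        # Whenever a retroreflective lens is spotted, aim a narrow beam
        # at it so any photograph records only glare.
        while True:
            for x, y in detector.find_retroreflections():
                projector.shine_beam(x, y)
            time.sleep(period)   # rescan about 20 times a second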
"I've visited a great many countries where they have ministers for things that are far less important," he says. This year, Anholt, a prolific speaker, adviser to numerous governments and editor of the journal Place Branding, published "Brand America: The Mother of All Brands," in which he predicted that the days when countries will essentially open their own in-house marketing shops are right around the corner. "Governments understand this very well, and most of them are now trying or have tried in the past to achieve some kind of control over their images," Anholt writes. He may be on to something, since governments are quickly realizing that image maintenance isn't just about reeling in tourists - witness Karen Hughes's high-profile public-diplomacy efforts or Tony Blair's Public Diplomacy Strategy Board, an outgrowth of Britain's "Cool Britannia" campaign. Late last year, the Persian Gulf state Oman hired Landor Associates, a brand consulting outfit, to develop and promote "Brand Oman." Cartoon Empathy By JOEL LOVELL For anyone who pays even the slightest attention to cartoons, the scene is familiar: birds flying, bunnies hopping, floppy-hatted Smurfs singing and dancing around a campfire. Then without warning a group of warplanes arrives and starts carpet-bombing. As the Smurfs scatter, their mushroom village goes up in flames. After the last bomb falls, amid the burning rubble and surrounded by dead Smurfs, Baby Smurf sits alone, wailing. The scene comes from a 30-second TV commercial that began being shown on Belgian national television this fall, as part of Unicef's campaign to raise money to help rehabilitate child soldiers in Sudan, Burundi and Congo. The decision to use cartoon characters in the ad, rather than show images of actual children, was calculated not to lessen the horror but to amplify it. "We've found that people have gotten used to seeing traditional images of children in despair, especially from African countries," says Philippe Henon, a spokesman for Unicef Belgium. "Those images are no longer surprising, and most people certainly don't see them as a call to action." Unicef's goal was to convey to adults the horror of war by drawing on their childhood memories, and Smurfs, Henon says, "were the image most Belgians ages 30 to 45 connect to the idea of a happy childhood." The spot has generated a considerable amount of controversy. "People have been shocked," says Henon, who emphasizes that the ad is intended for an adult audience and is shown only after 9 p.m. "But we've received a lot of positive reactions. And this has also been apparent in the donations." Celebrity Teeth By REBECCA SKLOOT Last year, if you walked into your dentist's office saying, "Hey, Doc, can you make my teeth look like Cameron Diaz's or Brad Pitt's?" the answer would have been, "Yeah, sure - with a lot of anesthetic, drilling and permanent reconstruction." But things have changed. Meet the Snap-On Smile - a thin, flexible, resin shell of perfect teeth that snaps over your actual teeth like a retainer. No adhesive, no drilling. Its inventor, Marc Liechtung, is a dentist at Manhattan Dental Arts, where you can walk in on a Monday, make a painless plaster mold of your teeth and then pick up your new smile by Friday. All for $1,200 to $1,600. Patients can work with a "smile guide" to chose one of 17 colors ("yellow-white," "yellow-gray," even "Extreme White Buyer Beware") and 18 shapes ("squared," "square-round," "pointy"). 
But many patients just hand Liechtung a celebrity photo and say, "Make my teeth look like this." So he does. But he wants to make one thing clear: "I did not come up with the Snap-On Smile so people could mimic celebrities." His goal was an affordable, minimally invasive dental tool. "I had patients with almost no teeth who didn't have $20,000 for reconstruction," he says. So this year, after months in the lab, he unveiled Snap-On Smiles. He is licensing them to dentists and has sold more than 300 to his own patients, many of whom have perfectly healthy (and often straight) teeth. People don't ask Liechtung whether the Snap-On causes permanent damage (it doesn't) or whether you can eat with it (you can - even corn on the cob). "No," Liechtung says, "they just want to know: 'Which is the most popular celebrity?' 'What kind of girls get Halle Berry?' 'Who do guys ask for?' "In the beginning, it made me sick. I thought I invented some serious medical device, but all people wanted to do was use it to make themselves look like celebrities!" Eventually he thought, Well, why not? "A person comes in, I say I can give them any teeth they want, who are they going to want to look like? Me? No!" Cobblestones are Good for You By CHRISTOPHER SHEA According to a study published in August in the Journal of the American Geriatrics Society, if you want to be a fitter, more relaxed, more agile retiree, the prescription is to walk on cobblestones. Specially designed paths lined with river stones are a common sight in Chinese parks, and the people who traverse them in bare or stocking feet report that they feel at once soothed and invigorated. Fuzhong Li, a researcher at the Oregon Research Institute, started thinking about the paths during trips to Shanghai, and he and two other researchers resolved to test the walkers' claims. Financed by the National Institute on Aging, they recruited 54 sedentary but healthy men and women, ages 60 to 92, to walk in socks three times a week for 16 weeks on special cobblestone mats. The test subjects were eased into the walking routine - since the stones were uncomfortable for some at first - building up to a half-hour of mat time per session. Meanwhile, a control group of 54 took part in more conventional walking exercises. Collapsing the Distribution Window By CLAY RISEN In February, the film industry as we know it may change forever. That's when "Bubble," a low-budget murder mystery directed by Steven Soderbergh, will appear in theaters - and on cable, and on DVD, all on the same day. The movie is the first in a six-film deal between Soderbergh and 2929 Entertainment, a partnership led by the media moguls Mark Cuban and Todd Wagner, which includes theaters, cable channels and production and distribution companies. While no one expects "Bubble" to break box-office records, even a modicum of success could indicate the arrival of something many in the movie business have anticipated - and feared - for years: universal release. With box-office revenue slumping and DVD sales skyrocketing, it's not surprising that moviemakers are looking for ways to collapse the period of time it takes for a film to make its way from the multiplex to home video - in industry-speak, the "distribution window." The universal-release strategy has a lot of appeal for moviemakers: in addition to taking better advantage of the red-hot home-video sector, it's also more cost-effective - instead of requiring separate marketing efforts for theater and video releases, universal release requires just one. 
Plus, the strategy undercuts film pirates, who sometimes offer knockoff DVD's of films before they even hit the big screen. But not everyone likes the idea. John Fithian, head of the National Association of Theater Owners, has expressed fear that rather than create new revenue streams, the practice will "be a death threat to our industry." And some film purists, like the director M. Night Shyamalan, have said that universal release is also a threat to the traditional moviegoing experience.

Consensual Interruptions By JASCHA HOFFMAN

The problem is all too familiar: You're chatting with a group of people when someone's cellphone goes off, interrupting the conversation. What makes the intrusion irritating isn't so much the call itself - the caller has no way of knowing if he has chosen a good time to cut in. It's that the group as a whole doesn't have any say in the matter. Until now. Stefan Marti, a graduate of the M.I.T. Media Laboratory, who now works for Samsung, has devised a system that silently surveys the members of the group about whether accepting an incoming phone call would be appropriate. Then it permits the call to go through only if the group agrees unanimously - thus creating a more consensual sort of interruption. The system, it must be said, is highly elaborate. It begins with a special electronic-badge or -necklace device that you and everyone else you might be conversing with must wear. Your badge can tell who is in conversation with you by comparing your speech patterns with those of people nearby. (Anyone within a few feet of you who is not talking at the same time you are is assumed to be part of your conversation.) Each badge is also in wireless contact with your cellphone and a special ring that you wear on your finger. When a caller tries to reach you on your cellphone, all the finger rings of the people in your conversation silently vibrate - a sort of pre-ring announcing to the group the caller's intention to butt in. If anyone in the group wants to veto the call, he can do so by simply touching his ring, and the would-be call is redirected to voice mail. If no one opts to veto, the call goes through, the phone rings and the conversation is interrupted. Having solved the problem of when phone calls should interrupt us, Marti is now working on how they should do so. Inspired by the observation that the best interruptions are subtle and nonverbal but still somewhat public, he has designed an animatronic squirrel that perches on your shoulder and screens your calls. Instead of your phone ringing, the squirrel simply wakes and begins to blink. (A toy sketch of the group's veto logic follows below.)

Conservative Blogs are More Effective By MICHAEL CROWLEY

When the liberal activist Matt Stoller was running a blog for the Democrat Jon Corzine's 2005 campaign for governor, he saw the power of the conservative blogosphere firsthand. Shortly before the election, a conservative Web site claimed that politically damaging information about Corzine was about to surface in the media. It didn't. But New Jersey talk-radio shock jocks quoted the online speculation, inflicting public-relations damage on Corzine anyway. To Stoller, it was proof of how conservatives have mastered the art of using blogs as a deadly campaign weapon. That might sound counterintuitive. After all, the Howard Dean campaign showed the power of the liberal blogosphere. And the liberal-activist Web site DailyKos counts hundreds of thousands of visitors each day. But Democrats say there's a key difference between liberals and conservatives online.
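[A toy version of the unanimity rule from the Consensual Interruptions item above. The Badge class and its methods are invented stand-ins for Marti's badge-and-ring hardware, a sketch of the idea rather than his implementation:]

    class Badge:
        # Stand-in for one conversation badge plus its finger ring.
        def start_pre_ring(self):
            pass                  # vibrate the wearer's ring silently
        def vetoed_within(self, seconds):
            return False          # True if the ring was touched in time

    def admit_call(conversation_badges, window_s=3.0):
        # Pre-ring: every ring in the conversation vibrates, announcing
        # the caller's intention to butt in.
        for badge in conversation_badges:
            badge.start_pre_ring()
        # A single veto within the window blocks the interruption.
        if any(b.vetoed_within(window_s) for b in conversation_badges):
            return "voicemail"
        return "ring"             # unanimous consent: let the phone ring

    print(admit_call([Badge(), Badge(), Badge()]))   # -> ring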
Liberals use the Web to air ideas and vent grievances with one another, often ripping into Democratic leaders. (Hillary Clinton, for instance, is routinely vilified on liberal Web sites for supporting the Iraq war.) Conservatives, by contrast, skillfully use the Web to provide maximum benefit for their issues and candidates. They are generally less interested in examining every side of every issue and more focused on eliciting strong emotional responses from their supporters. But what really makes conservatives effective is their pre-existing media infrastructure, composed of local and national talk-radio hosts like Rush Limbaugh, the Fox News Channel and sensationalist say-anything outlets like the Drudge Report - all of which are quick to pass on the latest tidbit from the blogosphere. "One blogger on the Republican side can have a real impact on a race because he can just plug right into the right-wing infrastructure that the Republicans have built," Stoller says.

Dialing Under the Influence By BRENDAN I. KOERNER

The truest words are spoken not in jest but rather after one too many bourbon sours. Liquid courage can turn a normally taciturn individual into a confrontational blabbermouth, eager to tell co-workers or former lovers exactly how he feels about them. The results aren't usually pretty, as has now been immortalized in the popular culture: Paul Giamatti's wine-addled character succumbs to a bout of "drinking and dialing" in the movie "Sideways." Baring one's soul while soused, unfortunately, is easier than ever, because of the proliferation of mobile phones. A BlackBerry's primary function may be to keep you apprised of critical e-mail messages from work, but it is also handy - too handy - for ringing your fiancée at 4 a.m. and confessing what really happened at your bachelor party. Fortunately, Virgin Mobile Australia has a solution: a service called Dialing Under the Influence (D.U.I.). Before heading out for a night of debauchery, a Virgin Mobile customer simply dials 333, then the number of someone who shouldn't be called mid-bender - a boss, a recent breakup, the cute boy who works two cubicles over. The number is then rendered unreachable on that handset until 6 a.m. the next morning, by which time the tongue-loosening effects of the evening's alcohol will presumably have worn off. Kerry Parkin, a Virgin Mobile Australia spokeswoman, admits that the D.U.I. service, which costs about 19 cents per blacklisted number, was initially hatched as a promotional gimmick. What began as a publicity stunt, however, has become a favorite among Virgin Mobile users. The D.U.I. service was used 10,000 times over the past year, including 250 times by one customer in a single month. ("We think that man might have a problem," Parkin says.)

Do-It-Yourself Cartography By PAMELA LICALZI O'CONNELL

The most influential mashup this year wasn't a Beatles tune remixed with hip-hop lyrics. It was an online street map of Chicago overlaid with crime statistics. Chicagocrime.org, which was created by the journalist Adrian Holovaty, was one of the first Web sites to combine publicly available data from one site (in this case, the Chicago Police Department's online database) with a digital map supplied by another site (in this case, Google). This summer, Google released software tools that make this sort of mashup simple to create, even for casual Web users. Thousands of people began to make useful, often elegant, annotated maps.
It turns out that the best way to organize much of the information online is geographically. After Holovaty's crime statistics, real-estate listings and classified ads were among the first forms of information combined with maps. Then came sporting events, movies and gas stations with low prices. Now the social possibilities are being mined, with sites like mapchatter.com, which lets you search for chat partners by locale, and frappr.com, where you can map the physical locations of your online pals and share photos with them. The latest twist is "memory maps," in which you annotate a satellite photo of your hometown with your personal history. (A good example is the blogger Matthew Haughey's evocative project, "My Childhood, Seen by Google Maps.")

Dolphin Culture By AARON RETICA

Sometimes, when a dolphin in Shark Bay, off the coast of Western Australia, prepares to forage, she drops to the sea floor, rips a fat conical chunk of sea sponge out of it, covers her beak with the sponge cone and sets to work. After she finds the fish she wants, she drops the sponge. "Sponging," as the scientists at the Shark Bay Dolphin Research Project call this behavior, is an unusual instance of an animal using another animal as a tool, but that is not what makes the sponging interesting to biologists. It's that dolphins learn to use the sponges - to probe deeply for food while protecting their beaks - from their mothers. In "Cultural Transmission of Tool Use in Bottlenose Dolphins," a paper published this spring in the Proceedings of the National Academy of Sciences, Michael Krützen, Janet Mann and several other researchers argue that they have demonstrated "the first case of an existing material culture in a marine mammal species." Genetic explanations for the behavior, they write, are "extremely unlikely." And there are dolphins in Shark Bay that don't use the sponges but forage in the same deep-water channels as those that do - so sponging can't be only habitat-driven either. The sponging dolphins "see what Mom does and do it," Mann says. One-tenth of the mothers sponge. Many of their offspring have been seen sponging, too, and there is at least one documented case of sponging by a grandmother, a mother and a granddaughter. Nearly all of the mature spongers are female. Quite a few juvenile males try sponging, but they don't keep it up. Krützen and his colleagues speculate that the solitary nature of sponging may be incompatible with the intense social requirements that characterize mature male dolphin life. All but one of the genetically tested sponging dolphins share "recent co-ancestry" with an animal the researchers call a "Sponging Eve," the first dolphin to discover the technique and pass it on. In other words, they're related to one another. "It's a little sponge club," Mann explains. But the genetic markers they share don't seem to correspond to anything that has to do with their ability to sponge.

Econophysics By CHRISTOPHER SHEA

Victor Yakovenko, a physicist at the University of Maryland, happens to think that current patterns of economic inequality are as natural, and unalterable, as the properties of air molecules in your kitchen. He is a self-described "econophysicist." Econophysics, the use of tools from physics to study markets and similar matters, isn't new, but the subfield devoted to analyzing how the economic pie is split acquired new legitimacy in March when the Saha Institute of Nuclear Physics, in Calcutta, held an international conference on wealth distribution.
Econophysicists point out that incomes and wealth behave suspiciously like atoms. In the United States, for example, beneath the 97th percentile (roughly $150,000), the dispersion of income fits a common distribution pattern known as "exponential" distribution. Exponential distribution happens to be the distribution pattern of the energy of atoms in gases that are at thermal equilibrium; it's a pattern that many closed, random systems gravitate toward. As for the wealthiest 3 percent, their incomes follow what's called a "power law": there is a very long tail in the distribution of data. (Consider the huge gap between a lawyer making $200,000 and Bill Gates.) Other developed nations seem to display this two-tiered economic system as well, with the demarcation lines differing only slightly. (A toy simulation of these two regimes appears below, after the False-Memory Diet item.)

Embryo Adoption By SARAH BLUSTAIN

This year, opponents of abortion stepped up their use of a carefully chosen phrase - "embryo adoption" - that describes a couple's decision to have a baby using the embryos of another couple. The less loaded term for embryo adoption is "embryo donation." It typically signifies that a couple who have undergone in vitro fertilization, and have had as many children as they wish to, are releasing their leftover embryos for use by other would-be parents. Of some 400,000 frozen embryos in the country, according to the RAND Corporation, about 9,000 are designated for other families. (Another 11,000 are designated for research, while the balance remain unused in freezers.) Medically, embryo adoption and embryo donation are identical. But to promoters of embryo adoption, which term you use makes all the difference: "We would like for embryos to be recognized as human life and therefore to be adopted as opposed to treated as property," explains Kathryn Deiters, director of development at the Nightlight Christian Adoptions agency, in California, which has been offering embryo adoptions since the late 1990's. Nightlight also favors the term "snowflakes." As the agency's executive director, Ron Stoddart, told The Washington Times: "Like snowflakes, these embryos are unique, they're fragile and, of course, they're frozen... It's a perfect analogy." In May, President Bush delighted the Nightlight agency when he met with some of its young success stories, who wore "Former Embryo" stickers on their chests. He used the occasion to stress his opposition to legislation supporting wider stem-cell research with embryos.

Ergomorphic Footwear By JAMES GLAVE

Most people expect to break in a new pair of shoes by wearing them for a few weeks until the material softens and stretches to fit. But the shoe designer Martin Keen has a better idea. In April, Keen will launch Mion Footwear, a line of mass-produced shoes, designed for sailing and water sports, that promise rapid custom-grade cushioning. Like a memory-foam mattress, Mion insoles compress after about 10 hours of normal wear to assume the unique contours of the owner's foot - right down to your inward-curling pinkie toe. The transformation is permanent, and in the end, the shoes fit no one else, hence their name, which is pronounced "my own."

Fair Employment Mark, The By CHRISTOPHER SHEA

For a decade now, Congress has declined to pass the Employment Nondiscrimination Act (ENDA), which would make it illegal for companies to fire or demote on the basis of sexual orientation. And yet some of the nation's biggest companies, including AT&T, I.B.M. and General Mills, say they'd be happy to abide by the legislation.
The Yale Law School professor Ian Ayres and his wife, the Quinnipiac University School of Law professor Jennifer Gerarda Brown, wonder: Why wait for Congress to pass a law when you can, in effect, do it yourself? In their book "Straightforward: How to Mobilize Heterosexual Support for Gay Rights," Ayres and Brown present a plan for partly enacting ENDA without Congress's help. Their Fair Employment mark is a seal of approval - think of the Orthodox Union's imprimatur that a product is kosher - mated to a novel legal scheme that would effectively privatize this area of antidiscrimination law. Under the plan, companies can acquire a license committing them to abide by a recent version of ENDA (specifically, one introduced by Senator Edward Kennedy in 2003) and to open themselves to lawsuits by employees or job applicants if they violate it. In return, the companies can display a mark on their products advertising their commitment to nondiscrimination. The mark itself, a simple "FE" (not unlike the Underwriters Laboratories' "UL," which signals that an electronic product has passed safety tests), is intentionally prosaic - designed not to inflame the minority of consumers who might boycott a company that protected homosexuals, while potentially appealing to the more than 80 percent of consumers who oppose workplace discrimination against gays. False-Memory Diet, The By JOHN GLASSIE According to the results of a study released in August, it is possible to convince people that they don't like certain fattening foods - by giving them false memories of experiences in which those foods made them sick. The research was conducted by a team including Elizabeth Loftus, a psychologist at the University of California, Irvine, who is known for her previous work showing the malleability of human memory and calling into question the reliability of recovered memories in sexual-abuse cases. She turned her attention to food as a way to see if implanted memories could influence actual behavior. After initial experiments, in which subjects were persuaded that they became ill after eating hard-boiled eggs and dill pickles as children, the researchers moved on to greater challenges. In the next study, up to 40 percent of participants came to believe a similarly false suggestion about strawberry ice cream - and claimed that they were now less inclined to eat it. The process of implanting false memories is relatively simple. In essence, according to the paper that Loftus's team published in Proceedings of the National Academy of Sciences, subjects are plied with "misinformation" about their food histories. But a number of obstacles remain before members of the general population can use this technique to stay thin. Attempts to implant bad memories about potato chips and chocolate-chip cookies, for instance, failed. "When you have so many recent, frequent and positive experiences with a food," Loftus explains, "one negative thought is not enough to overcome them." More work is needed to determine if the false-memory effect is lasting and if it is strong enough to withstand the presence of an actual bowl of ice cream. It's also not clear, at this point, how people could choose to undergo the process without thereby becoming less vulnerable to this kind of suggestion. 
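[A toy simulation of the two-tier income pattern described in the Econophysics item above: an exponential bulk below the crossover and a power-law tail above it. The temperature and exponent here are made-up placeholders for illustration, not Yakovenko's fitted values:]

    import random

    T = 40_000      # "temperature" of the exponential bulk (placeholder)
    ALPHA = 2.0     # Pareto exponent for the top tail (placeholder)

    def sample_income():
        # Bottom ~97 percent: exponential, like the energies of gas
        # molecules at thermal equilibrium. Top ~3 percent: a power law,
        # producing the very long tail between a well-paid lawyer and
        # Bill Gates.
        if random.random() < 0.97:
            return random.expovariate(1 / T)
        return 150_000 * random.paretovariate(ALPHA)

    incomes = sorted(sample_income() for _ in range(100_000))
    print("median:", round(incomes[50_000]))
    print("99.99th percentile:", round(incomes[-10]))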
Fleeting Relationship, The By VANESSA GREGORY

Americans put a premium on sustaining intimate relationships, but could it be that they gain as much emotional sustenance from the relative strangers they meet on commuter trains, in the stands at softball games and even at strip clubs? In "Together Alone: Personal Relationships in Public Places," the sociologists Calvin Morrill and David Snow of the University of California, Irvine, along with Cindy White, a professor of communication at the University of Colorado, present a collection of essays stressing the importance of the interactions that occur in public spaces, like bars and gyms. "Fleeting relationships," Morrill explains, are brief interactions that nonetheless are "colored by emotional dependence and intimacy." Morrill's researchers visited strip clubs and found that customers paid not just for the eroticism but also for the sense of connection they felt with the dancers. "You can tell a dancer who really cares about the people she dances with," one customer said. In another chapter, a singles dance designed to foster serious romance was instead used by regulars as an enjoyable, safe, commitment-free place to socialize with strangers and then head home - alone. The researchers also emphasized the value of "anchored relationships," which are more enduring than fleeting ones, but fixed to a single location. One "Together Alone" contributor, Allison Munch, studied anchored relationships among amateur-softball-league fans. Munch's spectators rarely saw one another outside of the stands, but they formed a "floating community," trading intimate details about their marriages, watching one another's children and sharing food and clothing. One fan said that the stands had become their "back porch."

Folksonomy By DANIEL H. PINK

In 1876, Melvil Dewey devised an elegant method for categorizing the world's books. The Dewey Decimal System divides books into 10 broad subject areas and several hundred sub-areas and then assigns each volume a precise number - for example, 332.6328 for Jim Rogers's investment guide, "Hot Commodities." But on the Internet, a new approach to categorization is emerging. Thomas Vander Wal, an information architect and Internet developer, has dubbed it folksonomy - a people's taxonomy. A folksonomy begins with tagging. On the Web site Flickr, for example, users post their photos and label them with descriptive words. You might tag the picture of your cat, "cat," "Sparky" and "living room." Then you'll be able to retrieve that photo when you're searching for the cute shot of Sparky lounging on the couch. If you open your photos and tags to others, as many Flickr devotees do, other people can examine and label your photos. A furniture aficionado might add the tag "Mitchell Gold sofa," which means that he and others looking for images of this particular kind of couch could find your photo. "People aren't really categorizing information," Vander Wal says. "They're throwing words out there for their own use." But the cumulative force of all the individual tags can produce a bottom-up, self-organized system for classifying mountains of digital material. Grass-roots categorization, by its very nature, is idiosyncratic rather than systematic. That sacrifices taxonomic perfection but lowers the barrier to entry. Nobody needs a degree in library science to participate.
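[Mechanically, the tagging the Folksonomy item describes boils down to an inverted index from labels to items. A minimal sketch of that idea, with invented photo IDs; this is not Flickr's actual code:]

    from collections import defaultdict

    tag_index = defaultdict(set)        # tag -> set of photo ids

    def tag_photo(photo_id, *tags):
        # Anyone can throw words at an item; each word becomes a key.
        for t in tags:
            tag_index[t.lower()].add(photo_id)

    def find(*tags):
        # Photos carrying every requested tag: intersect the tag sets.
        sets = [tag_index[t.lower()] for t in tags]
        return set.intersection(*sets) if sets else set()

    tag_photo("img_042", "cat", "Sparky", "living room")
    tag_photo("img_043", "cat", "Mitchell Gold sofa")
    print(find("cat", "sparky"))        # -> {'img_042'}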
Forehead Billboards By JEFF STRYKER

In January, Andrew Fischer, a 21-year-old Web-site developer from Omaha, Neb., went on eBay and auctioned off advertising space on his forehead. "As I go around town doing my thing... your domain name will be plastered smack-dab on my noggin," he proposed. In exchange for a winning bid of $37,375, Fischer sported a temporary tattoo of the logo for an over-the-counter sleep remedy called SnoreStop for a full month. "It took me forever to do errands and stuff," Fischer says. "Everywhere I went people couldn't help noticing - they had to talk to 'the forehead guy.' I guess that's what the advertiser was after." Fischer's forehead - and the pictures of his brightly festooned brow that circled the globe in news-service stories about the stunt - amounted to what Christian de Rivel, an executive at SnoreStop, estimates to be nearly a million dollars of publicity. "Our sales increased 50 percent as a result," de Rivel says. A couple of years ago, John Carver of the London marketing agency Cunning trademarked the term ForeheADs, but until this year the practice had not gone much beyond college students looking for beer money. "It goes by a lot of names, like 'skinvertising,' " explains Drew Black, a spokesman for GoldenPalace.com, an Internet gambling and entertainment venture. Black is in charge of, among other things, making purchases on behalf of GoldenPalace.com that help raise the site's profile - for instance, a grilled cheese sandwich bearing the image of the Virgin Mary. "Now people mention us by name in their eBay ads," Black says. "We get many offers for tattoos."

Gastronomic Reversals By WILLIAM GRIMES

Fried mayonnaise? Hot ice cream? Chocolate pudding that can be sliced and cut? This year witnessed the flourishing of an unusual culinary fashion: dishes that, with the addition of certain chemicals, turn hot into cold or moist into dry or create an invisible boundary between the two. When chefs talk about fresh ingredients, gellan does not normally come to mind. Then again, neither does fried mayonnaise, but you cannot have one without the other. Gellan, a gum derived from bacterially fermented carbohydrates, holds emulsions together at very high heat. Wylie Dufresne, the chef and owner of WD-50 in Manhattan, uses it to deep-fry mayonnaise, which he serves with pickled beef tongue in a kind of disassembled deli sandwich. The mayonnaise, cut into cubes and coated with flour, egg and bread crumbs before going into the deep-fryer, is brown and crisp on the outside, oozy inside. At the restaurant Alinea in Chicago, Alex Stupak, the pastry chef, has pulled off almost as neat a trick using tapioca maltodextrin, a starch derivative that absorbs fat. By pulsing it in a food processor with caramel candy, he can transform caramel into a powder, which, when eaten, turns right back into caramel as the ingredients recombine in your mouth. The sweetener sorbitol also has its charms. It is commonly used in toothpaste, but add enough to your concoction and it has a plasticizing effect. Stupak uses it to turn chocolate pudding into a substance that can be rolled out, sliced and cut into thin strips that retain the mouth feel of ordinary pudding but can be tied into knots. This is a tantalizing prospect for pastry chefs, who, Stupak points out, "are always trying to get away from circles and squares." Dufresne, meanwhile, has moved on to methylcellulose, a polysaccharide gel that sets when hot and melts when cold.
He has added it to chilled lemon yogurt in a squeeze bottle. When squirted into a bowl of hot cocoa dashi, the yogurt turns into plump "noodles."

Genetic Theory of Harry Potter, The By STEPHEN MIHM

This summer, the journal Nature published "Harry Potter and the Recessive Allele," a letter that argued that J. K. Rowling's tales of the young wizard Harry Potter offer an opportunity to educate children in modern theories of heredity. As almost everyone above the age of 3 knows, the Harry Potter novels depict a world divided into people who possess magical powers (wizards and witches) and those who do not (Muggles). Not everyone can be a wizard; indeed, after careful review of the evidence, the authors of the Nature letter concluded that wizards evidently inherit their gifts from their parents as predicted by the theories of the 19th-century geneticist Gregor Mendel. Apparently, wizardry (or the lack thereof) is determined by a linked pair of genes, or alleles, that you inherit from your parents, one allele from each parent. The researchers hypothesized that wizardry is a recessive trait, like blue eyes, meaning that an individual who inherits from his parents one wizard allele and one Muggle allele (wM or Mw) will not display wizarding powers. Only individuals with two wizard alleles (ww) will display magical powers. Such individuals - like Harry and his nemesis, Draco Malfoy - are more likely to be born to parents who possess ww genes. But children born of mixed marriages need not necessarily live a life of Mugglehood: those with a pure-blood wizard father (ww) and a part-Muggle mother (Mw) can inherit the precious ww genes. Children can also inherit the trait when neither parent is a wizard but both carry the wizard gene (Mw); by Mendel's arithmetic, such a pairing should yield a ww wizard in roughly one birth in four. Here the researchers cited Harry's friend Hermione Granger, the child of two Muggle dentists, as an example of the recessive allele surfacing against the odds. Case closed? Not a chance: no sooner had the letter appeared than a group of plant scientists at Cambridge fired off a rebuttal, "Harry Potter and the Prisoner of Presumption," in which they claimed that the recessive-allele hypothesis was "deterministic and unsupported by available evidence."

Global Savings Glut, The By MICHAEL STEINBERGER

That the United States, with a current account deficit equivalent to more than 6 percent of its gross domestic product, is living beyond its means is not in dispute. And, at least until recently, there was general agreement among economists that the shortfall was mainly due to American profligacy, in the form of record federal budget deficits and a household savings rate that has now officially hit zero. In March, however, the received wisdom was challenged by a formidable figure with a penchant for airing provocative views: Ben Bernanke, who was then a governor of the Federal Reserve and early next year will most likely replace Alan Greenspan as the central bank's chairman. In a speech to the Virginia Association of Economics, Bernanke suggested that the primary cause of the current account deficit was not America's excessive spending but rather the rest of the world's excessive thrift - what he memorably termed a "global saving glut." Bernanke pointed out that Japan, Germany and other advanced industrialized nations have been squirreling away money to help support aging populations and that because of a paucity of attractive domestic investments, a sizable share of those savings has been put to work in the United States.
More important, a number of developing countries have greatly increased their savings, and a lot of this money has also come to the United States - through, among other things, vast purchases of U.S. Treasury securities. According to Bernanke, all this foreign investment helped inflate and sustain the stock-market bubble of the late 1990's. It has also helped keep long-term interest rates low, which in turn has produced a significant rise in property values and another surge in household wealth. In short, foreigners needed a place to park their savings, the United States became the depository of choice and this enormous inflow of investment put lots of cash in the pockets of Americans - cash they chose to spend rather than save. Although Bernanke is universally admired for his intellectual acuity, his hypothesis has met with some skepticism. Critics note that the global savings rate as a proportion of global output has been gradually declining for more than three decades. More cynical observers suggest that Bernanke's argument is simply a clever attempt by a Bush administration appointee to deflect blame for the rapid deterioration in America's finances. His-and-Her TV, The By SUSAN DOMINUS When two people curl up on a couch, absorbed in their respective novels, that state of affairs seems somehow companionable, even if the two have been transported to opposite ends of the imaginative universe, with one traipsing down the lush green paths of a 19th-century English estate and the other checking out the interstellar sex in one of Frank Herbert's sci-fi novels. Modern technology, of course, has made this kind of apart-while-together bonding all the more possible: picture a pair of lovebirds, their fingers entwined as they walk down the street, their free hands holding up cellphones as they talk to third and fourth parties. Now the good people at Sharp have created yet another opportunity for multitasking togetherness. It's called the controlled-viewing-angle LCD: a screen - for either a computer or a television, or a combination of the two - that shows different images depending on the angle from which you view it. With what's called a parallax barrier laid over the screen, the backlight is shunted off into left and right directions. One direction corresponds to one set of images, the other direction to another, entirely different set. Hollywood-Style Documentary, The By A.O. SCOTT When critics want to praise the realism of a fictional film, they sometimes liken it to a documentary. But as filmmakers and distributors have discovered the commercial potential of nonfiction movies, the comparison more often runs in reverse. The documentaries that look most attractive are those that mimic the tried-and-true conventions of Hollywood, telling unusual or exotic stories in reassuringly familiar ways. Of course, documentary remains an elastic category, encompassing essay films like "The Aristocrats," historical inquiries like "Ballets Russes" and acts of witness like "Darwin's Nightmare" - all released to critical praise in 2005. But none of them received as much attention - or made as much money - as "March of the Penguins" or "Mad Hot Ballroom," two movies that typify what might be called the Hollywood-style documentary, or the genre documentary. The genre documentary supplies the emotional and narrative satisfactions associated with popular commercial cinema, mining its material directly from the real world rather than synthesizing it according to screenwriting formulas. 
Its character arcs and three-act architecture are supposedly found in nature - or at least at events like the National Spelling Bee, the setting of "Spellbound," one of the progenitors of the genre. "Mad Hot Ballroom," which follows children in three New York schools as they prepare for a citywide dance tournament, offers up some of the pleasures, and many of the clichés, of a classic sports movie, right up to the climactic triumph of the underdog. That the movie explores a unique event, the outcome of which could not have been known in advance, makes its sentiments sweeter and more intense. The kind of uplift that feels a little phony in a "based on a true story" feature like "Coach Carter" or "Dreamer" is redeemed in the competition documentary, a subgenre that also includes "Rize" and "Murderball." Hypomanic American, The By EMILY BAZELON For centuries, scholars have tried to explain the American character: is it the product of the frontier experience, or of the heritage of dissenting Protestantism, or of the absence of feudalism? This year, two professors of psychiatry each published books attributing American exceptionalism to a new and hitherto unsuspected source: American DNA. They argue that the United States is full of energetic risk-takers because it's full of immigrants, who as a group may carry a genetic marker that expresses itself as restless curiosity, exuberance and competitive self-promotion - a combination known as hypomania. Peter C. Whybrow of U.C.L.A. and John D. Gartner of Johns Hopkins University Medical School make their cases for an immigrant-specific genotype in their respective books, "American Mania" and "The Hypomanic Edge." Even when times are hard, Whybrow points out, most people don't leave their homelands. The 2 percent or so who do are a self-selecting group. What distinguishes them, he suggests, might be the genetic makeup of their dopamine-receptor system - the pathway in the brain that figures centrally in boldness and novelty seeking. The genetic variation that gets neurons firing along the dopamine circuits seems to have been disproportionately prevalent in the kinship groups that over generations walked the farthest 10,000 to 20,000 years ago, from Asia across the Bering Strait into the Americas. This genetic makeup, Whybrow argues, may also be present to a high degree among the 98 percent of Americans who were either born in another country or into families that came to this country in the last three centuries. If the genetic marker cuts across immigrants of all origins, it's not about where you come from, it's that you came at all. Infrared Pet Dry Room, The By JOEL LOVELL In September, at the FCI Seoul International Dog Show, the Korean engineering company Daun ENG introduced what may be the most radical new dog product since the chew toy. The Infrared Pet Dry Room is, as its name suggests, a chamber into which you place a wet dog in order to dry him or her via infrared radiation. Because infrared rays penetrate the dermis, they warm and dry an animal more quickly than a blow-dryer does, and they do so without resulting in the kinds of skin rashes that blow-dryers often cause. In Vitro Meat By RAIZEL ROBIN In July, scientists at the University of Maryland announced the development of bioengineering techniques that could be used to mass-produce a new food for public consumption: meat that is grown in incubators.
The process works by taking stem cells from a biopsy of a live animal (or a piece of flesh from a slaughtered animal) and putting them in a three-dimensional growth medium - a sort of scaffolding made of proteins. Bathed in a nutritional mix of glucose, amino acids and minerals, the stem cells multiply and differentiate into muscle cells, which eventually form muscle fibers. Those fibers are then harvested for a minced-meat product. Scientists at NASA and at several Dutch universities have been developing the technology since 2001, and in a few years' time there may be a lab-grown meat ready to market as sausages or patties. In 20 years, the scientists predict, they may be able to grow a whole beef or pork loin. A tissue engineer at the Medical University of South Carolina has even proposed a countertop device similar to a bread maker that would produce meat overnight in your kitchen. Juvenile Cynics By LEAH MESSINGER Adults often extol children for their innocence, but according to a recent study by researchers at Yale's department of psychology, kids are in fact the most hardened of cynics. The study, which was written by Candice M. Mills and Frank C. Keil and appeared in the May 2005 issue of the journal Psychological Science, suggests that young children are especially apt to believe that when people distort the truth, they do so for selfish reasons. In one part of the experiment, kindergartners, second graders, fourth graders and sixth graders read or heard short stories about the outcomes of various contests. The children were then informed that some of the characters in these stories had falsely reported the results of the contests. The children had to decide whether the misstatements were due to lying, bias or innocent error. More often than not, the children believed that the characters were lying - provided that a character's spreading of misinformation was clearly aligned with the promotion of the character's self-interest. In fact, the study shows young children to be more cynical than adults because they are more likely to link self-interest with intentional deception as opposed to a mistake or subconscious bias. "Young children are less likely than adults to give people who make incorrect statements in their own favor the benefit of the doubt," Mills writes, "assuming instead that these kinds of inaccuracies arise from a malicious intent to deceive." Mills attributes the study's results partly to the fact that many elementary-school children have yet to distinguish between conscious and unconscious thought. Clearly, outright deception is a simpler concept than subtle bias. In any case, the children's acute skepticism may be an increasingly important tool for the newest members of an information-based economy. Laptop That Will Save the World, The By MICHAEL CROWLEY Here in America, high-speed wireless Internet has become a commonplace home amenity, and teenagers with Sidekicks can browse the Web on a beach. For many people in developing nations, however, the mere thought of owning a computer remains pure fantasy. But maybe not for long. This year, Nicholas Negroponte, chairman of the Massachusetts Institute of Technology's Media Lab, unveiled a prototype of a $100 laptop. With millions of dollars in financing from the likes of Rupert Murdoch's News Corporation and Google, Negroponte and his colleagues have designed an extremely durable, compact, no-frills laptop, which they'd like to see in the hands of millions of children worldwide by 2008. 
So how can any worthwhile computer cost less than a pair of good headphones? Through a series of cost-cutting tricks. The laptops will run on free "open source" software, use cheaper "flash" memory instead of a hard disk and most likely employ new LCD technology to drop the monitor's cost to just $35. Each laptop will also come with a hand crank, making it usable even in electricity-free rural areas. Of course, the real computing mother lode is the Internet, to which few developing-world users have access. But the M.I.T. laptops will offer wireless peer-to-peer connections that create a local network. As long as there's an Internet signal somewhere in the network area - and making sure that's the case, even in rural areas, poses a mighty challenge - everyone can get online and use a built-in Web browser. Theoretically, even children in a small African village could have "access to more or less all libraries of the world," Negroponte says. (That's probably not very useful to children who can't read or understand foreign languages.) His team is already in talks with several foreign governments, including those of Egypt, Brazil and Thailand, about bulk orders. Gov. Mitt Romney of Massachusetts has also proposed a bill to buy 500,000 of the computers for his state's children. Localized Food Aid By ALEXANDRA STARR In emergency medicine, doctors often refer to the "golden hour," or the 60-minute window after a medical calamity when treatment is most likely to save the patient. Famine emergencies have a similar dynamic: if food arrives at the earliest signs of a shortage, more lives will be saved. Buying food locally often provides the greatest chance to prevent starvation. That's partly because of the geography of famine relief. About two-thirds of countries in need are in Africa, while many of the donor countries are congregated in North America and Western Europe. Shipping emergency food aid from the U.S. often takes five months or longer. Every country that contributes regularly to famine relief has the flexibility to buy locally, with the exception of the biggest donor: the United States. Federal laws more than half a century old dictate that all food aid has to be purchased in and shipped from U.S. soil. Earlier this year, the Bush administration tried to relax the rules and allow up to one-fourth of the food-aid budget to be used to buy commodities from in or around the famine-afflicted country. The proposal, however, was voted down in Congress. The defeat could be chalked up to the fact that it ran afoul of a fundamental tenet of U.S. food aid: help the needy but also make sure you boost the bottom line of agribusiness and shipping companies. While it's understandable that corporations are loath to forfeit government money, they have a somewhat surprising ally in maintaining the status quo: nongovernmental organizations. These groups, which distribute food in poor countries, have their own financial stake in fighting local buying. That's because only a fraction of the food donated by U.S. farmers is used in famine emergencies. Development and relief groups sell the surplus and earmark the proceeds for anti-hunger and poverty programs. They fear that ending the current practice of shipping from the U.S. will curtail big-business support for food aid, leading to smaller budgets. Those, in turn, could jeopardize their anti-hunger programs. Making Global Warming Work for You By D.T. 
MAX If global warming occurs at the pace that most scientists anticipate, there is going to be money in it for those who have the right product in the right place at the right time. George Bernard Shaw once noted that profits are made in the dark; now they will be made in the heat. Some farmers in Britain are already finding that longer summers mean a lower animal feed bill. ("Every day that the sheep can eat grass instead of us having to carry out cake is a bonus," a Welsh farmer named John Davies told a local newspaper. "We should look at the effects of global warming and learn to work with it and use it effectively.") A Central American company is pitching Americans a tree called the leucaena, which it claims has the potential to sequester carbon dioxide - a source of global warming. And an outerwear manufacturer in New York now makes the linings of its winter jackets removable after finding that its lightweight "microfiber windbreaker" was selling year round. Then there are the experimenters. Klaus Lackner, a geophysics professor at Columbia University, is working on a windmill-size structure that takes carbon dioxide from the air and traps it. In general, though, the Europeans are far ahead of the Americans in designing for a future they see as inevitable. The Dutch have a particular interest in getting on top of the global-warming market. In Amsterdam, innovative architects are designing boutique floating houses, offices and even stadiums, anticipating the day when the Netherlands' low landmass is inundated. Medical Maggots By CLAY RISEN Most people associate maggots with death and disease. But if Ronald Sherman has his way, you may soon think of them as lifesavers. Sherman, a doctor who runs the BioTherapeutics, Education and Research Foundation in Irvine, Calif., is a leading proponent of maggot debridement therapy (M.D.T.) - the use of maggots to consume dead tissue, kill bacteria and stimulate new tissue growth. "They exert these actions 24 hours a day," Sherman says, "without the need for a highly trained surgeon and without the high cost of many other comparable treatments." Sherman has been working with maggots for two decades. The Food and Drug Administration recently cleared maggots for marketing and distribution as "medical devices" - the first living organisms to receive full F.D.A. approval. Along with leeches, maggots are part of the emerging field of biotherapy - the therapeutic use of living creatures. M.D.T., used to clean out gangrenous tissue often found in ulcers, burns and postoperative infections, is a relatively simple process (and cheap, at around $100 a treatment). After doctors clean the wound and isolate it with gauze, the maggots are placed on the tissue, about 5 to 10 per square centimeter. After being covered with more gauze, they are left alone for 48 to 72 hours. Most patients feel nothing, though some report a tingling sensation or pain. Microblindness By NOAH SHACHTMAN People are distracted by naked supermodels; it doesn't take three Ph.D.'s to figure that out. But what psychology researchers at Yale and Vanderbilt Universities have discovered is that erotic - and violent - images are so distracting that they make people temporarily blind. Steven Most, Marvin Chun and David Zald ran a high-speed computer slide show for students, with hundreds of photographs flicking by for a tenth of a second apiece. Students were asked to pick out the picture that was turned 90 degrees from the rest. Most of the time, it was no problem.
But when a snapshot of a bared breast or a severed limb came just before the perpendicular image, the test subjects had a 30 percent chance of missing the perpendicular image completely. Researchers call the effect "attentional rubbernecking." "It's like when you turn to look at an accident as you're driving on the highway," Most says. "There's something involuntary about it." Only students who scored low for anxiety in psychological profiles managed to find the target picture consistently - and that was only true when they were distracted by violent images (not erotic ones) and received detailed instructions about what to look for. Zald says that there's a message in the study for marketers who like to drape the scantily clad around their products. "A sexy billboard might capture attention," he says. "But people might end up so focused on the sex, they miss what's being advertised." The consequences could be even worse if that billboard was above the highway. "The attentional blindness may only last for a half-second or less," Zald says. "But when you're going 70 miles an hour, that can be plenty dangerous." Monkey Pay-Per-View By ALAN BURDICK Entertainment moguls, take note: scientists are now one step closer to understanding what a monkey will - and won't - pay to see. In January, the neurobiologists Michael Platt and Robert Deaner at Duke University published the results of an experiment that explored the viewing habits of male rhesus monkeys seated in a laboratory. If a test monkey chose to look in one direction, it received a squirt of cherry juice. If it looked in another, it received a slightly larger or smaller squirt of juice, plus one of several images to look at: the face of a higher-status or lower-status monkey or the attractive back end of a female monkey. By varying the amount of juice that came with each picture, the researchers were able to calculate the value of each image, in "relative juice payoff," to the viewer. The results of the study, called "Monkeys Pay Per View," would not surprise a theater operator in Times Square, past or present. With remarkable consistency, the monkeys were willing to forgo a little juice - to pay extra, in effect - to look at the more important monkeys or to check out some monkey booty. (The male monkeys did not seem to prefer female faces over male ones, however.) For scientists, the results offered the first experimental confirmation that monkeys discriminate between pictures of their brethren based on social status. To what extent they are picking up on facial cues, or bringing to bear their own prior familiarity with the social group, remains to be spelled out. Regardless, the study shows that the importance of social information is wired into their brains: the neural circuits that assign value (in the currency of juice) have access to the database of social interactions. What holds for nonhuman primates may also hold for people. With a better understanding of the neural basis of social cognition, neurobiologists hope to get a handle on diseases like autism, which effectively disrupts a person's ability to judge the expressions, intentions and importance of other individuals. National Smiles By D.T. MAX Dacher Keltner, a professor of psychology at the University of California at Berkeley, contends that Americans and the English smile differently. On this side of the Atlantic, we simply draw the corners of our lips up, showing our upper teeth. Think Julia Roberts or the gracefully aged Robert Redford. 
"I think Tom Cruise has a terrific American smile," Keltner, who specializes in the cultural meaning of emotions, says. In England, they draw the lips back as well as up, showing their lower teeth. The English smile can be mistaken for a suppressed grimace or a request to wipe that stupid smile off your face. Think headwaiter at a restaurant when your MasterCard seems tapped out, or Prince Charles anytime. Keltner hit upon this difference in national smiles by accident. He was studying teasing in American fraternity houses and found that low-status frat members, when they were teased, smiled using the risorius muscle - a facial muscle that pulls the lips sideways - as well as the zygomatic major, which lifts up the lips. It resulted in a sickly smile that said, in effect, I understand you must paddle me, brother, but not too hard, please. Several years later, Keltner went to England on sabbatical and noticed that the English had a peculiar deferential smile that reminded him of those he had seen among the junior American frat members. Like the frat brothers', the English smile telegraphed an acknowledgment of hierarchy rather than just expressing pleasure. "What the deferential smile says is, 'I respect what you're thinking of me and am shaping my behavior accordingly,"' Keltner says. His theory was put to the test earlier this year when a British journalist showed Keltner 15 pictures of closely cropped smiles and Keltner guessed right - Briton or American - 14 times. "I missed Venus Williams like a fool," he remembers. Open-Source Reporting By ALEXANDRA STARR No liberal blogger could complain about a dearth of material in 2005. From the Bush administration's ham-fisted response to Hurricane Katrina to the indictment of the former Republican majority leader Tom DeLay, opportunities to lambaste the Republican Party were abundant. Of course, staying abreast of all these developing stories was not a facile proposition, at least in the experience of Joshua M. Marshall, editor of the left-leaning blog Talkingpointsmemo.com. And so this October he put out a plea for help, asking his readers to share their knowledge of the spreading Washington scandals. He termed the effort "open-source investigative reporting." The phrase echoes the open-source software movement, whose programmers pool their expertise to write source code. Other Internet-based endeavors, like the online encyclopedia Wikipedia, also draw on a virtual community to produce Web site content. Talking Points Memo provided an ideal platform for a similar experiment: the blog attracts some 100,000 readers a day, many of them hard-core news obsessives. In Marshall's words, they represented a "huge nationwide information-gathering apparatus." Marshall challenged his virtual news corps to dig into a succession of Republican embarrassments. Drawing on news reports, they laid out a detailed chronology of the events that culminated in the arraignment of the former vice presidential chief of staff I. Lewis Libby for obstruction of justice. After Marshall obtained a list of gifts that the disgraced Republican lobbyist Jack Abramoff showered on Capitol Hill employees, he asked readers familiar with the Congressional ethics code to determine if the goodies were violations. Sometimes his directives were less specific. Take, for example, his post on the former Federal Bureau of Investigation director Louis Freeh, whom he derided as an incompetent hack. "Freeh is a walking glass house," Marshall wrote. "Please everyone collect your rocks." 
Parking Meters That Don't Give You a Break By BRENDAN I. KOERNER Pulling up to an overfed parking meter, and thus saving a quarter or two, is one of life's small pleasures. So the elders of Pacific Grove, Calif., seem like major killjoys for installing 100 high-tech meters that deprive motorists of this karmic reward. A wire grid beneath each parking space senses the magnetic disruption caused by a vehicle's departure, causing the meter's time to reset to zero. "We're making parking more democratic," insists Kirby Andrews, a managing director of the Connecticut-based InnovaPark, which designed Pacific Grove's stingy meters. He points out that the digital meters can be programmed to help free up spaces more frequently, either by refusing additional coins until a new car arrives or by increasing rates over time - $1 for the first hour, $2 for the second, $4 for the third. Parking democracy may or may not be on the march, but "dumb" meters are definitely headed for extinction, replaced by more efficient, more draconian alternatives. In Montreal, for example, drivers punch their parking space number into a solar-powered curbside machine that accepts both credit cards and coins. When time runs out, the machine registers that fact, and P.D.A.-toting parking agents can see the violator's space flash red on an on-screen map. Cities love these new technologies because they make it easier to ticket scofflaws, turn over spaces and generally stick it to motorists a wee bit more. The financial results are hard to argue with: InnovaPark, for example, estimates that by simply resetting to zero when a car pulls out of a space, its meters rake in 20 to 40 percent more revenue than dumb meters. Playoff Paradigm, The By JOHN SWANSBURG Team sports like football and basketball have long benefited from the frenzy of audience interest that attends the postseason. Now individual sports like golf, tennis and stock-car racing have realized that the playoff format can be viable - and lucrative - for their leagues too. It took Nascar, the most baldly commercialized of sports, to see the merits of adopting a playoff format. Last year, Nascar began packaging the season's last 10 races as a playoff series. (After 26 "regular season" races, the top 10 drivers qualify for the so-called Chase for the Nextel Cup, in which they vie for the championship.) In the first year of the new system, TV ratings for the final 10 races of the season jumped 12 percent. They rose again this year, despite the failure of the marquee drivers Dale Earnhardt Jr. and Jeff Gordon to make the playoff cut. Such was the backdrop for the P.G.A.'s announcement this fall that it will institute its own late-season playoff series beginning in 2007, culminating in a revamped Tour Championship that will, in fact, crown a Tour champ. (As it stands, the Tour Championship is an event of such little import that Phil Mickelson skipped it this year, supposedly so he could go trick-or-treating with his kids.) The P.G.A.'s new system, which will impose the arc and rigor of a team sport's season on golf's bloated 10-month tour, borrows liberally from Nascar, as well as from tennis's new United States Open Series. "We're the only major sport that doesn't have a playoff system," said the P.G.A. Tour commissioner, Tim Finchem, in explaining the plan. Not long after, the L.P.G.A. announced a playoff system of its own. 
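[A toy sketch, in Python, of the two policies quoted in the parking-meter entry above - reset-on-departure and escalating rates. The logic is only what the article describes; InnovaPark's actual firmware is not public, so treat every detail here as invented.]

ESCALATING_RATES = [1.00, 2.00, 4.00]  # dollars for hours one, two and three, as quoted

def price_for_stay(hours):
    # Total cost of a stay of a whole number of hours under escalating rates.
    return sum(ESCALATING_RATES[:hours])

class SmartMeter:
    def __init__(self):
        self.remaining_minutes = 0
    def insert_coins(self, minutes):
        self.remaining_minutes += minutes
    def car_departed(self):
        # The buried wire grid senses the car leaving; no karmic reward remains.
        self.remaining_minutes = 0

meter = SmartMeter()
meter.insert_coins(45)
meter.car_departed()
print(meter.remaining_minutes)  # 0
print(price_for_stay(3))        # 7.0, i.e. $1 + $2 + $4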
Pleistocene Rewilding By ALAN BURDICK The golden era of North America, in the eyes of today's North Americans, was about 200 years ago, when bison roamed the Great Plains by the millions. But even that is a watered-down memory. Thirteen thousand years ago, truly mega fauna, including lions, cheetahs, camels and five kinds of elephant, also walked the land - and still would today, had humans not come along to speed their extinction. Why not bring 'em back? In August, in a paper titled "Re-Wilding North America," a team of naturalists led by Josh Donlan, a Cornell University graduate student, proposed to do exactly that. Many of these large vertebrates now face extinction in Africa and Asia, the authors note; meanwhile, the Great Plains area is slowly emptying of its people. No time like the present to restore the Pleistocene-era wilderness. Ecotourists and dollars would flock to the region. More important, the plan would aid the global conservation effort and simultaneously serve as an antidote to "the 'pests and weeds' (rats and dandelions) that will otherwise come to dominate the landscape," the authors wrote. Of course, the reintroduced beasts wouldn't be the exact same species as the originals, merely their closest living kin. And key logistical details need to be ironed out. For instance, would the predators be fenced in or freely roaming? Mostly the former, the naturalists propose - although that sounds more like Busch Gardens than the Serengeti bush. On the other hand, prides of free-roaming lions would present a certain ... liability. (In Tanzania, lion attacks on people have risen 300 percent in the past 15 years.) Ours is a nation that can barely stand to shoot the deer overrunning our lawns. Pity the first wildlife manager seen shooting a renegade lion on network television. Porn Suffix, The By JASCHA HOFFMAN Establishing a new Internet suffix like ".com" or ".org" takes deep pockets and patience. This has not deterred Stuart Lawley, a Florida entrepreneur, from trying to establish a pornography-only ".xxx" domain. In such a realm, Lawley could restrict porn marketing to adults only, protect users' privacy, limit spam and collect fees from Web masters. The .xxx proposal was finally slated for approval in August by the Internet Corporation for Assigned Names and Numbers (Icann), but because of a flurry of protest, it has been shelved for now. Lawley's scheme has aroused support and dissent across the political spectrum. The Family Research Council warns that it will simply breed more smut. But Senator Joe Lieberman supports a virtual red-light district because he says it would make the job of filtering out porn easier. Meanwhile, some pornographers, apparently drawn by the promise of catchier and more trustworthy U.R.L.'s, have gotten behind Lawley. Other skin-peddlers, echoing the A.C.L.U., see the establishment of a voluntary porn zone as the first step toward the deportation of their industry to a distant corner of the Web, where their sites could easily be blocked by skittish Internet service providers, credit card companies and even governments. The Free Speech Coalition, a lobbying group for the pornography industry, supports an entirely different approach to Web architecture. It recommends that children be confined to a wholesome ".kids" domain. This "walled garden" theory of Internet safety is not original. It is borrowed from Lawley himself, who has since dropped it because he deems it impractical.
Preventing Suicide Bombing By PAUL TOUGH How do you stop a suicide bomber on his way to a target? Until recently, that wasn't an urgent question for scholars in the West. But it is now, and scientists, military strategists and security experts are scrambling, in different directions, to find an answer. Last year, Darpa, the Pentagon's research arm, convened a panel of experts through the National Research Council to study methods to detect suicide bombers from a distance, before they strike. The panel's report presented some provocative ideas, including "detection by detonation." Under this plan, soldiers at a military checkpoint would fire radiation at each approaching car. If there were no explosives on board, the car would pass through the beam safely. But if the car carried suicide attackers, the radiation would cause their bombs to explode, killing everyone on board (and anyone unlucky enough to be nearby), but leaving the checkpoint unharmed. Another intriguing notion from the report: "distributed biological sensors" - bees, moths, butterflies or rats specially trained to pick up bomb vapors, buzzing or fluttering through a crowd, sniffing for fumes. (The rats, equipped with global-positioning-system chips, would work the sewers.) Even if you manage to detect a suicide bomber, what do you do next? This question was taken up by Edward H. Kaplan, a professor of public health at Yale, in a paper he published in July, written with Moshe Kress of the Naval Postgraduate School in Monterey, Calif. Kaplan and Kress investigated the physics of a belt-bomb blast and reached some unexpected conclusions. It turns out that very few people are killed by the concussive force of a suicide explosion; the deadly weapon is in fact the shrapnel - the ball bearings, nails or pieces of metal that the attacker attaches to the outside of his bomb. The explosions, though, are usually not powerful enough to send these projectiles all the way through a human body, which means that if your view of a suicide bomber is entirely obscured by other people at the moment of detonation, you are much more likely to escape serious injury. Because of the geometry of crowds, Kaplan found, a belt bomb set off in a heavily populated room will actually yield fewer casualties than one set off in a more sparsely populated area; the unlucky few nearest to the bomb will absorb all of its force. The authors used these calculations to question some assumptions about what authorities should do if they detect a bomber. The International Association of Chiefs of Police issued guidelines this year suggesting that police officers who find a bomber in a crowd should fire shots into the air to cause people near the bomber to scatter or hit the deck. But Kaplan's calculations demonstrate that in many cases, this would make things worse - as a packed crowd ran away from a bomber or dropped to the ground, the circle of potential victims around him would get wider and thus more populous, and more lives could be lost. As Kaplan points out, these physics create an unusual moral dilemma. If you suddenly find yourself next to a suicide bomber about to set off his charge, there is what he calls "a huge conflict between what's best for you as an individual and what's best for the group." 
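[A back-of-the-envelope simulation of Kaplan's geometry argument, not his actual model: scatter a crowd at random around a central bomber and count how many people have an unobstructed straight line to him, treating anyone screened by a nearer body as shielded from shrapnel. Room size, body width and trial counts are all invented parameters.]

import math, random

def exposed_count(n_people, room_radius=10.0, body_radius=0.25, trials=100):
    # Average number of people with a clear line of sight to a bomber at the origin.
    total = 0
    for _ in range(trials):
        people = []
        while len(people) < n_people:
            x = random.uniform(-room_radius, room_radius)
            y = random.uniform(-room_radius, room_radius)
            if 0.5 < math.hypot(x, y) <= room_radius:
                people.append((x, y))
        for (x, y) in people:
            r = math.hypot(x, y)
            shielded = any(
                math.hypot(ox, oy) < r                       # the other person is closer
                and ox * x + oy * y > 0                      # and on the same side
                and abs(ox * y - oy * x) / r < body_radius   # and near the sight line
                for (ox, oy) in people if (ox, oy) != (x, y))
            if not shielded:
                total += 1
    return total / trials

random.seed(0)
for n in (10, 40, 160):
    print(n, round(exposed_count(n), 1))
# The exposed count grows far more slowly than the crowd does: packing in more
# people mostly adds shielded bodies, which is the gist of Kaplan's finding.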
Soon after his paper was published, Kaplan came across one blogger who had read about his research and concluded that the heroic thing to do in that situation would be to approach the bomber and hug him or her, sacrificing yourself but saving the lives of many people behind you. Not surprisingly, homeland security experts are still looking for other neutralization techniques, ones that don't involve hugging. And here there is another divide in the field. The association of police chiefs recommended this year that police on the scene simply shoot suspected bombers in the head - "specifically, at the tip of the nose when facing the bomber ... or about one inch below the base of the skull from behind." The London officers who two weeks after the July bombings killed an innocent Brazilian man whom they suspected of being a suicide bomber were operating on similar instructions, a shoot-to-kill policy known as Operation Kratos. Readable Medicine Bottle, The By SUSAN DOMINUS With their squeezed, blocky typefaces, odd assortments of inexplicable numbers and mysterious codings, the tops that come off too easily or not at all, prescription medicine bottles remind you, above all, not to expect too much from medicine. One look at the forlorn bottle and its dated, amber cast, and we can practically see the hospital walls in that milky green, the plastic wrap sweating over the canned fruit salad. Medicine, it seems, is supposed to resist the aesthetic touch. We have come to accept that unquestioningly, just as we've come to accept so much else about the way we move through the world of health care. It occurred to Deborah Adler to expect more. Adler's grandmother risked serious harm when she accidentally swallowed her husband's medicine instead of her own, and Adler couldn't help wondering why prescription bottles couldn't more clearly indicate whose medicine was whose. That simple question spawned a host of others. Why should the patient have to rotate the bottle to read all the information on it? Why should the name of the drug be hidden at the bottom of the label? Why didn't the label say anywhere what the medicine was for? Adler began redesigning the bottle for her M.F.A. thesis at the School of Visual Arts and continued to develop the project after she graduated and began working with the graphic designer Milton Glaser. She devised a label that displayed the most relevant information - patient name, drug name and instructions - up top, where it couldn't be missed. She and Glaser also reconfigured outdated warning labels: instead of showing an icon that looked like either a pastrami hero or a flying saucer for "take with food," they presented a simple plate with knife and fork. Each family member, Adler proposed, would have a designated color associated with his or her bottles. An information card would slip into a pocket on the back of the bottle, clearly indicating the medicine's purpose. Republican Elitism By NOAM SCHEIBER For more than a generation, the most popular maneuver in the conservative playbook has been to denounce academic and cultural elites. In the 1960's, William F. Buckley Jr. quipped that he'd rather "be governed by the first 2,000 names in the Boston telephone book than by the 2,000 members of the Harvard faculty." In 2003, David Frum, a conservative columnist and former Bush speechwriter, denounced the "American elite" for its "combination of guilt and self-doubt."
"With his 'axis of evil' speech," he continued, "President Bush served notice to the world: he felt no guilt and no self-doubt." But this year, many conservatives put the elite-baiting aside and began trafficking in their own elitist pronouncements. Even in the relatively rarefied world of appellate judges and academic economists, John Roberts (Harvard College, Harvard Law School), Samuel A. Alito Jr. (Princeton, Yale Law School) and Ben Bernanke (Harvard and M.I.T., Princeton professor) stand out for their impressive credentials. When George Bush nominated them to important offices, Frum and his fellow conservative commentators saluted their impeccable qualifications even as they denounced the nomination of the relatively uncredentialed Harriet Miers (Southern Methodist University, S.M.U. School of Law). Of course, for all their insistence that businessmen and ordinary citizens understand the realities of American life better than highly credentialed professionals, conservatives have spent decades building a credentialed professional elite of their own. Ambitious conservatives have attended top graduate schools, taken positions at prestigious research groups and law firms, written important books and papers. They've waited patiently for the day when they could take their places at the highest levels of government - when, say, a second-term Republican president would enjoy majorities in both houses of Congress. Now that the day has finally come, conservatives don't want the opportunity fumbled - and certainly not in favor of a few presidential cronies. For Bush "to take a hazard on anything other than a known quantity of the highest intellectual and personal excellence," Frum wrote in the aftermath of the Miers nomination, would be "simply reckless." Responding to Bush's request that conservatives trust his assessment of Harriet Miers, George F. Will thundered: "He has neither the inclination nor the ability to make sophisticated judgments about competing approaches to construing the Constitution. Few presidents acquire such abilities in the course of their pre-presidential careers, and this president, particularly, is not disposed to such reflections." So much for the idea that what Bush knows in his heart is more important than what intellectuals know in their heads. Conservatives still may not want the Harvard faculty running the country. But it turns out they'd be perfectly happy with alumni of the Harvard Republican Club. Robot Jockeys By ROBERT MACKEY When the new camel-racing season got under way recently in Qatar and the United Arab Emirates, spectators with sharp vision noticed that something was up with the freshman class of jockeys. For the first time in living memory, they were not young children imported from impoverished areas of South Asia and Africa. The jockeys in Qatar were imported from Switzerland; those in the Emirates were local products. And one more thing: they were also robots. After years of pressure from human rights groups and Western governments, gulf sheiks agreed to enforce bans on child jockeys this year. Rather than opt for heavier adult riders, though, camel owners obsessed with speed asked robotics firms to start churning out mechanical boys. So this year's races look more like scenes from "Futurama" than "Lawrence of Arabia," with thoroughbred camels guided by child-size robot jockeys racing along desert tracks at speeds of 25 miles an hour. 
The jockeys themselves are controlled by the camel trainers, who follow close behind in S.U.V.'s, hunched over remote controls that send out wireless signals. At the flick of a joystick or the press of a button, the trainers can move the jockeys' arms to pull the camels' reins or administer whippings. As required, they can also shout abuse at their camels via walkie-talkie. Runaway Alarm Clock, The By STEPHEN MIHM Every morning, millions of Americans begin the day with the annoying beep of an alarm clock - a noise they are likely to silence with a fumbling tap of the snooze button. While the few minutes' rest this affords is reprieve enough for some, many hit the snooze button again and again, prolonging their wake-up time and leaving themselves late for school or work. For these undisciplined dozers, a machine has arrived that promises to get them out of bed - literally. The device, known as Clocky, looks like a conventional digital alarm clock, only wrapped in shag carpeting and with wheels sticking out the sides. Designed by Gauri Nanda when she was a graduate student at the M.I.T. Media Lab, Clocky, too, has a snooze button. But a few minutes after the button is pressed, the clock drives itself off the night stand. Once on the floor, after righting itself, it will move in random directions, eventually nesting in another part of the room. Then the alarm sounds again, forcing the sleeper to rise. Nanda, who says she hopes that Clocky will arrive on store shelves next year, explains that her inspiration came from some kittens who once shared her bedroom, nibbling on her toes in the morning but running away when confronted. Clocky's movements are equally unpredictable: it might end up hiding under the bed, lurking in a corner or simply lounging in the middle of the room. But it will rarely, if ever, settle within arm's reach. "It reminds you of a troublesome pet," Nanda says. Scientific Free-Throw Distraction By JASON ZENGERLE Every basketball fan knows that the seats behind a backboard don't afford a great view of the court, but they do provide an opportunity to affect a game's outcome. By waving ThunderStix - those long, skinny balloons that make noise when smacked together - or other implements of distraction, fans sitting behind the basket can unnerve an opposing team's foul shooters and make them miss. But not, a new theory holds, unless the fans gesticulate in a particular way. According to Daniel Engber, a basketball fan with a master's degree in neuroscience, the standard "free-throw defenses" are too haphazard to be effective. Fans tend to wave their ThunderStix willy-nilly, creating a unified field of randomly moving objects. Because of the way the human brain perceives motion, free-throw shooters can easily ignore this sort of visual commotion. "Fans might think they're doing something by crazily waving their ThunderStix," Engber says, "but to the players it's all just a sea of visual white noise." Which is why, Engber surmises, N.B.A. teams' free-throw percentages at home and on the road are nearly identical. The key to a successful free-throw defense, Engber argues, is to make a player perceive a "field of background motion" that tricks his brain into thinking that he himself is moving, thereby throwing off his shooting. In other words, fans should wave their ThunderStix in tandem. Last season, Engber proposed this tactic to the Dallas Mavericks' owner, Mark Cuban, who took him up on the idea.
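[A toy calculation, not Engber's own model: if each fan waves in a random direction, the summed motion signal nearly cancels, growing only like the square root of the crowd, while waving in tandem grows linearly with it. A sketch in Python, with every number invented.]

import math, random

def net_motion(n_fans, coordinated):
    # Magnitude of the vector sum of n_fans unit motion vectors.
    if coordinated:
        angles = [0.0] * n_fans  # everyone sweeps the same way
    else:
        angles = [random.uniform(0, 2 * math.pi) for _ in range(n_fans)]
    vx = sum(math.cos(a) for a in angles)
    vy = sum(math.sin(a) for a in angles)
    return math.hypot(vx, vy)

random.seed(1)
print(round(net_motion(10000, coordinated=False)))  # on the order of 100: visual white noise
print(round(net_motion(10000, coordinated=True)))   # 10000: a coherent field of background motion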
For three games, Cuban had members of the Mavs' Hoop Troop instruct fans to wave their ThunderStix from side to side in unison. And as Engber subsequently reported in the online magazine Slate, the initial results were encouraging. In the first game, the Mavericks' opponent, the Boston Celtics, shot 60 percent from the line, about 20 percent below their season average. In the second game, the Milwaukee Bucks shot a meager 63 percent. But in the third game, the Los Angeles Lakers shot 78 percent - about the league average. Which apparently was enough to persuade Cuban to abandon the strategy. Seeing With Your Ears By ALISON MOTLUK Seeing is something that most of us expect to do with our eyes. But what if you are born blind or lose your sight later in life? Peter Meijer suggests you consider seeing with your ears instead. Meijer, a research scientist in the Netherlands, has developed a technology called the vOICe, which allows you to represent visual information - to "see" - with sounds. The device consists of a tiny camera, a laptop and headphones. The camera is mounted on your head and the laptop takes the video input and converts it into auditory information, or soundscapes. The scene in front of you is scanned in stereo: you hear objects on your left through your left ear and objects on your right through your right ear. Brightness is translated as volume: bright things are louder. Pitch tells you what's up and what's down. The image refreshes once a second. (A schematic sketch of this mapping appears below.) With practice, Meijer says, you can learn to sense instinctively how the features of a soundscape correspond to objects in the physical world. Pat Fletcher, for instance, a proficient user of the vOICe who could see until age 21, describes the grayscale images in her head as "ghostly" but real. At a meeting of the Cognitive Neuroscience Society in New York in April, researchers from Harvard Medical School announced that when they viewed the activity in the brains of two vOICe users (one blind from birth, the other blinded later in life), it was in many respects like that of a sighted person while seeing. Self-Fulfilling Trade Rumor, The By DEAN ROBINSON Desmond Mason for Jamaal Magloire - it was hardly the stuff of N.B.A. legend. A swingman (Mason) averaging 12.8 points a game over his career for a center (Magloire) averaging 9.5. But what was notable about the trade, which took place in late October just before the current N.B.A. season began, when the Milwaukee Bucks sent Mason to the New Orleans Hornets for Magloire, is that it was inspired by a Web site, hoopshype.com. At least that's where Larry Harris, the Bucks' general manager, told The Racine Journal Times he got the idea for the deal. Harris saw on the site's Rumors page that the Hornets were willing to trade Magloire (despite the Hornets' coach having publicly said otherwise), so he contacted the Hornets and promptly traded for Magloire. It's close to a hoops version of the ontological argument for the existence of God: Mason for Magloire made so much sense, apparently, that once the very possibility of Magloire being traded was posited, it had to be reality. The rumor that the Hornets were open to trading Magloire did not originate online; it appeared in a New York Post column by Peter Vecsey. But the fact that a Web site, not firsthand exposure to the column itself, brought two dealmakers together hints at how a couple of Internet-driven phenomena could be changing professional sports.
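[The schematic sketch promised in "Seeing With Your Ears" above: a bare-bones Python rendering of the mapping as the entry describes it - columns scanned left to right over one second, stereo pan for horizontal position, pitch for height, volume for brightness. The frequency range and frame size are invented; Meijer's actual software is far more sophisticated.]

def soundscape(image, f_low=200.0, f_high=3200.0, scan_seconds=1.0):
    # image is a list of rows of brightness values in 0..1.
    # Yield (start_time, left_gain, right_gain, frequency, volume) per pixel.
    rows, cols = len(image), len(image[0])
    for col in range(cols):
        t = scan_seconds * col / cols   # scan left to right over one frame
        pan = col / (cols - 1)          # 0 = hard left ear, 1 = hard right
        for row in range(rows):
            frac = 1.0 - row / (rows - 1)            # top of the image is high up...
            freq = f_low * (f_high / f_low) ** frac  # ...so it gets a high pitch
            yield (t, 1.0 - pan, pan, freq, image[row][col])

# A single bright dot near the upper-left corner: it sounds early in the sweep,
# mostly in the left ear, high-pitched and loud.
img = [[0.0] * 8 for _ in range(8)]
img[1][1] = 1.0
print(max(soundscape(img), key=lambda pixel: pixel[-1]))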
First, the spread of fantasy leagues means that when it comes to the major sports, everyone's a general manager; fans parse player stats online, collect and share all sorts of data and analysis and conduct their own trades. In some fantasy league somewhere, some team had probably already traded Mason for Magloire. Serialized Pop Song, The By WM. FERGUSON Every summer has its jeep beats - those inescapable radio hits that swell and subside with the passing S.U.V.'s. Last summer, though, one track was different: it had no chorus, no hook, no real beat even. In fact, it wasn't one song but five. It was R. Kelly's "Trapped in the Closet," a multipart urban operetta released to radio in "chapters" in anticipation of his 10th studio album, "TP.3: Reloaded." Each chapter is about three minutes long and has the same music, a nondescript slow groove, over which Kelly sings a tale of epic infidelity and improbable plot twists. "Trapped in the Closet" was a minor cultural moment: Chapters 1 through 5 each occupied the top spot in the urban charts; the album made its debut at No. 1. Last month, Kelly released a DVD collecting videos of the original five chapters, plus seven new ones, with Kelly himself as co-director and star. If you consider it only as a concept, "Trapped in the Closet" seems inevitable. It touches on the big entertainment megatrends: the branded franchise, the automatic spinoff to DVD, the self-updating content of podcasting, the campy soap opera of "Desperate Housewives." As for the song itself, that's a different story. "Trapped in the Closet" may indeed be without precedent. It is also completely bananas. Start with the plot. The narrator wakes up in a strange woman's bed. Before he can get home to his own wife, he's hiding in the closet from his one-night stand's husband, a pastor (of course) named Rufus. In short order, the narrator is waving a gun around, only to be rendered speechless by the appearance of Rufus's lover, Chuck. And that only gets us up to Chapter 2. Sitcom Loyalty Oath, The By BRYAN CURTIS In April, an unusual document appeared on the Web site www.getarrested.com. It was a "loyalty oath," which fans of the TV show "Arrested Development," Fox's critically beloved but anemically rated sitcom, could sign to make a formal declaration of their devotion. The oath was the brainchild of Fox executives, who were increasingly leery about the future of the show. Despite having won an Emmy for Best Comedy Series, "Arrested Development," then ending its second season, was in dire straits. The show had placed 121st among the programs measured by the Nielsen ratings, and Fox had shortened the show's second season by four episodes. Fearing that the program was headed for "hiatus," Internet-based buffs began to hound Fox with angry phone calls and mass mailings. The loyalty oath offered aggrieved fans a chance to "speak out and take action," a Fox spokesman explained. By signing, a viewer pledged "never-ending loyalty and allegiance to the best comedy on television today." The signatory further promised to "tune in for every single episode of the third season" and to recruit friends and family to join him in the cause. The oath ends with a lusty call to action: "With these efforts, I promise to keep 'Arrested Development' alive!" Fox claims that more than 100,000 people have signed it. 
Nonetheless, "Arrested Development" soon lost even more of its audience, and in November Fox announced that it was shortening the current third season by nine episodes - an almost sure sign that the show would soon be canceled. Solar Sailing By BRYAN CURTIS Solar sailing is a bit like a missing link between Carl Sagan and Patrick O'Brian. "Imagine hoisting a sail and being out there in space," says Louis Friedman, the project director of Cosmos 1, the world's first solar-sail spacecraft. "It's a beautiful idea, and it conjures up the idea of the great sailing ships and whole notion of exploration." On June 21, Friedman's team placed the unmanned Cosmos 1 inside an intercontinental ballistic missile and launched it skyward from a Russian submarine. The missile faltered, and Cosmos 1 crashed into the sea - a scene recalling great shipwrecks more than great discoveries. But if, someday, it works, Friedman says that solar sailing could have many advantages over conventional space flight. For one thing, a solar-sailing vessel does not require fuel. Once in space, the Cosmos 1 would have unfurled eight 50-foot-long sails, which would have arrayed themselves around the ship like flower petals. Engineers built the sails from thin sheets of aluminum-coated Mylar that were designed to reflect photons from the sun's light and thus propel the craft forward. As with an earthbound ship, the angle of the sails was to be manipulated to control the direction of the craft. At first, a solar-sailing vessel would accelerate very slowly. But its acceleration would be constant, so that in a little over a year's time the craft could be traveling at speeds in excess of conventional rockets. For this reason, Friedman sees solar sailing as the ideal method to visit other planets. It could, he predicts, shave a few years off a satellite's 10-year trip to Pluto. Sonic Gunman Locator, The By NOAH SHACHTMAN The bombs get all the headlines, but gunfire is also a constant threat to American troops in Iraq. Between the shattered buildings, the rubble piles, the swirling dust storms and the roaring Humvees, shooters can be very hard to find. The Pentagon's response: start equipping Humvees with technology that can automatically pinpoint where the shots are coming from. One system, known as Boomerang, uses a bundle of seven microphones, each facing a different direction, mounted on top of an 18-inch pole. When a bullet flies by, creating a shock wave, each microphone picks up the sound at a slightly different time. Those tiny differences allow the system to calculate where the shooter is. (Boomerang also listens for the blast from the gun's muzzle, which reaches the system just after the bullet's faster-than-sound flight.) Inside the Humvee, a recorded voice buzzes through a dashboard speaker, announcing the shooter's position - "Shot 10 o'clock! Shot 10 o'clock!" - and an analog clocklike display indicates the direction. Other information, like the shooter's G.P.S. coordinates, range and elevation, are also provided. "We're now accurate way beyond 500 meters," says Dave Schmitt, Boomerang's program manager at BBN Technologies in Cambridge, Mass. Splogs By DANIEL H. PINK Newton's third law of motion applies on the Internet as well as in the physical world: for every action, there is an equal and opposite reaction. The convenience of e-mail messages triggered the annoyance of spam. And now the democracy of Web logs has brought forth the duplicity of spam blogs, or splogs. 
During one weekend in October, persons unknown used Google's blog-creation tool, Blogger, to generate more than 13,000 fake blogs. Hosted on Google's free BlogSpot Web site, these splogs consisted of nonsense text, postings swiped from legitimate blogs and, most important, links to sites that sploggers were trying to promote. Because search engines base their rankings in part on how many other sites link to a particular site, splogs can propel the sites to which they are linked to the top of search-engine results. (In this respect, splogging is similar to "Google bombing," a less commercialized version of the same linking strategy, usually done as a prank.) The blogosphere erupted in indignation. Leading the charge was the billionaire blogger and bad-boy Dallas Mavericks owner Mark Cuban, who is also co-owner of IceRocket.com, a tool for searching blogs. "Get Your BlogSpot [Expletive] Together Google," Cuban thundered on his widely read Web log. IceRocket.com temporarily stopped including BlogSpot-hosted sites in its index. And Google quickly added measures that let people flag counterfeit blogs - and that try to ensure that those creating blogs with Blogger are human beings rather than automated scripts. Still, splogs continue to taint search-engine rankings. Technorati, a leading blog tracking and searching service, estimates that between 2 and 8 percent of the 70,000 new Web logs created each day are actually counterfeit. "Blogs are coming at us left and right," Cuban wrote earlier this year. "We are killing off thousands a day, but they keep on coming. Like Zombies. It's straight from 'Night of the Living Dead.'" Stash Rocket, The By JOEL LOVELL For years, various freethinkers among us have been at work on the problem of how to unload the hooch when parents or teachers or law-enforcement officials close in. To this end, immeasurable amounts of drugs have been swallowed and flushed and fed to witless mutts inside Little Debbie snack cakes, but not until the night of June 24 had the distinguished practice seen its highest expression. That evening, Michael Ray Sullivan and Joseph Calvin Seidl, two Kentucky-based, er, entrepreneurs, were pulled off the road by the police near Kingdom City, Mo., and were forced to unveil for the first time their handmade, cigarette-lighter-powered, driver-activated stash rocket. According to an affidavit by Special Agent Steve Mattas of the Drug Enforcement Administration, Sullivan and Seidl's design, which was observed in the trunk of Sullivan's 1990 Ford Thunderbird, consisted of a "hobby-style rocket that was controlled by an elaborate system of ropes and pulleys" that lifted the rocket "from a prone to upright position" upon the opening of the car's trunk. The rocket was three to four feet long and three to four inches in diameter and was connected, via a series of wires that the police say drew their power from the car's cigarette lighter, to a homemade switch in the front of the car. Its payload consisted of two gallon-size Ziploc bags that contained roughly two pounds, or 917 grams, of methamphetamine. Stoic Redheads By AMY SULLIVAN Redheads have long been portrayed in literature and art as strong-willed and fiery. Now there may be a scientific explanation for these traits. The key, according to researchers at McGill University in Montreal, is a gene that is linked both to red hair coloring and to higher levels of pain tolerance.
It has been known since the mid-1990's that mutations of the MC1R gene are responsible for hair color - and fair skin and freckles - in about 70 percent of redheads. But when Jeffrey S. Mogil and his colleagues at McGill set out to find a genetic link to pain inhibition, MC1R wasn't at the top of their list of targets. "We normally only get excited about genes in the brain when it comes to pain," he says. "This is in the skin." There was, however, a little-noticed paper that said MC1R was in fact expressed in the brain. It was enough of a clue to go on. So, earlier this year, Mogil ran some mice through a battery of pain tests, using mice with the red-hair gene as his test group. (A collaborator in the Netherlands ran the same study with humans, giving them electrical shocks to the leg.) When animals and humans experience pain, their brains release natural opiates similar to morphine. In most cases, however, the MC1R gene produces a protein that interferes with the efficacy of those substances as well as of artificial painkillers. What Mogil found is that the variant of MC1R that causes red hair also appears to allow these opiates to work unimpeded. As a result, redheads can withstand up to 25 percent more pain than their blond and brunet peers do before saying "stop." Stream-of-Consciousness Newspaper, The By PAUL TOUGH Hurricane Katrina transformed many things - the city of New Orleans, the coastline of Louisiana, President Bush's approval rating - but perhaps the most surprising change was the one it wrought on the American newspaper. Before the storm hit, the editors of The New Orleans Times-Picayune set up a page on their Web site they called the Hurricane Katrina Weblog. They had done something similar in previous storms, and for the first two days, the blog functioned as it had in the past, as a supplement to the paper, a catchall for breaking news and official announcements on evacuations and shelters. Then the flood waters rose, and the printed edition was shut down. Suddenly the blog became something different: a new kind of newspaper, one in which there was only a front page. The paper's staff was forced to evacuate its headquarters. Jim Amoss, the paper's editor, says that he and his colleagues quickly realized that the print version of their newspaper had become temporarily obsolete. They still compiled a daily edition of the paper and made it available to readers as a full-color download each evening. But the main event, it was clear, was the blog, which was soon publishing 25,000 words a day, with new posts appearing every few minutes. Readership grew exponentially, and several days after the storm, the blog was getting 20 million to 30 million page views a day. (Three weeks later, when Hurricane Rita hit, The Houston Chronicle created a similar, equally successful Internet edition.) Subadolescent Queen Bees By ALEXANDRA STARR Anyone who has spent time in a middle-school cafeteria knows that girls can be nasty. As it happens, cattiness isn't confined to the pubescent set. According to a study released by Brigham Young University researchers earlier this year, girls as young as 4 manipulate their peers to stay atop the social hierarchy. "They'll spread rumors and give their peers the silent treatment," says David Nelson, an assistant professor of marriage, family and human development at B.Y.U. and an author of the study. "They do whatever works to maintain control." So much for the sugar-and-spice reputation of the sandbox set. 
High-school bitchiness has been a cultural fixation for several years now. (Witness the popularity of the movie "Mean Girls," itself based on the best-selling book "Queen Bees and Wannabes.") And psychologists have been studying so-called relational aggression - as opposed to physical aggression - in both male and female adolescents for more than a decade. But the B.Y.U. study, which appeared in April in the journal Early Education and Development, was the first academic paper to document that very young girls know how to exert psychological dominance over their peers. While the study doesn't address how queen bees are formed, Nelson speculates that many of these children are raised by unusually controlling parents, who show them that manipulation reaps results. "They take the relationship paradigm they learn at home and transfer it to their peer group," he says.

Suburban Loft, The
By CLAY RISEN

The loft used to be a distinctly urban animal: empty downtown factories converted first into living spaces by struggling artists, then later into trendy up-market condos. But like many city dwellers, as it has matured, the loft has moved farther away from the hustle and bustle of downtown life. Last year, the National Association of Home Builders incorporated the "loft look" in its annual demonstration home in Las Vegas. A cross between SoHo and Pleasantville, the house featured an open plan, buffed-concrete floors and high ceilings, but it also sat amid a manicured lawn in a gated subdivision. It drew rave reviews and quickly sold for $1.9 million. The "loft look" - also known as "factory chic" - has since proliferated across the Sun Belt and Midwest, often in so-called soft-loft condos (which are built from scratch rather than converted) but occasionally as single-family detached homes. Loft-style houses boast roll-up metal garage doors, cage-ensconced outdoor lights and exposed ductwork - "City living without the city," boasts the developer of Stone Canyon, a loft subdivision in Las Vegas. Many of these lofts also come with the sort of trappings McMansion dwellers have come to expect: walk-in closets, granite countertops, sunken bathtubs. Loft-style living is popular not just in blue-state enclaves like Boulder or Austin: this year developers in Texas announced the 10-story Tower Lofts at Town Square in Sugar Land, Tex., the heart of Tom DeLay's Congressional district.

Synesthetic Cookbook, The
By STÉPHANIE GIRY

Can't figure out what to have for dinner? Hugo Liu, a graduate student at M.I.T.'s Media Lab, has developed a "smart" cookbook that can recommend a dish on the basis of some of the tastes and emotions commonly associated with it. Synesthetic Recipes, a searchable computer database of 60,000 recipes, can't actually read your mind, but it comes close. In the manner of a conventional cookbook, it is indexed according to 5,000 ingredients and 400 cooking procedures. But it can also be searched according to terms that range from the descriptive ("silky") and the playful ("aha") to the referential ("Popeye") and the temperamental ("brooding"). Looking for something that's "exotic" and "melodic" and "citrusy"? The cookbook suggests barbecued pork ribs with a currant glaze or jackfruit pudding. The database takes its name from synesthesia - the blurring of sensations, as when you "see" sounds or "hear" colors. To create his web of food associations, Liu and a team of his fellow M.I.T. researchers first mined a variety of informational sources: food sites on the Web, the records of the culinary historian Barbara Ketcham Wheaton and a catalog of thousands of simple statements (like "butter tastes rich") that were volunteered for an unrelated M.I.T. research project. Liu and his team then cross-referenced this information and combined it with a giant Web bank of recipes. Their task - which took about four years - essentially amounted to programming a computer with the knowledge that, for instance, a soufflé is ethereal because it's fluffy, that it's fluffy because it's made with well-aerated egg whites and that whipping egg whites aerates them.
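In spirit, the index is an ordinary inverted mapping from descriptors to dishes. A toy approximation follows - the dishes and tags are invented for illustration, and the real system's 60,000 recipes and mined associations are far richer than a hand-built table:

    # Toy tag-indexed recipe search, loosely in the spirit of
    # Synesthetic Recipes. Dishes and tags are invented examples.
    recipes = {
        "barbecued pork ribs with currant glaze":
            {"exotic", "melodic", "citrusy", "smoky"},
        "jackfruit pudding":
            {"exotic", "melodic", "citrusy", "silky"},
        "spinach souffle":
            {"Popeye", "ethereal", "fluffy"},
    }

    def search(*terms):
        """Return every dish whose tag set covers all query terms."""
        wanted = set(terms)
        return [dish for dish, tags in recipes.items() if wanted <= tags]

    print(search("exotic", "melodic", "citrusy"))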
Taxonomy Auctions
By CLAY RISEN

Five years ago, the British primatologist Robert Wallace was trekking through the Bolivian rain forest when he glimpsed a new species of monkey - a discovery that eventually gave him and his colleagues, per the rules of the taxonomy world, exclusive naming rights. Usually, species names derive from their physical or geographic characteristics, but biologists have christened their finds after everything from punk-rock bands to Ernest Hemingway. Even Bush, Cheney and Rumsfeld have been honored - with a trio of slime-mold beetles, but honored nonetheless. Wallace and his partners, however, had a different idea: Why not sell off the naming rights and use the proceeds to finance the monkey's habitat? He and the Wildlife Conservation Society soon got in touch with Charity Folks, a New York venue for nonprofits, which this past spring conducted the world's first online taxonomy auction. Ellen DeGeneres made a bold stab at winning the bid - even publicizing the auction on her talk show - but the highest offer came from GoldenPalace.com, a gambling and entertainment Web site, which put up $650,000. Behold Callicebus aureipalatii - roughly, the golden palace monkey. Wallace didn't exactly invent the taxonomy auction - the idea has been floating around biology circles for a few years. And it's not the first time biologists have traded names for money: scientists have long honored their patrons with taxonomic gifts. In Germany, the nonprofit Biopat allows donors to "sponsor" a species of their choice in exchange for naming rights; it even provides an online catalog of unnamed flora and fauna. Nevertheless, March's auction represents something of a breakthrough - providing not only a potentially lucrative font of revenue but also a great source of publicity for an often overlooked field.

"The Crawl" Makes You Stupid
By NOAH SHACHTMAN

A few days after 9/11, Lori Bergen was watching television news and, in her words, "slowly going insane." She was glued to the tube but couldn't focus on what she saw. Bergen, a journalism professor at Kansas State University, blamed "the crawl" - that stream of headlines, sports scores and weather updates constantly slinking across the lower portion of the TV screen - for her inability to concentrate. "It was distracting, how it was moving all the time," she says. "It made it so easy to drift away." So Bergen took a piece of duct tape and placed it on the bottom of the screen. It let her focus. And it made her wonder: Was she the only one who became distracted? To find out, Bergen and a Kansas State journalism colleague, Tom Grimes, along with a researcher named Deborah Potter, designed a study in which students were asked to watch a set of stories from "CNN Headline News." Half the time the crawl was at the bottom; the other half it was edited out.
What the professors found was that students watching the show with the crawl remembered about 10 percent fewer facts than those who watched without it. To Bergen, it was more than a personal vindication. It challenged the notion, trumpeted by media executives like the former Time Warner C.O.O. Robert Pittman, that today's young people somehow absorb information differently than previous generations did. Learning by constantly nibbling at bits and bites from multiple sources at once - what people in the business and computer worlds call "multitasking" - just doesn't work well. It makes you only more distracted, less effective.

Toothbrush That Sings, The
By ARIANNE COHEN

Brushing your teeth is a drag - particularly when you're 3 years old. You can't see over the counter, the toothpaste tastes weird and it all brings you one step closer to bedtime. Enter Hasbro and the music industry's push to introduce pop tunes to children's bedtime routines. In February, Hasbro announced its plans to release Tooth Tunes, a manual toothbrush that transmits a preloaded two-minute song through the jawbone and into the inner ear during brushing. Sound outlandish? Plug your ears and hum - that's approximately how it works. The song is played at the push of a button, giving Junior a musical party in his mouth, while nearby Mom hears only a soft hum. A further advantage: When brushing with Tooth Tunes, Junior will know exactly when he has reached the dentist-recommended two-minute mark, because that's when the music stops. As part of its announcement, Hasbro also revealed that it was negotiating with record companies to license tunes by pop stars like Hilary Duff and the Black Eyed Peas. But Hasbro hit a snag and was beaten to the musical-toothbrush market this year by OraWave's Tuned Musical 2-Minute TwinSpin. This device plays one of eight short tunes through a handle speaker after the two-minute mark, as a reward for proper brushing. "The average person brushes for well under a minute, and children much less than that," says OraWave's president, Tom Hoffecker. Late next year, OraWave expects to introduce the $35 Tuned Musical MP3 TwinSpin, which will download songs into the toothbrush handle through a water-protected U.S.B. port.

Totally Religious, Absolutely Democratic Constitution, The
By NOAH FELDMAN

A decade ago, almost everyone across the political spectrum - from neoconservatives to Islamic fundamentalists - agreed that democracy and Islam were inherently incompatible. This consensus followed from definitions: democracy means the rule of the people, whereas Islam teaches the sovereignty of God. In October, though, Iraqis went to the polls and ratified a Constitution that committed itself with equal strength to both democracy and Islam. The document announced that Iraq would be a democracy with equality for all and declared that no law could contradict the principles of democracy. At the same time, it declared Islam the basic source of law and the official religion, and it decreed that no law could contradict "the provisions of the judgments of Islam." The country's leading Shiite clerics supported the Constitution and instructed their followers to vote for it. The neoconservatives in Washington took a deep breath and then hailed it as a milestone in Middle Eastern freedom. Marrying Islam and democracy has required some reinterpretation of the two ideas.
Islamically oriented supporters of the Constitution have accepted the idea that Islam does not dictate every aspect of ordinary life or of government policy. The Islamic veto will come in only if particular laws are understood by the high constitutional court as violations of Islam's basic tenets. This will put tremendous power in the court itself - which means that its composition is crucial. While the Constitution specifies that it will not be composed exclusively of clerics, it does mandate that experts in Shariah will serve alongside lawyers trained in secular civil law. How many of each must be determined by the legislature that will be elected Dec. 15. Of course, tensions are likely, especially when it comes to marriage, divorce and inheritance. The Constitution promises every Iraqi the right to be governed by the family law of the religious denomination of his or her choice. How then to handle couples who belong to different denominations - or who want secular laws to govern their relationship? And if Shariah as interpreted by one religious group makes it difficult or impossible for women to initiate divorce, is this a violation of the Constitution's commitment to democratic equality? Opinions are sure to differ, and the courts will have to weigh in alongside the legislature.

Touch Screens That Touch Back
By CATHERINE PRICE

Americans are familiar with the touch screens on A.T.M.'s, casino games and flight check-in kiosks. Curiously, though, none of these technologies actually take advantage of a user's sense of touch. Despite our skin's enormous ability to give us feedback about our surroundings, our eyes dominate our other senses. That may be about to change. Developments in haptic technology - that is, technology that simulates the sense of touch - suggest that our machines are about to start touching back. Immersion, a company in San Jose, Calif., has developed new systems that enable touch screens to give tactile feedback: when you press the buttons on a screen, you actually feel them click, as if they were buttons on a touch-tone phone - even though the screen is not actually being depressed. This is achieved, counterintuitively, by moving the glass of the touch-screen display quickly from side to side by about 0.2 to 0.3 millimeters - creating the illusion that the glass is moving up and down far more than it actually is. "Whether you move the glass sideways or in displacement, most people perceive it as displacement because that's what they're expecting," says Mike Levin, vice president of Immersion's industrial control group. "The brain is tricked into believing that it's a press motion."

Trial-Transcript Dramaturgy
By JASON ZINOMAN

When the judge in the Michael Jackson child-molestation trial banned cameras from the courtroom, he left the producers at the E! network with a serious problem. How could they satisfy their audience's appetite for celebrity news without footage of the king of pop squirming next to his lawyer? There was only one thing to do: fake it. E! whipped up a crude but faithful simulation of this year's version of the Trial of the Century, using look-alike actors, a makeshift set and verbatim excerpts of the courtroom transcripts as the script. The network had staged a high-profile court case once before - during the O.J. Simpson civil trial - but it was a less-polished production. ("Midway through the trial, the actor playing O.J. asked for more money," the producer Jeff Shore recalls. "He was replaced the next day, but no one seemed to notice.") "The Michael Jackson Trial," which was broadcast every night at 7:30 p.m. and 9 p.m., was presented in half-hour and hourlong installments featuring the highlights of the previous day in court. To provide some context and narrative glue, a panel of real lawyers offered analysis. The show was a modest ratings hit, and despite the complaints of television critics (Tom Shales of The Washington Post called it "cheap, creepy, foolish and lurid - and those are its good points"), it was actually quite restrained compared with most of the news coverage of the case. The direct quoting of testimony gave a good sense of how tedious even the most circuslike trial can be. In general, the cast members, who were assisted by reports from eyewitnesses about the facial expressions and tone of the real participants, underplayed their roles. The fake Michael Jackson looked more real than the real one, and even Jimmy Kimmel's performance as Jay Leno, who was a witness, seemed closer to method-acting seriousness than broad parody.

Trust Spray
By JOHN GLASSIE

The hormone oxytocin, which plays a role in childbirth, breast-feeding, orgasm and feelings of love, is usually thought to have a happy set of responsibilities within the body. But a new study suggests that the hormone could be put to more sinister uses. According to a paper published in the June issue of Nature, a research team at the University of Zurich has determined that a nasal spray containing oxytocin can be used to make human subjects more trusting. In the study, 128 male participants played several rounds of a game borrowed from economic and social-behavior theory. The game essentially offers rewards to "investors" who are willing to temporarily entrust some or all of their money to anonymous "trustees." Almost half of the investors who took three puffs per nostril of the oxytocin spray transferred all of their money to their unseen trustees, whereas only a quarter of those who inhaled a placebo went that far. "Oxytocin doesn't make you nicer or more optimistic or more willing to gamble," says Michael Kosfeld, who headed the research team. "It causes a substantial increase in trusting behavior." Kosfeld grudgingly allows that his team's research could one day be applied to exploit people. But for the time being, salesmen, politicians and Lotharios looking to increase their appearance of trustworthiness will have a hard time gaining much benefit from the new findings. "At this point you have to use the nasal spray," Kosfeld says, "so you really need the consent of the other person, which requires a certain degree of trust in the first place." Moreover, the effect lasts only a few minutes - hardly long enough to negotiate a contract for someone's soul.
Two-Dimensional Food
By JON FASMAN

A picture may be worth a thousand words, but is it dinner? At Chicago's trendy Moto restaurant, it is: a 20-course tasting menu can begin with "sushi" made of paper that has been printed with images of maki and wrapped around vinegared rice and conclude with a mint-flavored picture of a candy cane. Should you fail to finish a course, Homaro Cantu, Moto's executive chef, will emerge from the kitchen with a refund: a phony dollar bill flavored to taste like a cheeseburger and fries. It may sound like some sort of Surrealist stunt with dire intestinal consequences, but here's the rub: the "food" tastes good. Good enough to lure diners back at $240 per head (including wine). Cantu, who says that as a child he had "a fascination with how things tasted, especially inedible things," has essentially combined a high-end kitchen with a Kinko's. Using a modified ink-jet printer and organic, food-based inks, he prints images of food (and other objects) on specially designed paper made of modified food starch. He skillfully adds intensely flavored liquid seasoning, and voilà: a printout of a cow that tastes like filet mignon. (The paper itself is neutral-tasting and free of allergens and calories; the flavorings are stable, with a long shelf life, and may contain amino acids and other added nutrients.)

Uneavesdroppable Phone Conversation, The
By RYAN BIGGE

Sick of your colleagues listening in on your phone conversations? The traditional method of preventing eavesdropping in the workplace is to build dampers and baffles into cubicle walls. But now a device called Babble attacks the problem at the source, transforming the chatter emanating from your cubicle into a flow of meaningless mumblings. Babble, which hit shelves in June, consists of two speakers and a small sound generator that attaches to your phone. The generator isolates and records the various phonemes - the building blocks of intelligible speech - of your speaking voice. Then when you activate it for a telephone conversation, it generates a stream of random phonemes that counteract the inflections and drops in your voice. When that parallel "conversation" emerges from the Babble loudspeakers and combines with your actual conversation, it produces a choral arrangement of sweet nothings. "It creates the music of voice, without the meaning of voice," explains Danny Hillis, a founder of Applied Minds, a research-and-development firm that created the technology with Sonare Technologies. While productivity gains may help justify its $395 price tag, Babble could also be valuable for protecting the confidentiality of patient information in places like waiting rooms and hospital reception areas.

Urine-Powered Battery, The
By JOEL LOVELL

In their quest to develop a smaller, cheaper battery for medical test kits - like those used to detect diabetes by analyzing a person's urine - scientists in Singapore had a eureka moment of sorts when they realized that the very urine being tested could also serve as a power source. In the September issue of The Journal of Micromechanics and Microengineering, Ki Bang Lee described how he and his team of researchers created "the first urine-activated paper battery" by soaking a piece of paper in a solution of copper chloride, sandwiching it between strips of magnesium and copper and then laminating the paper battery between two sheets of plastic. In this setup, the magnesium layer serves as the battery's anode (the negatively charged terminal) and the copper chloride as the cathode (the positively charged terminal). An electricity-producing chemical reaction takes place when a drop of urine, which contains many electrically charged atoms, is introduced to the paper through a small opening in the plastic.

Video Podcasts
By ROBERT MACKAY

In October, when Steve Jobs announced Apple's release of the video-playing iPod, he spoke at length about the hit TV shows and music videos that could be purchased and downloaded for the device at the iTunes store. He spent less time talking up the free content available there that could, in the long run, be more significant: video podcasts. Podcasting is an Internet alternative to broadcasting. Instead of listening to the radio or watching TV, you subscribe to one of the thousands of Web sites that regularly post audio or video programs. New episodes of the programs are delivered to your computer and transferred to a portable player like an iPod, automatically, as soon as they are published - in much the same way that digital video recorders like TiVo allow you to subscribe to TV programs.
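Mechanically, a "podcatcher" is just a program that polls a feed and downloads any media attachments it has not seen before. A minimal sketch using only the Python standard library - the feed URL is a placeholder, and real clients like iTunes add scheduling, device syncing and error handling:

    # Minimal podcatcher: poll an RSS feed, fetch new enclosures.
    # The feed URL is a placeholder; error handling is omitted.
    import os
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED = "https://example.com/videopodcast.rss"   # hypothetical feed

    with urllib.request.urlopen(FEED) as response:
        tree = ET.parse(response)

    for item in tree.iter("item"):
        enclosure = item.find("enclosure")          # the media attachment
        if enclosure is None:
            continue
        url = enclosure.get("url")
        filename = os.path.basename(url)
        if not os.path.exists(filename):            # only new episodes
            urllib.request.urlretrieve(url, filename)
            print("fetched", filename)

Run on a schedule, a loop this simple is the whole "subscription": the publisher just keeps appending items to the feed, and every subscriber's machine picks them up on its next poll.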
What gives video podcasts their revolutionary potential is that, like audio podcasts, they can be made and published on the Web by producers with large budgets and salaries or producers with no budgets and allowances. By making it easy to subscribe to podcasts through iTunes, Apple is allowing home-schooled media makers to distribute their programming directly to a global audience. Only a handful of the more than 2,000 Web sites that offer video by subscription right now are owned by TV stations, and part of the charm of the format in its infancy is that a professional video podcast about elections in Azerbaijan, made by a documentary filmmaker working for washingtonpost.com, can exist side by side in an iTunes playlist with homemade, autobiographical video podcasts that open small windows into more personal current events - like a college kid in Michigan playing drunken miniature golf, women in a Manhattan office bantering about Cheerios, a fan's-eye view of a rock show in Minneapolis or a man stuck in an airplane seat during a long delay trying to make sense of the items for sale in the SkyMall catalog he finds in the seat pocket in front of him.

Why Popcorn Doesn't Pop
By REBECCA SKLOOT

Here's an undeniable fact: when you make popcorn - no matter what brand you use, no matter how closely you follow the directions - some kernels just won't pop. Here's another undeniable fact: at some point someone labeled those unpoppable kernels "Old Maids." That person was not Bruce Hamaker, who has never used the phrase but has received hate mail from people saying he's sexist because of it. They've got the wrong guy. Bruce Hamaker's only connection to unpopped kernels is entirely inoffensive. He has figured out why they don't pop. Popability depends on water. As the kernel heats up, water inside it releases steam, putting more and more pressure on the kernel until it explodes. In a recent study, Hamaker, a Purdue University food chemist, found that popcorn with lower water content left more unpopped kernels. "That," he says, "led us to the obvious question, What causes low moisture in certain kernels?" Which led straight to the hull, the crunchy outside of the corn. The hull is made up of several thin sheets of cellulose. Turns out, when cellulose gets hot, it changes structure. Its thin sheets become crystals that bond so tightly together, water can't pass through. The more crystalline the hull becomes, the less water can leak out, and the more likely it is to pop. So the key to maximum popability is using popcorn strains whose hulls become most crystalline.

Worldwide Flat Taxes
By MICHAEL STEINBERGER

In the United States, advocates of a flat tax - a single rate applied to all taxpayers regardless of income level - are generally regarded as the intellectual heirs to the flat-earthers. Arguments to replace the graduated income tax with a flat tax have won little respect in policy circles and even less among voters (witness Steve Forbes's two failed presidential bids). Overseas, however, it is a different story. In August, it was reported that Greece was weighing the introduction of a 25 percent flat tax, joining a growing list of European countries that either have adopted the flat tax or are giving the idea strong consideration. The flat tax has proved especially popular in former Soviet bloc countries. Estonia was the first to implement one, establishing a 26 percent flat rate in 1994. Since then, Latvia, Lithuania, Ukraine, Slovakia, Romania and Georgia have all flattened their tax rates. Russia itself went to a single bracket in 2001, and Bulgaria, Croatia and Hungary are expected to follow suit in the near future. All this has left American flat-taxers exultant. "The world is flat," crowed The Wall Street Journal in a recent editorial. The fact that an idea rooted in conservative ideology has gained its strongest following in formerly Communist Eastern Europe only adds to their sense of vindication. As they see it, the flat tax is winning converts because it is easy to administer, helps reduce tax evasion (especially if the tax rate is set relatively low) and stimulates economic activity. They point, for instance, to the impressive growth and the rise in tax revenues that Russia has enjoyed since introducing its flat tax.
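The administrative appeal is easy to demonstrate: a flat levy is a single multiplication, while a graduated schedule requires walking a whole bracket table. A stylized comparison follows - the 26 percent rate is Estonia's, from above, but the graduated brackets are an invented example, not any country's actual code:

    # Stylized flat-vs-graduated comparison. The 26% rate is
    # Estonia's (per the article); the bracket schedule is invented.
    def flat_tax(income, rate=0.26):
        return income * rate

    BRACKETS = ((10_000, 0.10), (30_000, 0.25), (float("inf"), 0.40))

    def graduated_tax(income, brackets=BRACKETS):
        tax, lower = 0.0, 0
        for upper, rate in brackets:
            tax += max(0, min(income, upper) - lower) * rate
            lower = upper
        return tax

    for income in (8_000, 25_000, 100_000):
        print(income, flat_tax(income), graduated_tax(income))

The printout also shows the distributional trade-off critics point to: at these invented brackets the flat schedule collects more from low earners and less from high earners than the graduated one.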
Yawn Contagion
By REBECCA SKLOOT

Hanging out with Steve Platek will make you yawn. He'll get you thinking about yawning, reading about yawning, and sooner or later, your mouth's gaping. You can't help it. "My favorite way to induce a yawn," Platek says, "is a video clip of a good yawner paired with yawn audio." Platek, a cognitive neuroscientist at Drexel University, alternately describes yawning as "a primitive unconscious mechanism" or something that's "sweet," "totally cool" or "awesome." And he's finally figuring out why it's contagious. Scientists (and everybody else) have known for decades that yawns are contagious, but they've never known why. Some think it's an unconscious mirror effect - someone yawns, you yawn in response almost like a reflex. But Platek says he thinks it has to do with empathy. The way he sees it, the more empathetic you are, the more likely it is that you'll identify with a yawner and experience a yawn yourself. In a recent study, Platek looked at contagious yawning in people with "high empathy," "low empathy" and everything in between. He found that people higher in empathy were more yawn-susceptible, while those lower in empathy were more yawn-immune. But that wasn't proof enough. So Platek put volunteers in M.R.I. machines and made them yawn again and again to pinpoint the areas of the brain involved. When their brains lighted up in the exact regions of the brain involved in empathy, Platek remembers thinking, "Wow, this is so cool!" Some yawning researchers - of whom there are few - have identified many types of yawns. There's the contagious yawn, the I'm-tired yawn and the I-just-woke-up yawn. There's the threat yawn, which is the my-teeth-are-bigger-than-yours yawn that's so popular with primates. ("People do it, too," says Platek, "but unfortunately, we don't have scary teeth anymore.") There's also the sexual yawn. (One scientist claims that yawns are used in seduction.) At some point, you have to wonder: why study yawning? It's quirky, interesting, but not important, right? Wrong, says Platek. Nearly every species on the planet yawns: insects, fish, birds, reptiles, mammals. "Yawning is such a primitive neurological function," Platek says, "it's a window into what happened during the evolution of the brain."
Yoo Presidency, The
By JEFFREY ROSEN

Can the president of the United States do whatever he likes in wartime without oversight from Congress or the courts? This year, the issue came to a head as the Bush administration struggled to maintain its aggressive approach to the detention and interrogation of suspected enemy combatants in the war on terrorism. But this was also the year that the administration's claims about presidential supremacy received their most sustained intellectual defense, rooted in a controversial theory known as the unitary executive. The defense was set out in a book called "The Powers of War and Peace: The Constitution and Foreign Affairs After 9/11," written by John Yoo. A law professor at Berkeley and a former deputy assistant attorney general in the Office of Legal Counsel, Yoo is most famous for his contributions to memos arguing that the Geneva Conventions - as well as criminal laws prohibiting torture - don't apply to enemy combatants. In his book, Yoo provides a historical basis for claims of broad presidential power by arguing that the framers of the Constitution intended to create a unitary executive responsible for ensuring that all executive-branch policies - from firing cabinet officers to declaring war - conform to the wishes of the national constituency that elected the president. Critics have accused Yoo of ignoring the fact that the framers were wary of giving the president the power of a king, but Yoo responds that Congress, by withholding funds for war, can check the president in the same way that Parliament checked the crown.

Zero-Emissions S.U.V., The
By SUSAN DOMINUS

In the 90's, environmentalists could celebrate at least one success story: the "cap and trade" system, a market-inspired strategy for reducing harmful factory emissions. The way it works is simple. Companies that want to produce emissions beyond the legal limit are allowed to buy the right to release additional emissions from companies that have managed to keep their own emissions below the limit. Recently, Karl Ulrich, a professor at the Wharton School of the University of Pennsylvania, introduced a microversion of the same policy - only applied to individual automobiles as opposed to factories. It allows a socially conscious driver to cancel out the environmental damage caused by his car. The system is called TerraPass. If a driver can't stomach the thought of trading in his sleek S.U.V. for a more-fuel-efficient but less-than-thrilling station wagon, he can pay a fee to a company that is also called TerraPass. The company then allocates the money to reduce industrial carbon dioxide emissions, support renewable energy like wind farms and purchase (but not use) pollution credits from companies, among other environmentally conscious endeavors. The fee is proportional to a car's carbon dioxide emissions - approximately $80 a year in the case of an S.U.V. Officially introduced just over a year ago, the company has sold TerraPasses, which take the form of a small decal that drivers can put on their windshields, to approximately 2,300 people (some of them not even car owners). The decals announce that a driver is more environmentally concerned than his choice of transportation might otherwise suggest. The company takes a small cut of its sales, taking advantage of the apparently healthy market for guilt reduction.
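The quoted fee is consistent with back-of-the-envelope arithmetic. Every input in the sketch below is an assumed round number chosen for illustration, not TerraPass's actual pricing model:

    # Rough check on the ~$80-a-year S.U.V. figure. All inputs are
    # assumed round numbers, not TerraPass's actual parameters.
    miles_per_year = 15_000
    miles_per_gallon = 15            # assumed S.U.V. fuel economy
    lbs_co2_per_gallon = 19.6        # roughly, for gasoline
    dollars_per_ton = 8.0            # assumed cost of abating a ton

    gallons = miles_per_year / miles_per_gallon        # 1,000 gallons
    tons_co2 = gallons * lbs_co2_per_gallon / 2_000    # ~9.8 short tons
    print(round(tons_co2 * dollars_per_ton))           # -> 78, about $80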
Although TerraPass certainly works on a free-market principle, it's lacking the element of naked self-interest that would drive a truly global change. A more exact parallel to the cap-and-trade system would be one in which drivers who saved fuel by moseying down a 60-m.p.h. lane could accrue electronic passes they could sell the next morning on eBay to whoever needed to dart to work or the airport that morning at 70 m.p.h. The market for environmental righteousness may be growing, but surely not as fast as the market for speed.

Zombie Dogs
By STEPHEN MIHM

Just as dogs preceded humans in making the first risky voyages into space, a new generation of canines has now made an equally path-breaking trip - from life to death and back again. In a series of experiments, doctors at the Safar Center for Resuscitation Research at the University of Pittsburgh managed to plunge several dogs into a state of total, clinical death before bringing them back to the land of the living. The feat, the researchers say, points the way toward a time when human beings will make a similar trip, not as a matter of ghoulish curiosity but as a means of preserving life in the face of otherwise fatal injuries. The method for making the trip is simple. The Safar Center team took the dogs, swiftly flushed their bodies of blood and replaced it with a relatively cool saline solution (approximately 45 to 50 degrees) laced with oxygen and glucose. The dogs quickly went into cardiac arrest, and with no demonstrable heartbeat or brain activity, clinically died. There the dogs remained in what Patrick Kochanek, the director of the Safar Center, and his colleagues prefer to call a state of suspended animation. After three full hours, the team reversed their steps, withdrawing the saline solution, reintroducing the blood and thereby warming the dogs back to life. In a flourish worthy of Mary Shelley, they jump-started their patients' hearts with a gentle electric shock. While a small minority of the dogs suffered permanent damage, most did not, awakening in full command of their faculties. Of course, the experiments were conducted not to titillate fans of horror films but to save lives. Imagine a stabbing victim brought to the emergency room, his aorta ruptured, or a soldier mortally wounded, his organs ripped apart by shrapnel. Ordinarily, doctors cannot save such patients: they lose blood far more quickly than it can be replaced; moreover, the underlying trauma requires hours of painstaking repair. But imagine doctors buying time with the help of an infusion of an ice-cold solution, then parking their patients at death's door while they repair and then revive them.
From checker at panix.com Mon Dec 12 01:36:14 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 11 Dec 2005 20:36:14 -0500 (EST)
Subject: [Paleopsych] Dallas Morning News: Gene variant may depress IQ of males
Message-ID:

Gene variant may depress IQ of males
http://www.azstarnet.com/sn/printDS/105238

[This article appeared in many places. I don't know its original provenance, but different sources of the same text have been posted on several groups I subscribe to.
[This is just one more finding coming down the pike, and I would not attach too much significance to it, yet.]

Published: 12.03.2005
THE DALLAS MORNING NEWS

DALLAS -- Scientists in North Carolina say they have identified a gene that affects IQ, a finding that, if confirmed, would be a significant step toward understanding the genetic basis for intelligence. The new research could also have ethical implications because the effect of the gene appears to be quite dramatic: The scientists say that males who inherit a particular version of the gene have, on average, an IQ that is 20 points lower than males who don't. "I have to admit, the ramifications of it are great," said Randy Jirtle, the Duke University biologist who led the new research, noting that current genetic-testing techniques can easily determine which males have that version. However, he stressed that the IQ results in his research were based on a group average; individual males carrying the gene version had a wide range of IQ scores. While females also can carry the variation, it does not appear to affect their IQ, he said. Jirtle reported the new findings last month at a scientific conference in Durham, N.C. As early as the 1920s, research suggested that genetics play a key role in determining a person's mental capabilities. But so far, connections between IQ and specific genes have been just correlations, with little supporting evidence. The new research, Jirtle and other experts said, will need to be replicated before it is considered definitive. Jirtle's research centers on a gene identified as IGF2R, for type 2 insulinlike growth factor receptor. The gene governs the production of a protein that, among other jobs, affects cell growth. All people carry the gene, but some have a version with a slightly different code, Jirtle said. This variation, he and his colleagues found, correlates with a lower IQ. The researchers studied about 300 children with an average age of 10. The children, all Caucasian, came from six counties in the Cleveland area.
As a group, males -- but not females -- who had the variant gene had IQ scores about 20 points lower than males who didn't.

From checker at panix.com Mon Dec 12 01:36:24 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 11 Dec 2005 20:36:24 -0500 (EST)
Subject: [Paleopsych] NYT: A Pioneering Transplant, and Now an Ethical Storm
Message-ID:

A Pioneering Transplant, and Now an Ethical Storm
http://www.nytimes.com/2005/12/06/science/06prof.html

["Ethical Concerns on Face Transplant Grow" appended.]
[These "ethicists" are too risk averse.]

Scientist at Work: Jean-Michel Dubernard
By [3]LAWRENCE K. ALTMAN

Dr. Jean-Michel Dubernard, whose decision to perform the world's first partial face transplant has placed him at the center of an ethical storm, leads a kind of double life. As a surgeon in Lyon, Dr. Dubernard, 64, has been a pioneer, developing techniques to transplant pancreas glands and other tissues, and organizing the international team that performed the world's second hand-forearm transplant in 1998. (The first was performed in Ecuador in 1964 before advances in drugs and microsurgery.) But Dr. Dubernard is also a politician, a former deputy mayor of Lyon who is one of the most powerful members of the French National Assembly. "There's a big brain behind him and a steely will that is willing to confront massive criticism," said Dr. Thomas E. Starzl of the University of Pittsburgh, who performed the first successful [4]liver transplants, in Denver. In performing a face transplant on a severely disfigured 38-year-old woman, Dr. Dubernard has now entered one of the most disputed frontiers in transplantation science. The transplants are extremely risky, and no one can say what a patient will look like afterward. Critics have said that in rushing to be first to do a face transplant, Dr. Dubernard bypassed standard procedures to reconstruct the face of the woman, who was severely bitten by her dog last May. Dr. Laurent Lantieri, a rival transplant surgeon in Paris, has said that Dr. Dubernard and his team did not follow ethical and legal guidelines in performing the transplant. Other transplant experts have raised questions about the woman's psychological stability and about Dr. Dubernard's decision to give the woman an infusion of [5]stem cells from the donor's bone marrow shortly after the face transplant in an effort to prevent rejection of the new face, a procedure they say is still experimental. Dr. Dubernard has responded that the operation, performed in Amiens, met all French ethical and legal standards and that the patient was examined by psychiatrists and found to be an acceptable candidate for a transplant. He has also been critical of news coverage of the woman's operation. Dr. Dubernard withstood similar criticisms after his team gave a new hand to Clint Hallam in 1998 and then was deeply embarrassed when reporters learned that Mr. Hallam had a criminal record and that he had lost his hand while in prison. Further, Mr. Hallam turned out to be an unreliable patient, refusing to take the prescribed immunosuppressant drugs to prevent rejection of the graft and to do the regular exercises needed to train his new hand. He demanded amputation of the new hand in 2001. By applying knowledge gained in Mr. Hallam's case, Dr. Dubernard's team went on to perform successful hand-forearm transplants on two other patients. Each had lost both hands. The hand recipient whose transplant has functioned the longest is due to celebrate his sixth anniversary in January.
Surgeons around the world have successfully performed a total of 30 hand-forearm transplants, including the three in Lyon, said Dr. Nadey Hakim of London, a team member who amputated Mr. Hallam's new hand. In a telephone interview, Dr. Hakim described Dr. Dubernard as "big, pushy and at the same time friendly and kind." In juggling his two careers, Dr. Dubernard says he usually commutes to Paris for two days each week to tend to politics in the French Parliament. On other days, he cares for patients at the Edouard Herriot Hospital in Lyon. He describes himself as a workaholic and a chain smoker who has quit several times over the last 40 years. The last time was two years ago. Dr. Dubernard was born at the hospital where he works. His father was a general practitioner and his mother a pharmacist. To friends and colleagues, he is known as Max, a nickname he was given in school for always giving his maximum effort, particularly in rugby. An illness influenced Dr. Dubernard's decision to become a doctor. He was in awe of the surgeon who performed an emergency appendectomy on him when he was 11, he said, and he decided to become a surgeon himself. As a medical student at the University of Lyon, Dr. Dubernard caught [6]tuberculosis. After the illness disqualified him from military service, he went to Belgium to do research on liver and other transplants. One day, his Belgian superiors received a call from Dr. Joseph E. Murray of Harvard, who had a sudden vacancy for a research trainee. Dr. Dubernard volunteered, he said, but his Belgian hosts told him that at age 24 he was too young. When no one else accepted, Dr. Dubernard went to Boston. He said he dedicated the face operation to Dr. Murray, who won a Nobel Prize in Medicine in 1990 for helping to perform the world's first successful organ transplant, a kidney in identical twins, in 1954. Dr. Dubernard was unmarried when he worked at Harvard and "left a lot of broken hearts when he returned to France," Dr. Murray recalled in an interview over the weekend. In Lyon, Dr. Dubernard earned a doctoral degree, in part for work on xenotransplants, or transplants between species, among two kinds of monkeys. At age 37, he became chief of urology at the Herriot Hospital and at the University of Lyon. He has served in the French Parliament since 1986. His interest in politics, he said, comes in part from memories of his family's involvement in the French resistance movement. When Dr. Dubernard could not find a candidate for a hand transplant in France, he turned to his friend, Dr. Earl Owen of Sydney, Australia, who shared a goal of transplanting a hand. Dr. Owen had what the team believed was a good candidate in Mr. Hallam. Dr. Dubernard said that he was not anxious before performing the operation to give Mr. Hallam a new hand or the one to give the woman a partial new face. The reason, he said, was his confidence in the drills that his team of dermatologists, psychiatrists, nurses and other experts had followed in practicing each step of the complicated procedures. "Once the preparations were done, I didn't worry anymore," Dr. Dubernard said. "But, after the transplants, it was another problem." Dr. Dubernard said that when he went to sleep after Mr. Hallam's operation, he awoke from a dream, horrified that the new hand had turned black from acute rejection. It had not. 
Doctors who have examined a number of the hand transplant recipients have been impressed with the psychological benefits the procedure offered the patients, particularly the double amputees. But experts debate the degree of nerve sensation and motor function that the recipients have regained from the transplants. Dr. Dubernard said he was hesitant about performing the partial face transplant until he examined the woman in Amiens and saw the severity of the wounds. She had difficulty speaking and eating, as food fell from her mouth, he said. Independent experts told his team that the wounds were "very difficult, if not impossible" to repair with standard reconstructive surgery, Dr. Dubernard said. But, he said, "We knew we could improve her life." Dr. Dubernard said he slept only about three hours each night last week, in part because he worried about questions like: Would the arteries and veins clot, jeopardizing survival of the graft? Now that certain danger points have passed, he said he is beginning to sleep better and longer. Still, he says he knows that the woman's immune system can reject the new face at any time during her life. At a news conference in Lyon on Friday, Dr. Dubernard exuded confidence. He appeared bright-eyed, eyebrows continually raised, energetic, funny and quick to engage reporters. He clearly is someone who loves the limelight, as he asserted himself over his more subdued colleagues. Dr. Dubernard said in interviews this weekend that if a need arose, he would not hesitate to receive a new hand or face, or give approval for one to his three children or six grandchildren. He said he is divorced from his first wife, and lives with Dr. Camille Frances, a professor of dermatology in Paris. Dr. Dubernard says that under French law he faces mandatory retirement from practice in two years and is not sure what he will do then. A full-time career in politics is one possibility. Another is becoming a poet to express his wide-ranging interests, including a love of mythology. Ariane Bernard contributed reporting from Paris for this article.

References
3. http://query.nytimes.com/search/query?ppds=bylL&v1=LAWRENCE%20K.%20ALTMAN&fdq=19960101&td=sysdate&sort=newest&ac=LAWRENCE%20K.%20ALTMAN&inline=nyt-per
4. http://topics.nytimes.com/top/news/health/diseasesconditionsandhealthtopics/transplants/index.html?inline=nyt-classifier
5. http://topics.nytimes.com/top/news/health/diseasesconditionsandhealthtopics/stemcells/index.html?inline=nyt-classifier
6. http://topics.nytimes.com/top/news/health/diseasesconditionsandhealthtopics/tuberculosis/index.html?inline=nyt-classifier

Ethical Concerns on Face Transplant Grow
http://www.nytimes.com/2005/12/06/international/europe/06facex.html
By MICHAEL MASON and LAWRENCE K. ALTMAN

In urgent telephone calls and agonized e-mail messages, American scientists are expressing increasing concerns that the world's first partial face transplant, performed in northern France on Nov. 27, may have been undertaken without adequate medical and ethical preparation. Some scientists say they fear that if the French effort fails, it could not only threaten the life of the transplant recipient, a 38-year-old Frenchwoman, but jeopardize years of careful planning for a new leap in transplant surgery. "We've been working on the ethics and the science for some time, going slowly while we figure out immunology and patient selection criteria and indications," said Dr. L. Scott Levin, chief of plastic and reconstructive surgery at Duke University Medical Center. "This flies in the face of everything we've tried to do." The scientists' worries stem in part from the execution of the surgery, and in part from news reports over the weekend that called into question the patient's emotional state. Dr. Maria Siemionow, director of plastic surgery research at the Cleveland Clinic, who has been preparing to perform a full face transplant, said that the way the transplant was conducted appeared to conflate two experimental protocols: the transplantation of facial tissue and the infusion of stem cells from the donor bone marrow into the patient in an attempt to prevent rejection of the new face. The first procedure, although untried until now, has been well studied, and the microsurgical techniques involved are commonplace. But the second has been successful in human subjects only rarely and only recently. While pilot studies do suggest that an infusion of stem cells from the donor can help produce "chimerism" in humans, a state in which foreign tissue is tolerated by the body with comparatively little or no suppression of the immune system, it is far from standard practice in transplantation. The French team's decision to perform two novel procedures simultaneously means that it may be difficult to determine the cause of success or failure of the transplant, Dr. Siemionow said. "They should not be doing two experiments on the same patient," she added. "Ethics aside, it will make it difficult to get clean answers - if it works, why does it work, and if it goes wrong, was it the transplant or the stem cells?" In a telephone interview yesterday, Dr. Jean-Michel Dubernard, the surgeon whose team performed the groundbreaking operation, in which the patient received lips, a chin, and a nose from a brain-dead donor, defended the infusion of bone marrow stem cells and denied that the procedure was a step into uncharted territory. "It is not two experiments at the same time," he said. But American experts said that the French team's approach diverged in other ways from what had been scientific consensus about how to proceed. The transplant was performed months after the woman's injury, and before any attempt at conventional reconstructive surgery. The French doctors said traditional surgery could not have salvaged the woman's face. Dr. Benoît Lengelé, a Belgian specialist in facial injuries, and other experts had judged that reconstructive surgery would be "very difficult, if not impossible" in the patient's case, Dr. Dubernard said. Yet reconstructive surgeons in the United States and in Europe routinely operate on patients with similar injuries, some missing as much as half their faces. Surgeons like Dr. Siemionow have long argued that the first face transplant should be attempted only on a patient who has tried to live with the alternatives. The psychological issues are as complex as the surgical ones. Scientists have been concerned about how thoroughly the patient was emotionally prepared for the procedure, concerns that were only heightened when The Sunday Times of London reported that she had admitted that she sustained her injuries during a suicide attempt. According to the newspaper, the woman said that she was mauled by her family's Labrador retriever after she took an overdose of sleeping pills and collapsed. She believed the dog was trying to revive her, the newspaper said. The Times also reported that the donor had committed suicide.
The reports greatly alarmed experts in the field; the experimental protocols devised at both the Cleveland Clinic and the University of Louisville explicitly preclude emotionally unstable candidates. Patients with a history of depression sometimes do not comply with the complicated drug regimens necessary to prevent organ rejection. The news reports, however, were vigorously denied yesterday by Dr. Dubernard, who responded, "No, no and no!" when asked if his patient had tried to take her own life. He said the woman had taken only two sleeping pills for insomnia after a family argument. Dr. Dubernard said that the woman was approved as a candidate for a face transplant only after a thorough psychological examination by an independent expert and by mental health professionals working with the transplant teams in Amiens, where the operation was performed, and in Lyon, where the woman is now being monitored for rejection reactions. "These people are not stupid," he said. The patient, who had difficulty eating before the transplant because of the injuries, ate dinner Sunday night and lunch on Monday, Dr. Dubernard said. Dr. Dubernard said the woman had visited the Édouard Herriot Hospital in Lyon "in October and again in November to meet the transplant team and to ask questions." Yet longer-term psychological evaluation might have been useful for another reason, experts said. Since this is the first transplant of its kind, and it strikes deeply at questions of personal identity, the French patient's emotional stamina will be sorely tested. Already some news reports in Europe have identified the woman, who requested anonymity, and published pictures of her before and after the surgery. Scientists planning for the first face transplant knew it would happen. One of the Cleveland Clinic's screening criteria is that a candidate for this procedure must be able to withstand intense public scrutiny - to be able to see pictures of the face's former owner, for example, on tabloid covers at the checkout rack. Resilience is important both for the first patient, say researchers, and for those who would follow. "Every patient, when you talk to them, their goal is just to get out of the limelight," said Dr. David Young, associate professor of plastic surgery at the University of California, San Francisco, which also has been drawing up plans for a face transplant. "If this works, many potential patients who are on the fence will change their minds. But if this thing crashes and burns, it will damage the field." For some experts, even the best-case situation has a down side. "We want for this to go well," said Dr. Siemionow. "But if it does, then I am afraid everyone will forget that the ethics were not proper here. And if it does not, then they will be blaming the transplant procedure but not the ethics behind it."

From checker at panix.com Mon Dec 12 01:36:33 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 11 Dec 2005 20:36:33 -0500 (EST)
Subject: [Paleopsych] Lisa Barrow and Cecilia Elena Rouse: Does College Still Pay?
Message-ID:

Lisa Barrow and Cecilia Elena Rouse: Does College Still Pay?
The Economists' Voice Volume 2, Issue 4 2005 Article 3
Tuesday, December 06, 2005

[This is an absolutely bad paper, unfortunately all-too-typical of what educational "economists" foist upon the public. Buried in footnote 3 is a statement that the resulting income is not going to be correct for the sort of students that go to college in the first place.
That those who do go to college just might possibly have higher intelligence than those who do not is not even mentioned! [Further bad economics is the statement that "Minimum wage increases in the late 1990s helped increase the wages of the lowest-skilled workers...." But what about those who don't get jobs because the minimum wage is set at more than they can get on the market? [I shouldn't even quibble at this point with setting the discount rate at five percent, though the true time-discount rate is more on the order of two percent.] Lisa Barrow is an economist at the Federal Reserve Bank of Chicago and Cecilia Rouse is a professor at Princeton.
Summary
Since the mid-1990s college tuition costs have risen quickly while the rate of increase in the value of education has slowed considerably. Cecilia Rouse and Lisa Barrow explore the reasons and ask if college remains a good investment. ------------ In the 1980s the value of a college education grew significantly. According to Census data, in 1979 those with a bachelor's degree or higher earned roughly 45 percent more per hour than workers with only a high school diploma. By 1989 wages for college graduates were more than 70 percent higher than those of high school graduates.1 This dramatic change revived arguments over the cause and effect relationship between education and higher income. In other words, was education driving income levels or was the education trend a byproduct of rising income levels? This debate spawned a very large literature tying increasing income inequality to a decrease in demand for workers without marketable skills. A key reason for the increasing value of a college education was the increasing cost of not having one: Real earnings of workers without some college education fell during the 1980s, as earnings of the more highly educated increased. Politicians and policymakers tried to enact policies to improve educational attainment, for as President Clinton stated: "Today, more than ever before in our history, education is the fault line between those who will prosper in the new economy and those who will not."2 1 All levels of education have become more valuable since the late 1970s. The return on each year of schooling was 6.6 percent (in terms of hourly wages) in 1979, compared with 9.8 percent in 1989 and 10.9 percent in 2000. We focus on college education here due to space limitations. 2 "President Clinton's Call to Action for American Education in the 21st Century" (February 1997) available at www.ed.gov/updates/PresEDPlan/part9.html, accessed on April 4, 2005. But the labor market changed in the mid-1990s. The hourly wage gap between those with college education and those without, which had grown by 25 percentage points in the 1980s, grew by only 10 percentage points in the 1990s. At the same time, college tuition rates increased extremely rapidly. The wage-gap slowdown has led some to wonder: Has college ceased being the better deal over the past few years? Do rising tuition levels mean that the value of a college education has peaked? And even, is attending college still worth the costs? Our answer to the final question is yes. College is definitely still worth the investment. In fact, there are no signs that the value of a college education has peaked or is on a downward trend. Also, the rapid annual percentage rise in the cost of tuition has had little effect on the value of a college education, largely because tuition is a relatively small part of the true total economic cost of attending college.
Most of the true economic cost of college is the wages students forego while they attend--and those have not risen by very much at all.
The Changing Value of Education
To make sense of trends in the economic value of education, one must first understand what economists see as the "return to education." The return to education is the capitalized present value of the extra income an individual would earn with additional schooling, after taking into account all of the costs of obtaining the additional schooling.3 This return to education may change because of a shift in the income for individuals who obtain more schooling or a shift in the income of those who do not. Also, a change in the economic costs of education can affect the return to education. 3 Ideally, one would observe a worker's income were she to obtain the additional schooling, and then compare this to her income were she to obtain no further schooling. Because an individual either obtains more schooling or does not, this ideal is impossible to measure. Therefore, economists typically compute the return to schooling by comparing the average income of workers who have obtained the additional schooling to those who did not. The main conceptual issue with this observed return to schooling is a concern that workers who obtained the additional schooling may also differ from those who did not along unobserved dimensions (such as being more motivated or hard working). Further discussion of these issues is beyond the scope of this paper; however, we refer the interested reader to Card (1999). Figure 1 shows the average hourly real wages (relative to hourly wages in 1980) for four sets of workers between 1980 and 2004. The four categories include: those who did not complete high school; those who only earned a high school diploma; those who have some college education but did not earn a bachelor's degree; and those who earned at least a bachelor's degree. Through the mid-1990s average hourly wages increase fairly steadily among those with at least a bachelor's degree, while the real wages of high school dropouts and of those with only a high school diploma decline. These trends account for the large increases in the return to schooling through the mid-1990s. Since the mid-1990s the average wages of college graduates have skyrocketed, increasing by 18 percent by 2004. However, the wages of high school dropouts have also risen, climbing by 10 percent in the second half of the 1990s from their lowest levels in 1994. Because of this turnaround in the wages of high school dropouts, the college wage premium has risen at a much slower rate of increase than before. And rapidly rising tuition costs must be set against this slower rate of increase. [Figure 1 Hourly Wages by Education Group Relative to 1980 Hourly Wages Source: Authors' calculations from the 1980-2004 (even years only) Current Population Survey Outgoing Rotation Group files available from Unicon. We limit the sample to individuals between ages of 25 and 65 years, and drop observations with wages < 1/2 of the minimum wage or above the 99th percentile of the distribution.]
Why Is the Premium No Longer Rising as Rapidly?
Many economists in the 1990s thought the major source of increasing wage inequality was "skill-biased technological change" (see Bound and Johnson [1992] and Katz and Murphy [1992]). Changes in technology increased the productivity of high-skilled workers relative to low-skilled workers, raising the relative demand for the former.
Therefore, relative wages for high-skilled workers rose, while those for the less-skilled declined. An end to this skill bias in technological change could account for the leveling off of the return to education. While possible, we do not believe this is a likely explanation. The relative wages of college graduates have risen at the same time as the supply of high-skilled workers has increased, due to higher enrollment at colleges and greater immigration of high-skilled workers. Between 1996 and 2000 college enrollment rose by nearly 7 percent (National Center for Education Statistics, 2003). Since 1999, 36 percent of immigrants entering the U.S. had at least a bachelor's degree compared with 24 percent of immigrants arriving in the 1980s (Current Population Survey, 2003). The share of the population aged 25 to 65 years old with at least a bachelor's degree rose from 26 percent in 1996 to 30 percent in 2004.4 Despite the growth in the relative supply of college graduates, the wages of college graduates have continued to rise dramatically, which indicates an increasing--not a decreasing--demand for their skills. Moreover, average wages of workers with lower levels of education have also increased since 1995; it is this turnaround in the trend which accounts for the slowing growth in the return to schooling. 4 Authors' calculations based on March CPS data. Autor, Katz, and Kearney (2004) also find that the relative supply of college-equivalent labor continued to increase throughout the late 1990s and early 2000s. Thus the relevant question is: Why have the wages of these lower-skilled workers increased in the past decade? Minimum wage increases in the late 1990s helped increase the wages of the lowest-skilled workers, but they are unlikely to account fully for the turnaround. First, the last increase in the federal minimum wage came in late 1997, two years after average wages of the lowest-skilled workers began to increase. It cannot account for subsequent increases in the wages of low-skilled workers. The states that have raised their minimum wages since 1997 make up only about one-third of U.S. payroll employment: It is unlikely that state minimum wages can fully account for changes in average wages across the entire country.5 Moreover, there is an anomaly in the time-series relationship between minimum wages and inequality: In the data, the level of the minimum wage is correlated with inequality at both the bottom (where it should be) and the top (where it shouldn't be, if a low minimum wage is a cause and not a consequence of high inequality) of the wage distribution.6 The booming economy of the late 1990s is the most likely explanation for the turnaround, as it raised the average wages of all workers, including those with the lowest skills.7 5 States that raised their minimum wages include Alaska, California, Connecticut, Delaware, Hawaii, Illinois, Massachusetts, Maine, New York, Oregon, Rhode Island, Vermont, and Washington. The District of Columbia also raised its minimum wage. 6 Lee (1999) finds that in the 1980s the fall in the real value of the minimum wage can account for increasing inequality at the bottom of the wage distribution, suggesting that minimum wage increases of the mid-1990s also propped up wages at the bottom of the wage distribution, although Autor, Katz, and Kearney (2004) raise some caution about this interpretation.
Namely, they highlight that much of the decline in the real value of the minimum wage during the 1980s occurred during an economic downturn, whereas the minimum wage increases in the 1990s were legislated during economic expansions. 7 Studies of labor market cyclicality, e.g., Hoynes (2000) and Hines, Hoynes, and Krueger (2001), show that earnings and (especially) employment are procyclical and that less educated individuals experience greater cyclical variation than more educated individuals.
Why College Education Is Still Worth It
How good an investment finishing college is depends on both earnings and costs--the earnings of college graduates relative to high school graduates and the costs of attending college (both tuition and foregone earnings). Tuition and fees for a four-year college for the 2003-2004 academic year averaged $7,091; the average net price--tuition and fees net of grants--was $5,558 (both amounts in 2003 dollars).8 If we assume that tuition and fees continue to rise as they did between the 1999-2000 and 2003-2004 school years, and conservatively look at sticker rather than net prices, the average full-time student entering a program in the fall of 2003 who completes a bachelor's degree in four years will pay $30,325 in tuition and fees. If we assume an opportunity cost equal to the average annual earnings of a high school graduate (from the March 2004 Current Population Survey) and a 5 percent discount rate for time preference, the total cost of attending college rises to $107,277. In other words, college is worthwhile for an average student if getting a bachelor's degree boosts the present value of her lifetime earnings by at least $107,277. 8 U.S. Department of Education, 2005, based on data from the National Postsecondary Student Aid Study. What is the boost to the present value of wages? At a 5 percent annual discount rate, it is $402,959. The net present value of a four-year degree to an average student entering college in the fall of 2003 is roughly $295,682--the difference between $402,959 in earnings and $107,277 in total costs.9 A student entering college today can expect to recoup her investment within 10 years of graduation. 9 Assuming that the college graduate-high school graduate earnings gap is constant over the lifecycle and equals the difference in average annual earnings for these two education groups as measured in the 2004 March Current Population Survey, a college graduate earns $27,800 more in inflation-adjusted dollars per year. Alternatively, if we assume annual earnings will follow average earnings by age, the net present value to a first year student in the fall of 2003 is roughly $246,923 ($354,200 in earnings minus $107,277 in tuition, fees, and lost wages). Note that by using annual earnings we take into account the higher rates of unemployment among high school graduates. This may not be correct, to the extent that lower unemployment is not the result of completing the bachelor's degree; rather, it may be the result of having the personal factors that made it likely that an individual would complete the degree in the first place. It still pays to go to college--very much so, at least as much as ever before.10 10 Note, however, that future changes in the U.S. labor market might affect relative compensation. If many more people who otherwise would not have attended college decide to do so, a dramatically increased supply of college graduates would compete in the labor market, and hence, the net benefits of college might be significantly smaller than we calculate.
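[To make the authors' arithmetic concrete, here is a back-of-the-envelope check in Python. The paper does not spell out its exact discounting conventions, so the 4.5 percent tuition growth rate and the 40-year working life below are my own illustrative assumptions; the $7,091 starting tuition, the $27,800 annual earnings gap, and the 5 percent discount rate are the authors' figures. The output lands within a few percent of their $30,325 and $402,959, which is as close as a sketch like this can be expected to get.

# A rough check of Barrow and Rouse's net-present-value arithmetic.
# Assumptions not given in the paper: tuition growing 4.5 percent a year
# (chosen to roughly reproduce their four-year total) and a 40-year
# working life after graduation.
DISCOUNT_RATE = 0.05      # the paper's 5 percent rate of time preference
TUITION_2003 = 7_091      # average sticker tuition and fees, 2003-04
TUITION_GROWTH = 0.045    # assumed annual tuition growth
EARNINGS_GAP = 27_800     # paper's annual college/high-school earnings gap
WORK_YEARS = 40           # assumed years of work after graduation

def present_value(cashflows, rate):
    """Discount a sequence of annual cash flows; year 0 comes first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Four years of tuition, growing each year from the 2003-04 base.
tuition = [TUITION_2003 * (1 + TUITION_GROWTH) ** t for t in range(4)]
print(f"Four years of tuition and fees: ${sum(tuition):,.0f}")

# The earnings premium kicks in only after the four college years.
premium = [0.0] * 4 + [float(EARNINGS_GAP)] * WORK_YEARS
print(f"PV of lifetime earnings premium: ${present_value(premium, DISCOUNT_RATE):,.0f}")

Note, too, that dropping the rate to the two percent I suggest above raises the present value of the premium by roughly three-quarters, so quibbling over the discount rate only strengthens the authors' conclusion.]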
Letters commenting on this piece or others may be submitted at http://www.bepress.com/cgi/submit.cgi?context=ev
References and Further Reading
Autor, David H., Lawrence F. Katz, and Melissa S. Kearney. 2004. "Trends in U.S. Wage Inequality: Re-Assessing the Revisionists," Unpublished manuscript.
Bound, John and George Johnson. 1992. "Changes in the Structure of Wages in the 1980's: An Evaluation of Alternative Explanations," The American Economic Review, 82(3), pp. 371-392.
Card, David. 1999. "The Causal Effect of Education on Earnings" in Handbook of Labor Economics, Vol. 3A (Orley C. Ashenfelter and David Card, editors). (Amsterdam: Elsevier), pp. 1801-1863.
Current Population Survey. 2003. "Table 2.5: Educational Attainment of the Foreign-Born Population 25 Years and Over by Sex and Year of Entry: 2003," The Foreign-Born Population in the United States: March 2003. (P20-539). Retrieved from http://www.census.gov/population/socdemo/foreign/ppl174/tab02-05.xls on April 1, 2005.
Hines Jr., James R., Hilary W. Hoynes, and Alan B. Krueger. 2001. "Another Look at Whether a Rising Tide Lifts All Boats," in The Roaring Nineties: Can Full Employment Be Sustained? (Alan B. Krueger and Robert M. Solow, editors) (New York: The Russell Sage Foundation), pp. 493-537.
Hoynes, Hilary W. 2000. "The Employment and Earnings of Less Skilled Workers Over the Business Cycle," in Finding Jobs: Work and Welfare Reform. (Rebecca Blank and David Card, editors) (New York: The Russell Sage Foundation), pp. 23-71.
Katz, Lawrence F. and Kevin M. Murphy. 1992. "Changes in Relative Wages, 1963-1987: Supply and Demand Factors," The Quarterly Journal of Economics, 107(1) (February), pp. 35-78.
Lee, David S. 1999. "Wage Inequality in the United States During the 1980s: Rising Dispersion or Falling Minimum Wage?" The Quarterly Journal of Economics, 114(3) (August), pp. 977-1023.
Pierce, Brooks. 2001. "Compensation Inequality," The Quarterly Journal of Economics, 116(4) (November), pp. 1493-1525.
Snyder, T. D., A. G. Tan, and C. M. Hoffman. 2004. "Table 174. Total fall enrollment in degree-granting institutions, by attendance status, sex of student, and control of institution: 1947 to 2001," Digest of Education Statistics, 2003. (NCES-2005-025). U.S. Department of Education, National Center for Education Statistics. Washington, D.C.: Government Printing Office. Retrieved from http://www.nces.ed.gov/programs/digest/d03/tables/xls/tab174.xls on April 1, 2005.
U.S. Department of Education, National Center for Education Statistics. 2005. National Postsecondary Student Aid Study: Undergraduate Online Data Analysis System.
Acknowledgements: We thank Gadi Barlevy, Jonas Fisher, and Alan Krueger for useful conversations, and Kyung-Hong Park for expert research assistance. All errors in fact or interpretation are ours. The opinions in this paper do not reflect those of the Federal Reserve Bank of Chicago or the Federal Reserve System.
From checker at panix.com Mon Dec 12 01:36:38 2005 From: checker at panix.com (Premise Checker) Date: Sun, 11 Dec 2005 20:36:38 -0500 (EST) Subject: [Paleopsych] NYT: Hold the Limo: The Prom's Canceled as Decadent Message-ID: Hold the Limo: The Prom's Canceled as Decadent http://www.nytimes.com/2005/12/10/nyregion/10prom.html [More of the reaction to the excesses of the age. I most definitely did not go to any prom at Fountain Valley School, an all-male (at the time) boarding school outside Colorado Springs.
I don't even remember if there was a prom, though Sarah recalls Dick going to one five years later. I thought girls were as silly as my mother and sister and did not want to have anything to do with them. (My opinions about my mother and sister have certainly changed since then! And I think they have, too.) [I took Nancy Johnson to a dance in the eighth grade and took another at Fountain Valley School in the ninth grade, but Steve Reierstad took her away from me and left me alone for the rest of the evening. (He was later expelled from the school, I think for smoking, but he must really have wanted to get kicked out. He was a bully. When a roommate, Vance Thompson, left (voluntarily), I had a room all to myself. For reasons I don't recall, if I were to take a roommate (Ned Fetcher), Steve's rooming arrangements could be changed so he could have a roommate more to his liking. He promised to stop bullying if I agreed. I did, but the bullying resumed shortly thereafter. A lesson learned!) [Ned asked me whether I believed in God. Yes. Why? I couldn't answer and so dropped the belief until I could come up with some good reasons. He started my lifelong fascination with religion, but 51 years later, I still haven't come up with good reasons to believe in a Stone Age creator. I met up with him at my 25th reunion but haven't been able to reestablish contact with him by e-mail. Maybe I gave up too many other beliefs, some of which he cherishes. [Anyhow, my parents housed some visiting foreign students for a couple of nights, in the Summer of 1961. I had to take the girl to a dance. My friend, Roy Dent, was also there. My activity for the evening was to drink a Coke and talk to Roy about the fourth dimension. No dancing at all, and the girl left to dance with others. She had a good time, too. [In college, I thought I really ought to be dating girls--UVa's undergraduate Arts and Sciences was still all boys--and in my final year signed up with Operation Match, one of the first computerized dating services. I found a girl closely matched: Episcopalian background, physicians as fathers, same-sized city,.... It turned out we had NOTHING in common. [I stayed on at UVa, switching from math to economics in graduate school. I twice dated Sally Ann Moyer (Mary Washington College), a staunch Republican and Nixonite. Now this was just too much! I could see Nixon over what's-his-name but being actually enthusiastic about Nixon? I was willing to go for a third date, but she wasn't interested. Then five dates with Gladys Swanson, a sweet girl at the same college. She felt warmly about me, but our IQ levels were just too far apart. I was falling in love with her too, but I got the better of myself before I wound up in a marriage that would have been okay, indeed pleasant, but not much more than that. [Now to my 13th date. (I missed one somewhere.) Peter Graham, a friend of mine at UVa, went to Mary Washington College and asked, "Who would like to date a mathematical genius?" One Sarah Banks answered she would, and so Peter told me I had a date, not just for an evening, but for an entire "big" weekend, from Friday at 6:00 till Sunday about the same time. My date would arrive by bus from Fredericksburg to the Rotunda. I had not particularly wanted a date, but I went ahead anyhow. I fired off Miss Banks a letter, telling her about myself and inviting her to do the same.
Here's what appeared in my mail, on 1967 April 14, two days before I was actually to meet her:
Dear Frank,
Tho' an exhaustive description is requested, physical I presume, let me say only that I probably won't come as a shock, being in appearance just another Mary Washington lady, prudish of background and foul of mind. Please do find enclosed, however, a piece of said mind: cluttered, eclectic, and probably repressed, the obvious result of spending part of my childhood in English fog, listening to the BBC. Despite my gothic tendencies, I'm really harmless. The impression I give has been compared to a white rabbit in a daisy field, an owl in a dusty attic, and a mouse in a haunted haystack. This is not to imply that I am either cute or sweet, & certainly not shy. By life style, I am a hopeless, scatter-footed dabbler, constantly acquiring new weaknesses. I will stick my nose into anything, particularly if I know little or nothing about it (like economics, likewise Colorado.) Let me guess at your artistic.... tendencies? While sleuthing on my own, you were described to me as "a one-man happening. *all* the time." Sounds like a man with a taste for the *High Camp*. And if you believe, with me & Marshall MacLuhan, that Today, *Art* is anything you can get away with, then Bless Pete! we may yet do well by each other. My ideal weekend is full of noise, elbow warfare, & good conversation. Hope you weren't expecting a letter that comes to the point. Actually, I wouldn't dream of giving you fair warning. One helpful hint: *Never* take me seriously.
Exit, Sarah
[What is this wondrous creature! I read and reread the letter and it was love before first sight. I told her I'd meet her carrying a copy of Mahler's Second Symphony. I just assumed she'd know who Mahler was, though I doubt any of my previous dates did! She was carrying a copy of a book on pop art. Her first reaction to me and my black-framed glasses was "what has Peter gotten me into?". We walked across the Lawn and down to my apartment on Brandon Avenue, without thinking that this was in complete violation of UVa rules, for which I could be expelled. On the way in, I showed her the bumper sticker of my Austin Healey, which read "Bumper Sticker." She exclaimed that she didn't need to explain what pop art was to me. [We talked and talked and talked. I took her to an Escher exhibit at the Alderman Library and thence to the stacks, where I delightedly showed her a copy of a religious book that featured drawings of the U.S. Presidents and which of the Lost Tribes of Israel they came from. We went up and down a winding staircase, going down a number of half-stories, back up a smaller number, and arriving at the same place! I did eventually figure out what had happened but have not, to this day, explained how I can do much the same thing in Old Cabell Hall. She spent the night, with some other Mary Washington girls, at the house Peter and some other boys had rented. UVa had a policy that girls could not enter boys' premises and certainly not spend the night there. There were a number of little old ladies who rented out rooms for $5 for the night to just such girls. Why Peter's house qualified, I don't know. Maybe it didn't, but Mary Washington College apparently did not check too closely about this matter. [At the end of a "big" weekend of talking, I simply informed Miss Banks that I'd be seeing her the following weekend, which I did.
The following weekend, too, and the weekend after that, the weekend after that, until the end of the semester. (It took me until the second date to realize I wanted to spend the rest of my life with this creature.) Once or twice, she said she had gotten too far behind in her work; I could still come visit her, but I'd have to wait till late in the evening before she was ready. I wound up spending the night in a room in a little old lady's house up in Fredericksburg myself, there being no girls renting houses where the boys could stay. [I had met her grandmother at her farm near Warrenton. Though intending to leave her there, happily, my Healey broke down (it really did, though her grandmother was highly suspicious) and I had to spend the night there. We discreetly slept in separate rooms. I met her parents in Alexandria also. Miss Banks went home but I managed to take the 2 1/2-hour drive every weekend to see her during the Summer. [I drove her out to Colorado Springs to show her off to my parents. Dad pronounced her to be one of the finest women he had ever met of any age. The next Semester included weekly dates, until at last we got married between semesters on 1968 February 2. It was at the grandmother's farm, which had been in the family since about 1830 and where a great-grandmother and great-grandfather had been married before. The ceremonies were conducted by a Unitarian minister, who had allowed me to expunge all references to the supernatural. I had verses read, though, from a family Bible, an old one before it got translated:
CHAPTER 3
1 To every thing there is a season, and a time to every purpose under the heaven:
2 A time to be born, and a time to die; a time to plant, and a time to pluck up that which is planted;
3 A time to kill, and a time to heal; a time to break down, and a time to build up;
4 A time to weep, and a time to laugh; a time to mourn, and a time to dance;
5 A time to cast away stones, and a time to gather stones together; a time to embrace, and a time to refrain from embracing;
6 A time to get, and a time to lose; a time to keep, and a time to cast away;
7 A time to rend, and a time to sew; a time to keep silence, and a time to speak;
8 A time to love, and a time to hate; a time of war, and a time of peace.
----------- [Dad offered to pay for a honeymoon to one of those islands that has no bookstores. I said I'd rather just have the friends who came to our wedding come visit us in our new apartment. We are still suspended in that magical moment between the wedding reception and the honeymoon, which has yet to take place. [And we haven't stopped talking.] ______________________________________________________________ By PAUL VITELLO Prom night, that all-American rite of passage that fell out of favor during the anti-establishment 1960's and then made a comeback in the conservative tilt of the Reagan era, probably always inhabited terrain destined to become a battleground in the so-called culture wars. It is about social manners, class, gender roles; and to a more or less open degree, it is about sex. That may explain why recent decisions by two Roman Catholic high school principals on Long Island to cancel proms for the class of 2006 - both citing exasperation with what the educators described as a decadent "prom culture" - seem to have struck a chord well beyond the worlds of Catholics, high schools or Long Island.
Newspaper editorial writers, social scientists and parents across the country linked through Web sites have responded in the past two months with what seems like a giant exhalation of relief, as if someone had finally said what they had long feared to say. "Strike up the orchestra for Brother Kenneth Hoagland, principal of Kellenberg Memorial High School in Uniondale, N.Y.," read an Oct. 23 editorial in The Chicago Tribune. "Not because he has canceled the Long Island school's spring prom but because in doing so he provoked what should be local discussions nationwide about prom night activities and about parents and educators who don't do their jobs." Underlying the concern seems to be a widespread uncertainty about the coming-of-age ritual embodied in the modern prom - the $500 to $1,000 spent on dress, limo and parties before and after the actual event. It has become not uncommon for parents to sign leases for houses, where couples room together, for post-prom weekend events or for parents to authorize boat excursions in which under-age drinking is not just winked at but expected. Trumping it all, of course, is the uncertainty about sex. "Common parlance tells us that this is a time to lose one's virginity," Brother Hoagland and other administrators of Kellenberg High wrote in a letter to parents in March, warning them that the prom might be canceled unless parents stopped financing what, in effect, the school considered bacchanals. "It is a time of heightened sexuality in a culture of anything goes," the letter added. "The prom has become a sexual focal point. This is supposed to be a dance, not a honeymoon." Six months after the initial letter, administrators canceled the prom by fiat, citing not just sex and alcohol use, but also what they described as materialism run amok. A month later, in November, administrators at another Roman Catholic school on Long Island, Chaminade High School in Mineola, followed suit, explaining that the prom was being canceled because its decadence and "showcase of affluence" were "opposed to our value system." Both principals reported receiving letters of support and requests for interviews from all over the world. British, Australian, Japanese and Ukrainian newspapers, for instance, ran prominent features about the principals' bold stand against American decadence. Whether those "local discussions nationwide" urged by The Chicago Tribune lead to a larger consensus about proms, or remain small countercultural acts in what has become a $2.7 billion prom industry, some observers viewed them as opening an interesting new front in the continuing battle over American values. "I think there is a general desire to bring religious values into public life, and these actions against the prom seem like signs of that," said John Farina, a researcher at Georgetown University who studies the intersection of religion and culture. "To some extent, it reflects the influence of John Paul II - his willingness to confront and resist the dominant culture. As a teacher, I wish more educators had that kind of backbone." An opposing view was expressed by George M. Kapalka, a professor of psychological counseling at Monmouth University in West Long Branch, N.J. Resisting unacceptable behavior and banning it, he said, represent two different spirits in education. "This is just another example of the 'just say no' policy, which has failed miserably wherever it's been applied," Professor Kapalka said. 
"It would be better to start the conversation with kids about values earlier than to wait until senior year and ban the prom." Among disappointed students, there was a sense that the timing of the ban was arbitrary. "It was like a slap in the face," said Shane Abrams, a 17-year-old Chaminade senior. "A lot of kids feel like: 'Why us? Why this year?' Why didn't they ban the prom last year, or the year before?" Countering the charges of prom extravagance, a number of students pointed out that the school was spending about $20 million on an athletic center, an expense they said was extravagant, also. Chaminade's headmaster, the Rev. James Williams, said the decision to cancel the senior prom this year was based on an accumulation of evidence that "the modern culture of the prom has become toxic and beyond remediation." He added: "It's part of a larger issue. Why are sweet 16 parties becoming more like weddings? Why are otherwise moderate kids suddenly pressured to go wild on one night at the end of four years of Chaminade education? "We are saying we admit that this takes place, and we won't be part of it anymore." William J. Doherty, a professor of family studies at the University of Minnesota and author of "Take Back Your Kids," a study about overscheduled children, said in a phone interview that prom excesses like those cited by Brother Hoagland and Father Williams were typical of what he calls "consumer-driven parenting." "We have parents heavily involved in orchestrating their children's experience because of this notion that experiences can be purchased," Dr. Doherty said. In the Minneapolis-St. Paul area, he said, he knew of one mother who did not want her daughter to go on a senior class trip to Cancun, but would not forbid it. "Her comment was 'how sad' it would be if her daughter was the only one at her lunch table to miss that experience. "It's not that a whole generation of parents is crazy," Dr. Doherty said. "It's that there is a subset of parents who are crazy - and the rest don't want their kids to miss out." Prom night may never replace abortion on the front line of the culture wars, but in small increments, the issue of prom night does seem to be forcing itself onto the agenda generally described as family values. Web sites ranging from those of the conservative Concerned Women for America to the nonpartisan Berkeley Parents Network, to those of various Islamic and Orthodox Jewish organizations, have in recent years posted advice to parents about proms, most of it highly cautionary. In 2002, after several students who attended a junior prom were hospitalized for alcohol poisoning, the administrators at Rye High School in Westchester County began a dialogue with students and parents about how to proceed. One option was to cancel the prom. "But we came up with a compromise," said Jim Rooney, the principal. Since 2003, Rye students attending the prom have to report to school that evening with at least one parent. The parent must sign a consent form and leave a phone number where he or she can be reached. All students then travel on a coach bus, provided by the school, to and from the prom - no limos, no sneaking drinks. "The before-prom gathering has become a nice tradition," Mr. Rooney said. "The parents and kids gather in our courtyard for pictures, and I don't think the kids would give that up for anything, at this point." On the other hand, he admitted, the school has no control over what happens after the prom bus drops seniors off back at the school. 
After-prom parties happen. It is almost assumed that students will seek memorable experiences according to their own standards. "A lot of them go off to these Chelsea bars," Mr. Rooney said. "I understand that most of those places are quite porous." From shovland at mindspring.com Sat Dec 10 16:59:47 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Sat, 10 Dec 2005 08:59:47 -0800 (GMT-08:00) Subject: [Paleopsych] Stealing Our Future Message-ID: <29010333.1134233987802.JavaMail.root@mswamui-andean.atl.sa.earthlink.net> America has been taken over by a gang of white-collar criminals. They are looting the system at every opportunity. They are cutting taxes and giving the money to their friends. They are dismembering vital public services. They are awarding no-bid contracts to buy goods and services at inflated prices. They are turning a blind eye to fraud. We may not be able to stop them at the moment, but we are taking notes, and there will be a day of judgement. From dsmith06 at maine.rr.com Mon Dec 12 15:42:15 2005 From: dsmith06 at maine.rr.com (David Smith) Date: Mon, 12 Dec 2005 10:42:15 -0500 Subject: [Paleopsych] 2006 William D. Hamilton Memorial Lecture Message-ID: <439D9A57.5000608@maine.rr.com> The New England Institute for Cognitive Science and Evolutionary Psychology sponsors an annual William D. Hamilton Memorial Lecture on some aspect of the interface between evolutionary biology and human nature. Since its inception in 2002, Hamilton lectures have been delivered by Robert Trivers (2002), Steven Pinker (2003), Richard Alexander (2004) and Daniel Dennett (2005), and have attracted an audience of scientists, academics and the general public from all over New England. NEI's 2006 William D. Hamilton Memorial Lecture will be delivered by Dr. David Haig, who will be speaking on 'The Divided Self: Brains, Brawn and the Superego'. The lecture will be held on April 28, 2006 at 7PM, at the Portland Campus of the University of New England in Portland, Maine. Further details of this and other NEI events open to the public will be posted, in due course, on our website at http://www.une.edu/nei David Livingstone Smith Director, NEI *The Divided Self: Brains, Brawn and the Superego* Biologists have traditionally viewed animals as machines and their brains as fitness-maximizing computers, and have emphasized the competitive struggle /between/ organisms. By contrast, psychologists and novelists have often portrayed minds as subject to internal division, and have often highlighted the conflicts that occur /within/ individuals. Now biologists have begun to recognize conflicts between genes within a single individual, an organism at odds with itself. I will illustrate this with the example of conflicts between maternally and paternally imprinted genes: genes that are expressed only when inherited from one's mother and those expressed only when inherited from one's father. *David Haig, Ph.D.* is Professor of Biology in Harvard University's Department of Organismic and Evolutionary Biology. He is an evolutionary geneticist with a particular interest in genomic imprinting and relations between parents and offspring. He was born in Canberra, Australia, and did graduate research on the evolution of plant cycles at Macquarie University in Sydney. After completing his PhD, Dr. Haig went to Oxford where he further developed his ideas on genomic imprinting and developed an interest in the conflicts between mother and fetus during human pregnancy.
He then moved to Harvard, where he was nominated for the Harvard Society of Fellows, and where he continues his interest in conflicts within the genome. He is the author of /Genomic Imprinting and Kinship/ (Rutgers, 2002) as well as numerous scientific papers, many of which are available on his web page at http://www.oeb.harvard.edu/faculty/haig/HaigHome.htm From shovland at mindspring.com Wed Dec 14 04:55:54 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Tue, 13 Dec 2005 20:55:54 -0800 (GMT-08:00) Subject: [Paleopsych] Blood on the Snow Message-ID: <1018777.1134536154910.JavaMail.root@mswamui-chipeau.atl.sa.earthlink.net> This should be a season of good cheer, but it cannot be. We have little to celebrate, and many to mourn. Most of the world fears and hates us, and is justified in doing so. Everywhere we go, innocents suffer and die for our greed. Do not raise the cup of wassail, for it will be as bitter as hemlock. Laughter will fade to reflection, and hubris will go before shame. The mirror will be our enemy. From shovland at mindspring.com Fri Dec 16 04:36:41 2005 From: shovland at mindspring.com (Steve Hovland) Date: Thu, 15 Dec 2005 20:36:41 -0800 Subject: [Paleopsych] Harold Pinter's Nobel Peace Prize Speech Message-ID: Art, truth and politics In his video-taped Nobel acceptance speech, Harold Pinter excoriated a 'brutal, scornful and ruthless' United States. This is the full text of his address Harold Pinter Thursday December 08 2005 The Guardian In 1958 I wrote the following: 'There are no hard distinctions between what is real and what is unreal, nor between what is true and what is false. A thing is not necessarily either true or false; it can be both true and false.' I believe that these assertions still make sense and do still apply to the exploration of reality through art. So as a writer I stand by them but as a citizen I cannot. As a citizen I must ask: What is true? What is false? Truth in drama is forever elusive. You never quite find it but the search for it is compulsive. The search is clearly what drives the endeavour. The search is your task. More often than not you stumble upon the truth in the dark, colliding with it or just glimpsing an image or a shape which seems to correspond to the truth, often without realising that you have done so. But the real truth is that there never is any such thing as one truth to be found in dramatic art. There are many. These truths challenge each other, recoil from each other, reflect each other, ignore each other, tease each other, are blind to each other. Sometimes you feel you have the truth of a moment in your hand, then it slips through your fingers and is lost. I have often been asked how my plays come about. I cannot say. Nor can I ever sum up my plays, except to say that this is what happened. That is what they said. That is what they did. Most of the plays are engendered by a line, a word or an image. The given word is often shortly followed by the image. I shall give two examples of two lines which came right out of the blue into my head, followed by an image, followed by me. The plays are The Homecoming and Old Times. The first line of The Homecoming is 'What have you done with the scissors?' The first line of Old Times is 'Dark.' In each case I had no further information.
In the first case someone was obviously looking for a pair of scissors and was demanding their whereabouts of someone else he suspected had probably stolen them. But I somehow knew that the person addressed didn't give a damn about the scissors or about the questioner either, for that matter. 'Dark' I took to be a description of someone's hair, the hair of a woman, and was the answer to a question. In each case I found myself compelled to pursue the matter. This happened visually, a very slow fade, through shadow into light. I always start a play by calling the characters A, B and C. In the play that became The Homecoming I saw a man enter a stark room and ask his question of a younger man sitting on an ugly sofa reading a racing paper. I somehow suspected that A was a father and that B was his son, but I had no proof. This was however confirmed a short time later when B (later to become Lenny) says to A (later to become Max), 'Dad, do you mind if I change the subject? I want to ask you something. The dinner we had before, what was the name of it? What do you call it? Why don't you buy a dog? You're a dog cook. Honest. You think you're cooking for a lot of dogs.' So since B calls A 'Dad' it seemed to me reasonable to assume that they were father and son. A was also clearly the cook and his cooking did not seem to be held in high regard. Did this mean that there was no mother? I didn't know. But, as I told myself at the time, our beginnings never know our ends. 'Dark.' A large window. Evening sky. A man, A (later to become Deeley), and a woman, B (later to become Kate), sitting with drinks. 'Fat or thin?' the man asks. Who are they talking about? But I then see, standing at the window, a woman, C (later to become Anna), in another condition of light, her back to them, her hair dark. It's a strange moment, the moment of creating characters who up to that moment have had no existence. What follows is fitful, uncertain, even hallucinatory, although sometimes it can be an unstoppable avalanche. The author's position is an odd one. In a sense he is not welcomed by the characters. The characters resist him, they are not easy to live with, they are impossible to define. You certainly can't dictate to them. To a certain extent you play a never-ending game with them, cat and mouse, blind man's buff, hide and seek. But finally you find that you have people of flesh and blood on your hands, people with will and an individual sensibility of their own, made out of component parts you are unable to change, manipulate or distort. So language in art remains a highly ambiguous transaction, a quicksand, a trampoline, a frozen pool which might give way under you, the author, at any time. But as I have said, the search for the truth can never stop. It cannot be adjourned, it cannot be postponed. It has to be faced, right there, on the spot. Political theatre presents an entirely different set of problems. Sermonising has to be avoided at all cost. Objectivity is essential. The characters must be allowed to breathe their own air. The author cannot confine and constrict them to satisfy his own taste or disposition or prejudice. He must be prepared to approach them from a variety of angles, from a full and uninhibited range of perspectives, take them by surprise, perhaps, occasionally, but nevertheless give them the freedom to go which way they will. This does not always work. And political satire, of course, adheres to none of these precepts, in fact does precisely the opposite, which is its proper function. 
In my play The Birthday Party I think I allow a whole range of options to operate in a dense forest of possibility before finally focussing on an act of subjugation. Mountain Language pretends to no such range of operation. It remains brutal, short and ugly. But the soldiers in the play do get some fun out of it. One sometimes forgets that torturers become easily bored. They need a bit of a laugh to keep their spirits up. This has been confirmed of course by the events at Abu Ghraib in Baghdad. Mountain Language lasts only 20 minutes, but it could go on for hour after hour, on and on and on, the same pattern repeated over and over again, on and on, hour after hour. Ashes to Ashes, on the other hand, seems to me to be taking place under water. A drowning woman, her hand reaching up through the waves, dropping down out of sight, reaching for others, but finding nobody there, either above or under the water, finding only shadows, reflections, floating; the woman a lost figure in a drowning landscape, a woman unable to escape the doom that seemed to belong only to others. But as they died, she must die too. Political language, as used by politicians, does not venture into any of this territory since the majority of politicians, on the evidence available to us, are interested not in truth but in power and in the maintenance of that power. To maintain that power it is essential that people remain in ignorance, that they live in ignorance of the truth, even the truth of their own lives. What surrounds us therefore is a vast tapestry of lies, upon which we feed. As every single person here knows, the justification for the invasion of Iraq was that Saddam Hussein possessed a highly dangerous body of weapons of mass destruction, some of which could be fired in 45 minutes, bringing about appalling devastation. We were assured that was true. It was not true. We were told that Iraq had a relationship with Al Quaeda and shared responsibility for the atrocity in New York of September 11th 2001. We were assured that this was true. It was not true. We were told that Iraq threatened the security of the world. We were assured it was true. It was not true. The truth is something entirely different. The truth is to do with how the United States understands its role in the world and how it chooses to embody it. But before I come back to the present I would like to look at the recent past, by which I mean United States foreign policy since the end of the Second World War. I believe it is obligatory upon us to subject this period to at least some kind of even limited scrutiny, which is all that time will allow here. Everyone knows what happened in the Soviet Union and throughout Eastern Europe during the post-war period: the systematic brutality, the widespread atrocities, the ruthless suppression of independent thought. All this has been fully documented and verified. But my contention here is that the US crimes in the same period have only been superficially recorded, let alone documented, let alone acknowledged, let alone recognised as crimes at all. I believe this must be addressed and that the truth has considerable bearing on where the world stands now. Although constrained, to a certain extent, by the existence of the Soviet Union, the United States' actions throughout the world made it clear that it had concluded it had carte blanche to do what it liked. Direct invasion of a sovereign state has never in fact been America's favoured method. 
In the main, it has preferred what it has described as 'low intensity conflict'. Low intensity conflict means that thousands of people die but slower than if you dropped a bomb on them in one fell swoop. It means that you infect the heart of the country, that you establish a malignant growth and watch the gangrene bloom. When the populace has been subdued - or beaten to death - the same thing - and your own friends, the military and the great corporations, sit comfortably in power, you go before the camera and say that democracy has prevailed. This was a commonplace in US foreign policy in the years to which I refer. The tragedy of Nicaragua was a highly significant case. I choose to offer it here as a potent example of America's view of its role in the world, both then and now. I was present at a meeting at the US embassy in London in the late 1980s. The United States Congress was about to decide whether to give more money to the Contras in their campaign against the state of Nicaragua. I was a member of a delegation speaking on behalf of Nicaragua but the most important member of this delegation was a Father John Metcalf. The leader of the US body was Raymond Seitz (then number two to the ambassador, later ambassador himself). Father Metcalf said: 'Sir, I am in charge of a parish in the north of Nicaragua. My parishioners built a school, a health centre, a cultural centre. We have lived in peace. A few months ago a Contra force attacked the parish. They destroyed everything: the school, the health centre, the cultural centre. They raped nurses and teachers, slaughtered doctors, in the most brutal manner. They behaved like savages. Please demand that the US government withdraw its support from this shocking terrorist activity.' Raymond Seitz had a very good reputation as a rational, responsible and highly sophisticated man. He was greatly respected in diplomatic circles. He listened, paused and then spoke with some gravity. 'Father,' he said, 'let me tell you something. In war, innocent people always suffer.' There was a frozen silence. We stared at him. He did not flinch. Innocent people, indeed, always suffer. Finally somebody said: 'But in this case "innocent people" were the victims of a gruesome atrocity subsidised by your government, one among many. If Congress allows the Contras more money further atrocities of this kind will take place. Is this not the case? Is your government not therefore guilty of supporting acts of murder and destruction upon the citizens of a sovereign state?' Seitz was imperturbable. 'I don't agree that the facts as presented support your assertions,' he said. As we were leaving the Embassy a US aide told me that he enjoyed my plays. I did not reply. I should remind you that at the time President Reagan made the following statement: 'The Contras are the moral equivalent of our Founding Fathers.' The United States supported the brutal Somoza dictatorship in Nicaragua for over 40 years. The Nicaraguan people, led by the Sandinistas, overthrew this regime in 1979, a breathtaking popular revolution. The Sandinistas weren't perfect. They possessed their fair share of arrogance and their political philosophy contained a number of contradictory elements. But they were intelligent, rational and civilised. They set out to establish a stable, decent, pluralistic society. The death penalty was abolished. Hundreds of thousands of poverty-stricken peasants were brought back from the dead. Over 100,000 families were given title to land. Two thousand schools were built. 
A quite remarkable literacy campaign reduced illiteracy in the country to less than one seventh. Free education was established and a free health service. Infant mortality was reduced by a third. Polio was eradicated. The United States denounced these achievements as Marxist/Leninist subversion. In the view of the US government, a dangerous example was being set. If Nicaragua was allowed to establish basic norms of social and economic justice, if it was allowed to raise the standards of health care and education and achieve social unity and national self respect, neighbouring countries would ask the same questions and do the same things. There was of course at the time fierce resistance to the status quo in El Salvador. I spoke earlier about 'a tapestry of lies' which surrounds us. President Reagan commonly described Nicaragua as a 'totalitarian dungeon'. This was taken generally by the media, and certainly by the British government, as accurate and fair comment. But there was in fact no record of death squads under the Sandinista government. There was no record of torture. There was no record of systematic or official military brutality. No priests were ever murdered in Nicaragua. There were in fact three priests in the government, two Jesuits and a Maryknoll missionary. The totalitarian dungeons were actually next door, in El Salvador and Guatemala. The United States had brought down the democratically elected government of Guatemala in 1954 and it is estimated that over 200,000 people had been victims of successive military dictatorships. Six of the most distinguished Jesuits in the world were viciously murdered at the Central American University in San Salvador in 1989 by a battalion of the Alcatl regiment trained at Fort Benning, Georgia, USA. That extremely brave man Archbishop Romero was assassinated while saying mass. It is estimated that 75,000 people died. Why were they killed? They were killed because they believed a better life was possible and should be achieved. That belief immediately qualified them as communists. They died because they dared to question the status quo, the endless plateau of poverty, disease, degradation and oppression, which had been their birthright. The United States finally brought down the Sandinista government. It took some years and considerable resistance but relentless economic persecution and 30,000 dead finally undermined the spirit of the Nicaraguan people. They were exhausted and poverty stricken once again. The casinos moved back into the country. Free health and free education were over. Big business returned with a vengeance. 'Democracy' had prevailed. But this 'policy' was by no means restricted to Central America. It was conducted throughout the world. It was never-ending. And it is as if it never happened. The United States supported and in many cases engendered every right wing military dictatorship in the world after the end of the Second World War. I refer to Indonesia, Greece, Uruguay, Brazil, Paraguay, Haiti, Turkey, the Philippines, Guatemala, El Salvador, and, of course, Chile. The horror the United States inflicted upon Chile in 1973 can never be purged and can never be forgiven. Hundreds of thousands of deaths took place throughout these countries. Did they take place? And are they in all cases attributable to US foreign policy? The answer is yes they did take place and they are attributable to American foreign policy. But you wouldn't know it. It never happened. Nothing ever happened. 
Even while it was happening it wasn't happening. It didn't matter. It was of no interest. The crimes of the United States have been systematic, constant, vicious, remorseless, but very few people have actually talked about them. You have to hand it to America. It has exercised a quite clinical manipulation of power worldwide while masquerading as a force for universal good. It's a brilliant, even witty, highly successful act of hypnosis. I put to you that the United States is without doubt the greatest show on the road. Brutal, indifferent, scornful and ruthless it may be but it is also very clever. As a salesman it is out on its own and its most saleable commodity is self love. It's a winner. Listen to all American presidents on television say the words, 'the American people', as in the sentence, 'I say to the American people it is time to pray and to defend the rights of the American people and I ask the American people to trust their president in the action he is about to take on behalf of the American people.' It's a scintillating stratagem. Language is actually employed to keep thought at bay. The words 'the American people' provide a truly voluptuous cushion of reassurance. You don't need to think. Just lie back on the cushion. The cushion may be suffocating your intelligence and your critical faculties but it's very comfortable. This does not apply of course to the 40 million people living below the poverty line and the 2 million men and women imprisoned in the vast gulag of prisons, which extends across the US. The United States no longer bothers about low intensity conflict. It no longer sees any point in being reticent or even devious. It puts its cards on the table without fear or favour. It quite simply doesn't give a damn about the United Nations, international law or critical dissent, which it regards as impotent and irrelevant. It also has its own bleating little lamb tagging behind it on a lead, the pathetic and supine Great Britain. What has happened to our moral sensibility? Did we ever have any? What do these words mean? Do they refer to a term very rarely employed these days - conscience? A conscience to do not only with our own acts but to do with our shared responsibility in the acts of others? Is all this dead? Look at Guantanamo Bay. Hundreds of people detained without charge for over three years, with no legal representation or due process, technically detained forever. This totally illegitimate structure is maintained in defiance of the Geneva Convention. It is not only tolerated but hardly thought about by what's called the 'international community'. This criminal outrage is being committed by a country, which declares itself to be 'the leader of the free world'. Do we think about the inhabitants of Guantanamo Bay? What does the media say about them? They pop up occasionally - a small item on page six. They have been consigned to a no man's land from which indeed they may never return. At present many are on hunger strike, being force-fed, including British residents. No niceties in these force-feeding procedures. No sedative or anaesthetic. Just a tube stuck up your nose and into your throat. You vomit blood. This is torture. What has the British Foreign Secretary said about this? Nothing. What has the British Prime Minister said about this? Nothing. Why not? Because the United States has said: to criticise our conduct in Guantanamo Bay constitutes an unfriendly act. You're either with us or against us. So Blair shuts up. 
The invasion of Iraq was a bandit act, an act of blatant state terrorism, demonstrating absolute contempt for the concept of international law. The invasion was an arbitrary military action inspired by a series of lies upon lies and gross manipulation of the media and therefore of the public; an act intended to consolidate American military and economic control of the Middle East masquerading - as a last resort - all other justifications having failed to justify themselves - as liberation. A formidable assertion of military force responsible for the death and mutilation of thousands and thousands of innocent people.

We have brought torture, cluster bombs, depleted uranium, innumerable acts of random murder, misery, degradation and death to the Iraqi people and call it 'bringing freedom and democracy to the Middle East'.

How many people do you have to kill before you qualify to be described as a mass murderer and a war criminal? One hundred thousand? More than enough, I would have thought. Therefore it is just that Bush and Blair be arraigned before the International Criminal Court of Justice. But Bush has been clever. He has not ratified the International Criminal Court of Justice. Therefore if any American soldier or for that matter politician finds himself in the dock Bush has warned that he will send in the marines. But Tony Blair has ratified the Court and is therefore available for prosecution. We can let the Court have his address if they're interested. It is Number 10, Downing Street, London.

Death in this context is irrelevant. Both Bush and Blair place death well away on the back burner. At least 100,000 Iraqis were killed by American bombs and missiles before the Iraq insurgency began. These people are of no moment. Their deaths don't exist. They are blank. They are not even recorded as being dead. 'We don't do body counts,' said the American general Tommy Franks.

Early in the invasion there was a photograph published on the front page of British newspapers of Tony Blair kissing the cheek of a little Iraqi boy. 'A grateful child,' said the caption. A few days later there was a story and photograph, on an inside page, of another four-year-old boy with no arms. His family had been blown up by a missile. He was the only survivor. 'When do I get my arms back?' he asked. The story was dropped. Well, Tony Blair wasn't holding him in his arms, nor the body of any other mutilated child, nor the body of any bloody corpse. Blood is dirty. It dirties your shirt and tie when you're making a sincere speech on television.

The 2,000 American dead are an embarrassment. They are transported to their graves in the dark. Funerals are unobtrusive, out of harm's way. The mutilated rot in their beds, some for the rest of their lives. So the dead and the mutilated both rot, in different kinds of graves.

Here is an extract from a poem by Pablo Neruda, 'I'm Explaining a Few Things':

And one morning all that was burning,
one morning the bonfires
leapt out of the earth
devouring human beings
and from then on fire,
gunpowder from then on,
and from then on blood.
Bandits with planes and Moors,
bandits with finger-rings and duchesses,
bandits with black friars spattering blessings
came through the sky to kill children
and the blood of children ran through the streets
without fuss, like children's blood.

Jackals that the jackals would despise
stones that the dry thistle would bite on and spit out,
vipers that the vipers would abominate.
Face to face with you I have seen the blood
of Spain tower like a tide
to drown you in one wave
of pride and knives.

Treacherous
generals:
see my dead house,
look at broken Spain:
from every house burning metal flows
instead of flowers
from every socket of Spain
Spain emerges
and from every dead child a rifle with eyes
and from every crime bullets are born
which will one day find
the bull's eye of your hearts.

And you will ask: why doesn't his poetry
speak of dreams and leaves
and the great volcanoes of his native land.

Come and see the blood in the streets.
Come and see the blood in the streets.
Come and see the blood in the streets!

* Let me make it quite clear that in quoting from Neruda's poem I am in no way comparing Republican Spain to Saddam Hussein's Iraq. I quote Neruda because nowhere in contemporary poetry have I read such a powerful visceral description of the bombing of civilians.

I have said earlier that the United States is now totally frank about putting its cards on the table. That is the case. Its official declared policy is now defined as 'full spectrum dominance'. That is not my term, it is theirs. 'Full spectrum dominance' means control of land, sea, air and space and all attendant resources.

The United States now occupies 702 military installations throughout the world in 132 countries, with the honourable exception of Sweden, of course. We don't quite know how they got there but they are there all right.

The United States possesses 8,000 active and operational nuclear warheads. Two thousand are on hair trigger alert, ready to be launched with 15 minutes warning. It is developing new systems of nuclear force, known as bunker busters. The British, ever cooperative, are intending to replace their own nuclear missile, Trident. Who, I wonder, are they aiming at? Osama bin Laden? You? Me? Joe Dokes? China? Paris? Who knows? What we do know is that this infantile insanity - the possession and threatened use of nuclear weapons - is at the heart of present American political philosophy. We must remind ourselves that the United States is on a permanent military footing and shows no sign of relaxing it.

Many thousands, if not millions, of people in the United States itself are demonstrably sickened, shamed and angered by their government's actions, but as things stand they are not a coherent political force - yet. But the anxiety, uncertainty and fear which we can see growing daily in the United States is unlikely to diminish.

I know that President Bush has many extremely competent speech writers but I would like to volunteer for the job myself. I propose the following short address which he can make on television to the nation. I see him grave, hair carefully combed, serious, winning, sincere, often beguiling, sometimes employing a wry smile, curiously attractive, a man's man.

'God is good. God is great. God is good. My God is good. Bin Laden's God is bad. His is a bad God. Saddam's God was bad, except he didn't have one. He was a barbarian. We are not barbarians. We don't chop people's heads off. We believe in freedom. So does God. I am not a barbarian. I am the democratically elected leader of a freedom-loving democracy. We are a compassionate society. We give compassionate electrocution and compassionate lethal injection. We are a great nation. I am not a dictator. He is. I am not a barbarian. He is. And he is. They all are. I possess moral authority. You see this fist? This is my moral authority. And don't you forget it.'
A writer's life is a highly vulnerable, almost naked activity. We don't have to weep about that. The writer makes his choice and is stuck with it. But it is true to say that you are open to all the winds, some of them icy indeed. You are out on your own, out on a limb. You find no shelter, no protection - unless you lie - in which case of course you have constructed your own protection and, it could be argued, become a politician.

I have referred to death quite a few times this evening. I shall now quote a poem of my own called 'Death'.

Where was the dead body found?
Who found the dead body?
Was the dead body dead when found?
How was the dead body found?

Who was the dead body?

Who was the father or daughter or brother
Or uncle or sister or mother or son
Of the dead and abandoned body?

Was the body dead when abandoned?
Was the body abandoned?
By whom had it been abandoned?

Was the dead body naked or dressed for a journey?

What made you declare the dead body dead?
Did you declare the dead body dead?
How well did you know the dead body?
How did you know the dead body was dead?

Did you wash the dead body
Did you close both its eyes
Did you bury the body
Did you leave it abandoned
Did you kiss the dead body

When we look into a mirror we think the image that confronts us is accurate. But move a millimetre and the image changes. We are actually looking at a never-ending range of reflections. But sometimes a writer has to smash the mirror - for it is on the other side of that mirror that the truth stares at us.

I believe that despite the enormous odds which exist, unflinching, unswerving, fierce intellectual determination, as citizens, to define the real truth of our lives and our societies is a crucial obligation which devolves upon us all. It is in fact mandatory. If such a determination is not embodied in our political vision we have no hope of restoring what is so nearly lost to us - the dignity of man.

* Extract from "I'm Explaining a Few Things" translated by Nathaniel Tarn, from Pablo Neruda: Selected Poems, published by Jonathan Cape, London 1970. Used by permission of The Random House Group Limited.

© The Nobel Foundation 2005

Copyright Guardian Newspapers Limited
_______________________________________________
paleopsych mailing list
paleopsych at paleopsych.org
http://lists.paleopsych.org/mailman/listinfo/paleopsych

From checker at panix.com Sat Dec 17 16:54:57 2005
From: checker at panix.com (Premise Checker)
Date: Sat, 17 Dec 2005 11:54:57 -0500 (EST)
Subject: [Paleopsych] Dave's Drug Regimen ; diary update of August 2005
Message-ID:

Dave's Drug Regimen ; diary update of August 2005
http://www.hedweb.com/diarydav/drug-regimen.html

[Can anyone evaluate this for me? Dave is David Pearce, if that means anything.]

DP DRUG REGIMEN
Diary update, August 2005
[1]tablets of wisdom?

1 AUGUST 2005

What am I on? 100 mg [2]amineptine (Survector) and 2 x 5 mg [3]selegiline (l-deprenyl, Eldepryl) daily. I also take omega-3-rich flaxseed [[4]linseed] oil supplements; [5]LEF's "Life Extension" mix; and [6]resveratrol with [7]quercetin. And that's it - for now, at least.

Amineptine increases exploratory behaviour in rats. Since taking it, I have been seized by a desire to travel the [8]world. Climbing [9]volcanoes in Indonesia, exploring [10]Machu Picchu, and communing with [11]giant tortoises in the Galapagos Islands are adverse side-effects not reported on the [12]product label. When at home in England, I spend my days webmastering for the [13]abolitionist project [my raison d'etre]; rocking autistically to [14]pop music on my [15]iPod; or pushing back the frontiers of knowledge in Borders' bookstore [16]Starbucks café - the haunt of [17]Brighton's movers-and-shakers, its resident conspiracy theorists, and anyone who needs a library that serves industrial-strength black [18]coffee.

In spite of its dopaminergic action, I almost never bothered to try amineptine in the first place. Not being a chemist, I assumed it would have [19]dumb-drug antimuscarinic effects in virtue of its being a [20]tricyclic. Amineptine is also devilishly difficult to obtain. Colourful tales of King Rat, double-dealing Brazilian lawyers, and a cast of characters plucked from a Tarantino movie are probably best omitted here. However, I seem to have emerged unscathed. In Rio, someone stole my [21]vegetarian shoes while I was playing football on the beach; but anyone who needs to nick my shoes probably deserves them more than I do. Missing footwear aside, I've now prudently stockpiled plenty of [22]amineptine for a rainy day. Frustratingly, Servier halted production in Brazil in early 2005. So the global amineptine famine is now worse than ever.

Amineptine's ill-named abuse-potential, i.e. the extremely mild euphoria that follows for an hour or so after ingestion, is weak and independent of its more subtle but [23]sustained elevation of mood and motivation. As with [24]nicotine or [25]caffeine, amineptine users typically learn to self-titrate their intake for optimal effect: uncontrolled dose-escalation is rare and self-defeating. I guess that amineptine's acute action might be mildly tempting to teenagers scouring the family home in vain for household products to swallow or [26]sniff; but this was a childhood rite of passage I somehow managed to skip.

After [27]Servier withdrew Survector in mainland Europe some years ago, amineptine.com received a stream of sometimes heart-rending emails from consumers, doctors and even [28]pharmacists who told us it was the only medication that worked: was there any way to obtain an alternative supply?
[Occasionally, we still get similar plaintive emails from people who say the same about the long-vanished noradrenaline and dopamine reuptake inhibitor [29]nomifensine (Merital).] Short of ordering amineptine as a "research chemical" via a chemical supply house, or [30]synthesizing it via grandma's bathtub chemistry kit, the answer was no. Even the usual [31]grey-market pharmacy sources on-line dried up.

This particular drug deficit was doubly frustrating here at [32]BLTC HQ: Brighton is the chemical capital of the UK, and if one wanted to score truckloads of class-A euphoriants or psychedelic [33]exotica, then one could do so (I am told) within an hour. But sustainable mood-brighteners are thin on the ground. Until tomorrow's designer genomes deliver invincible mental health for all, there is a pressing need for rationally designed psychotropics that are cheap, harmless and habit-forming. Admittedly, a marketing slogan on the lines of "Addictive By Design" isn't the ideal rallying-cry for a novel life-enriching pharmaceutical in today's prohibitionist climate. If I had a teenage daughter, I'd probably find myself chanting "Just Say No" too. But if a drug doesn't make you want to take it again, then it probably isn't any good - life-changing psychedelic epiphanies aside, and they might get disruptive every day of the week.

Alas, amineptine itself is no panacea. On the contrary, it's a dirty third-rate stopgap, yielding a rather one-dimensional kind of well-being suitable for a [34]Darwinian world. A noradrenergic/dopaminergic drug regimen can lend a certain inner tension to the psyche; it also makes one less introspective and more outward-directed - something of a misnomer if one is only an [35]inferential realist who believes that the so-called perceptual world is just a toy simulation each mind-brain runs. At any rate, amineptine is a [36]useful agent only because current alternatives are so poor. One day, I hope to find something better.

Most studies of mood-brighteners confound the response of anxious and/or agitated depressives with what may be known, somewhat unflatteringly, as retarded melancholics. On this basis, amineptine isn't statistically superior to other contemporary meds. For all the heady talk of [37]pharmacogenetics and a new era of "personalised medicine", [38]drug companies are loath to encourage "market segmentation" for fear of reduced profits. This reluctance is misplaced: melancholics in particular don't do at all well on current drug therapies. Indeed, prescription psychotropics are mostly so dire that anyone with a melancholic streak might be better served by a blend of old-fashioned [39]Papaver Somniferum and [40]coca leaves, despite their well-advertised [41]pitfalls.

Doctors bemoan the reluctance or inability of their [42]patients to take their prescription meds as instructed; but this comes perilously close to blaming the victim. "Patient compliance" is so erratic because licensed "[43]antidepressants" are often ineffective, side-effect-ridden or even actively [44]depressogenic. Perhaps this dismal track-record isn't surprising. Investigational drugs are tested to see if non-human animals will self-administer them and discarded if they do - arguably not the smartest heuristic for life-enhancement either for humans or our [45]horribly abused cousins. Thus the so-called [46]antipsychotics, for instance, frequently induce apathy, [47]dysphoria and generally [48]mess people's heads up, albeit in ways that ensure their victims intrude less on the lives of others.
As it happens, a neuroleptic/antipsychotic is one of the categories of drug I have never felt brave enough to investigate. Somehow I doubt if there will ever be a [49]PiHKAL for antipsychotics: devotion to the experimental method has its limits. Once my capacity to do useful webmastering for the abolitionist project is spent, however, I dream idly about entering the Guinness Book of Records under the category of world's greatest sustained euphoria - a form of record-breaking unaccountably missing from today's roster of human achievement. Goodbye depressive realism; hello [50]hedonistic bliss. Of course, it's not going to happen; but I have fantasies of implanting stem cells and nerve growth factors into my stunted reward centres and expiring in my dotage from an uncontrolled proliferation of pleasure cells. Does this bespeak a lack of moral seriousness? Well, perhaps.

On a more sober note, I take [51]selegiline at a higher dosage (2 x 5 mg daily) than is (probably) optimal for [52]life-extension purposes. The aim here is maximal selective inhibition of [53]MAO-B for improved mood and motivation; but it's not [54]MAO inhibition per se that accounts for selegiline's neuroprotective role, but its [55]propargylamine moiety. This is borne out by the neuroprotective action of the [56]S isomer of rasagiline, even though it's over 1000 times less potent as an MAO inhibitor.

Selegiline, the older drug I've taken for most of the past decade, will soon be available at substantially higher and MAO-unselective dosages in the form of the controlled-release [57]EMSAM patch. Bypassing the gastrointestinal tract avoids the need for dietary restrictions. I will probably try EMSAM if my rasagiline experiment doesn't work out, though only at a lower, relatively MAO-B selective dosage: agents with any kind of serotonergic action [unlike MAO-B, MAO-A also breaks down serotonin and noradrenaline] eventually make me listless. Instead, I need drive, exuberance, "life force". Anything worthwhile in this world - and most of its [58]horrors - has been achieved by larger-than-life characters, defying adversity to triumph over impossible odds. Unfortunately, I still undergo a catastrophe-reaction if one of my cacti dies or a friend wrinkles her nose in disapproval - not a good index of psychological robustness if one wants to save the world. Fortunately for victims of the syndrome in question, the Net offers hope to hormonally challenged, smaller-than-life characters, in theory at least.

As an experiment, I intend shortly to substitute the novel "second generation" [59]MAO-B inhibitor [60]rasagiline (Agilect, Azilect) for selegiline. There is negligible evidence that selegiline's trace amphetamine metabolic by-products occur in sufficient quantities to exert any long-term adverse effects; but I would like to explore MAO-B inhibited life without them. Hence the attraction of [61]Professor Youdim's discovery.

One reason for rasagiline's lack of abuse potential/acute enjoyability may be the equivocal role of [62]phenylethylamine (PEA). Several studies confirm PEA may have an [63]antidepressant effect. It may serve as a [catecholamine] "[64]enhancer". Yet PEA can also act acutely as an endogenous anxiogen. This tallies with my own experience: a mild anxiety or inner tension - occasionally amounting to [65]OCD-like symptoms - ensues very shortly after taking selegiline. Some subjects say they notice no subjective acute or chronic effects while on it.
Others report a slight elevation of mood and alertness from the trace amphetamine [66]metabolites some three hours or so after taking a tab. I most definitely fall into the latter category too; and I can't believe it does me any long-term good.

My opinion of [67]rasagiline may be coloured by factors unrelated to its [68]pharmacology. I was not wildly amused late last year to be contacted by Israeli drug giant [69]Teva's lawyers threatening the usual fire-and-brimstone over rasagiline.com if the domain isn't surrendered to their client for a token sum. Virtual estate stirs primitive territorial instincts, empire-building and the baser human passions no less than its old-world physical counterparts. Thus in tones more suitable for an unindicted war-criminal, it is alleged that the international non-proprietary name ([70]INN) in question should rightly be regarded as an unregistered trademark of Teva. [If pharmaceutical companies were as inventive in designing new drugs as they were in litigation then we'd already all be in chemical nirvana.] It is hinted darkly that one might be a domain name [71]speculator rather than a corporate branding strategist. Heaven forbid.

On re-reading, my slightly frosty reply might be misinterpreted to imply that accepting sizeable amounts of cash from a drug company is a notion I find almost physically too painful to contemplate. This is not in fact wholly the case. Indeed, on more than one occasion I have been struck by the thought that drug companies have too much money and we have too little. It would be nice to have our own private research lab, for a start. But the way the industry aspires to monopolise the pharmaceutical namespace to control information is bad news for consumers ["patients"]. [72]Big Pharma will probably succeed in capturing the namespace in the long run, whether by fair means or foul. Certainly dot.com snobbery is a vice best suited to those with deep pockets.

[73]Gaboxadol.com, for instance - gaboxadol is a non-benzodiazepine GABA(A) agonist now in phase III clinical trials - was once a part of the BLTC [74]portfolio. Gaboxadol is being developed (with [75]Merck) by Teva's marketing partner, the Danish-based [76]Lundbeck. So when a fellow rings up from Turkey wanting gaboxadol.com "for my father's clothing business", I am intrigued. His tale of his sick mother touches my heart. Two phone calls later, however, he reappears in Denmark with the same ISP and [77]Speednames registration-agent as Lundbeck - who will also, as it happens, co-promote rasagiline with Teva. The plot thickens. If drug companies will resort to such subterfuge to acquire a domain name, then who knows what they might do with, say, hundreds of millions of dollars at stake in a late-stage clinical trial.

Later this year, I may explore [78]agomelatine (Valdoxan). I'm uncertain about the likely effect of its action as a [79]melatonin receptor agonist. Will it just make me hypersomnolent during the day? Nor do I really understand the implications of its action at the 5-HT2b receptor. But agomelatine's role as an antagonist at the serotonin [80]5-HT(2c) receptors is a bit more exciting. Blockade of the 5-HT(2c) receptors [81]enhances frontocortical dopaminergic and adrenergic activity - which is nice. Conversely, serotonin 5-HT(2c) receptor activation can be profoundly unpleasant.
Surprisingly perhaps, a drug combination of [82]BZP and the 5-HT(2c) agonist [83]TFMPP really is agreeably E-like, taken acutely at any rate; but the effect of TFMPP on its own can be dysphoria, depersonalisation and derealisation. [A friend-of-a-friend tried TFMPP once and just curled up in a foetal ball: not the ideal [84]serenic. I wonder, idly, how agomelatine combines with BZP.]

Despite my slight shift in neurobabble over time, I still think in the language of receptor subtypes rather than gene expression profiles. In a decade or two, this conceptual scheme may seem almost as quaint as the idiom of [85]humoral psychology, and laughably simple-minded; but we are creatures of our time. In the more distant future, I suspect the ontology of the [86]materialist paradigm will be overthrown, leaving only its formal shell; and the world of pure consciousness will be mathematically described by the harmonics of [87]superstrings or their [88]braneworld cousins. But I guess this is the kind of speculation best confined to one's [89]diary.

Nothing in my drug regimen acts significantly to increase hedonic capacity as distinct from motivation. The [90]mu-opioid receptor is still taboo. However, at some stage I do at least want to add a selective kappa opioid receptor antagonist to my dopaminergic regimen [[91]kappa is the "nasty" opioid receptor; [92]mu is rewarding]. Unfortunately, [93]nor-binaltorphimine, the prototype selective kappa antagonist, is only weakly centrally active. In any case, it's hard to find. In the meantime, my native cravings for opioids must be satisfied by Darwinian social interactions. [94]Cynics might claim this source is unreliable, adulterated, expensive and of uneven quality; and true enough, I can barely offer anyone a codeine tab's equivalent of reward myself. But one way or another, we are all addicted. If I learned tomorrow that I had only a few months to live, then I'd probably exit the world with selective mu agonists to complement my regimen of dopaminergics. [Shades of [95]speedballing or the [96]Brompton cocktail.] With any luck, this kind of crude but enjoyable mix won't be necessary.

In a couple of decades or so, the first true psychotropic wonderdrugs and somatic gene therapies should be available. Rational design will replace serendipity. I hope so. If one has tasted, say, the emotional release, self-insight and empathetic bliss of pharmaceutically pure [97]MDMA, then it's hard to accept the third-rate imitation of mental health bequeathed by natural selection. However, the immediate [98]product pipeline is thin. [99]Substance P (NK1 receptor) antagonists aren't panning out as hoped. [100]CRF(1) receptor antagonists are interesting, but they [101]adversely affect intellectual performance. Broad-spectrum "triple" reuptake inhibitors like [102]DOV 216,303 are promising because they inhibit the reuptake of dopamine as well as noradrenaline and serotonin; but heaven knows if and when they'll get a product license. No one seems to be working on [103]sustainable empathogen-entactogens - not even in the scientific [104]counter-culture, let alone mainstream medical science. And today's [105]opioids are all flawed. Yet in future, when opioid [106]tolerance is eliminable, [107]sub-type selectivity improved and [108]side-effects minimised, it's conceivable that this demonised class will make a comeback both as tools for life-enrichment and in psychiatric medicine - especially if the rhetoric of the War On Drugs subsides.
[At present, of course, opioids are hard enough to access [109]lawfully even for serious [110]pain-relief.] In particular, neuroactive opioids targeted on sub-types of the [111]mu receptor could form the basis of some spectacularly life-enhancing cocktails. Some day, I hope personally to try a combination of a mu agonist and centrally active selective kappa antagonist, together with a [112]peripheral antagonist to minimise unwanted bodily side-effects.

One of the greatest discoveries this century, I think, will be the identification of the final common pathway ([113]FCP) of pleasure, possibly downstream of a subtype of the mu opioid receptor. In the late 20th century, researchers had hoped they were homing in on the FCP in the mesolimbic [114]dopamine system. This optimism proved [115]premature. Incentive-motivation ["[116]wanting"] is [117]dissociable from [118]liking. But the molecular signature of pure pleasure holds the key to the universe, unlocking the power to manufacture limitless value, meaning and [119]significance - magic to infuse the [120]cosmos and all sentient life. Without the pleasure-pain axis, nothing matters. It permeates and underlies our entire conceptual scheme. [121]Utilitarians believe we just need to delete the axis at one end and vastly [122]extend it at the other. Will our descendants be hypersentient as well as superintelligent? I think so.

For sure, the search for long-lasting experience-intensifiers can scarcely rank as morally urgent given our malaise-ridden existence today. Understandably, millions self-medicate with [123]alcohol to dull their awareness. Yet just occasionally, I muse on the molecular machinery needed to churn out hypervaluable experiences in ineffably delightful virtual worlds - a prospect that isn't immediately obvious when wading through literature about medium [124]spiny neurons in the rostral shell of the [125]nucleus accumbens.

I would also like to find better ways of coping with [126]stress. Although my consciousness has a harder-edged quality than that of early drug-naïve DPs, I lack the strong-mindedness and resilience that might be conferred by taking an [127]anabolic steroid. [[128]DHEA is the nearest I've got to trying one of those. It had the unwanted effect of increased libido, so I stopped.] [129]Omega-3 essential fatty acid supplementation makes one feel calmer; but it would be nice to feel serene.

Whatever the cause, intolerance of stress is not the ideal qualification for running a [130]web hosting service. This holds true even if in practice one is just the dead wood: it's the [131]sysadmin who keeps everything humming. In truth, the historical origins of Knightsbridge Online are less venerable than its imposing web shop-front might suggest. Back when the web was young, I'd simply conjured up the most snobbish-sounding name I could think of from the virgin [132]UK namespace. Our chosen title was a bid to reassure KO's (few) corporate clients, some of whom might be unsettled by the wilder reaches of the BLTC websites, not to speak of the miscellaneous anarchists, Buddhists, transhumanists and other exotic life-forms populating the server. In reality, our London presence is exhausted by the server at Telehouse; and the [133]picture of a corporate head office on KO's website bears a remarkable resemblance to the Enron HQ. Oddly enough, the Knightsbridge brand sometimes leads to confusion with our famous local corner-store.
Every December, for instance, we get contacted by miscellaneous high-net-worth individuals wondering what has happened to their Christmas hampers, etc. Direct contact with Harrods itself has been more limited. Several years ago, its Managing Director emailed asking if we could change our [134]hotlink [on knightsbridge.co.uk] from Mr Mohamed Al Fayed's [135]personal website to the [136]store site - an act that must surely require a certain courage. Presented with such an opportunity, any entrepreneur worth his salt would have leapt at the chance to discuss joint ventures, strategic partnerships, etc. I just said yes, of course, and meekly complied. We remain minnows in the corporate shark-pool. For now, at least, everything seems [more-or-less] under control. Yet running a server entails perpetual worrying about disaster scenarios. Thus we rent a box in Texas that is used mainly for daily incremental off-site back-up. So if a dirty bomb takes out Telehouse, our clients [and [137]hedweb!] can rest assured that their sites will still be safe. However, the reason we set it up originally was more mundane. The controversial multi-level marketing firm [138]Herbalife objected to a [139]website by a client on the server; and they went as far as threatening our connectivity suppliers if we didn't remove the skit in question. Quite what else might ever need to be exiled is unclear. Perhaps [140]president-bush.com, currently registered to a Mr Osama bin Laden [not sure about his politics; but he pays his bills], or the lively anarchist rag [141]Schnews. Who knows. How else would I like to change my ordinary Darwinian state of consciousness? This is slightly more feasible than trying to change the world. Earlier in my life, I experienced chronic angst tinged with melancholia. Thankfully, these have been chemically banished, albeit not in favour of an irrepressible joie de vivre. I would still like to eliminate various residual [142]atypical depressive signs, notably rejection-sensitivity. But I'm not yet clear how; I'm not going to take an [143]SSRI, and [144]5-HTP just makes me lethargic. Rejection-sensitivity is especially irrational given my [145]Matrix-style epistemology. The fate of my zombie avatars in other people's [146]virtual worlds really shouldn't matter per se any more than the fate of the zombies I zap in [147]Far Cry. Yet the phantoms in my own little egocentric [148]world can be scarily realistic - though I doubt if they can compete with the supernormal stimuli in synthetic [149]VR games to come. Either way, my desire for chemical self-improvement isn't entirely self-interested. Upgrading my design-specifications would mean I could do more for the [150]abolitionist project. Admittedly, I think I could die happy knowing that [151]paradise-engineering has secured a place on the lunatic fringe. But could one do more? At present, the [small] percentage of [152]sympathisers who contact us wanting to get actively involved just get deflected to a [153]web-based strategy to win hearts and minds, or urged to participate in the wider but disparate [154]transhumanist movement - though sadly, only a [155]minority of transhumanists endorse abolitionism [perhaps my predicting that posterity will view our meat-eating habits as some sort of cannibalistic holocaust displays a less than transhuman level of tact and diplomacy]. But organisation-building would involve mastering the dark arts of intrigue, infighting and primate power-politics. These are skills for which I am [156]ill-equipped.
Regrettably, any [157]post-Darwinian transition will depend on super-Machiavellian apes, probably with far higher functional testosterone levels than me. Whatever its organisational guise or ultimate idiom, the abolitionist credo deserves to be shouted from the rooftops. I only wish I could deliver barnstorming performances myself. Alas, my natural inclination is to hide from a hostile world full of potential [158]predators and fearsome alpha males. My recent bouts of wanderlust are a rare [159]drug-induced aberration. A self-effacing manner is all very well and frightfully British; but it isn't going to win over the unconvinced. In fact, it's been said, only half tongue-in-cheek, that it was our self-deprecating humour that lost Britain the Empire. Whatever the case for overcoming one's diffidence, my nagging suspicion is that organisation-building is premature - though I deeply [160]admire those who try to do so. If a broad power-elite consensus existed - perhaps on the lines of putting a man on the moon or the [161]human genome project - then implementing any global blueprint for a [162]cruelty-free world might take a century or so. Currently this sort of timescale is sheer fantasy, or at least hugely optimistic. On more sociologically plausible, incrementalist scenarios, the [163]transition will take centuries, or even millennia. Only when the [164]reproductive revolution of designer babies starts to unfold in a few decades or so will the ethical dilemmas at stake become real to most people. [e.g. Do I want to endow my prospective kids with genes predisposing to depression, anxiety disorders or malaise? etc.] This lazy biotechnological determinism chimes in all too well with my passive temperament. Until the relevant technology matures, all the highfalutin talk about transcending our biological heritage, etc., sounds like mere science fiction - like pain-free [165]surgery before [166]anaesthesia. Of course, history is littered with the bones of people who thought their pet [167]nostrums were inevitable. Could one's own life be no less absurd? Yes, quite possibly. But might it be absurd in the sense that its [168]Bentham-plus-[169]biotech premise is too obvious to be worth re-stating - like a fellow who walks around with a sandwich-board all day proclaiming the world is round rather than flat? Perhaps; but this is the kind of absurdity I could live with. * * * [170]The Hedonistic Imperative [171]HOME [172]DP Diary [173]Interview [174]Interview 2 [175]Future Opioids [176]Utopian Surgery? [177]Wirehead Hedonism [178]The Good Drug Guide [179]The Pinprick Argument [180]MDMA: Utopian Pharmacology [181]Critique of Huxley's Brave New World E-mail Dave [182]dave at hedweb.com References 1. http://www.hedweb.com/diarydav/welcome.htm 2. http://www.amineptine.com/ 3. http://www.selegiline.com/ 4. http://www.biopsychiatry.com/lipidsmood.htm 5. http://www.lef.org/ 6. http://nootropics.com/resveratrol/neuroprotectant.html 7. http://www.nootropics.com/quercetin/index.html 8. http://www.huxley.net/organic.htm 9. http://www.sanur.org/indonesia/mount/bromo.html 10. http://www.hedweb.com/diarydav/machu-picchu.html 11. http://www.tortoises-turtles.com/galapagos.html 12. http://www.amineptine.com/product-info/index.html 13. http://www.abolitionism.com/ 14. http://www.hedweb.com/diarydav/4-5star.htm 15. http://www.apple.com/ipod/ 16. http://www.hedweb.com/diarydav/borders-brighton.html 17. http://hedweb.com/brighton/jedi.html 18. http://nootropics.com/caffeine/faq.html 19.
http://www.biopsychiatry.com/dumbdrug.htm 20. http://www.biopsychiatry.com/tricy.htm 21. http://www.vegetarian-shoes.co.uk/ 22. http://www.amineptine.com/refs/index.html 23. http://www.amineptine.com/amineptinevfluox.html 24. http://www.biopsychiatry.com/tobacco/index.html 25. http://www.biopsychiatry.com/cafnic.htm 26. http://www.glue-sniffing.com/inhalants.html 27. http://www.servier.com/ 28. http://www.biopsychiatry.com/online-pharmacies/ 29. http://www.nomifensine.com/refs/ 30. http://www.amineptine.com/synthesis/manufacture.html 31. http://www.yourpharmastore.com/ 32. http://www.bltc.com/ 33. http://www.brightoncity.com/martyrdom.html 34. http://www.utilitarianism.com/pinprick-argument.html 35. http://cns-alumni.bu.edu/~slehar/quotes/russell.html 36. http://www.amineptine.com/aminonset.htm 37. http://www.biopsychiatry.com/pharmacogenetics.htm 38. http://www.biopsychiatry.com/drugcompanies/index.html 39. http://www.opioids.com/images/opium-poppy.html 40. http://www.cocaine.org/cocaleaves.html 41. http://www.opioids.com/timeline/index.html 42. http://www.hedweb.com/bgcharlton/sdtm.html 43. http://www.biopsychiatry.com/refs/index.html 44. http://www.biopsychiatry.com/melser.htm 45. http://www.animal-rights.com/ 46. http://www.biopsychiatry.com/antipsychotics.html 47. http://www.biopsychiatry.com/neuroleptics.htm 48. http://www.hedweb.com/bgcharlton/atypical-neuroleptics.html 49. http://www.mdma.net/alexander-shulgin/pihkal-tihkal.html 50. http://utilitarianism.com/hedonism.html 51. http://www.selegiline.com/refs/ 52. http://www.selegiline.com/deplong.html 53. http://www.selegiline.com/review.htm 54. http://www.selegiline.com/mao.html 55. http://www.selegiline.com/propargylamines.html 56. http://www.rasagiline.com/neuroprotective.html 57. http://www.selegiline.com/article/emsam.html 58. http://www.amphetamines.com/adolf-hitler.html 59. http://www.rasagiline.com/mao-b.html 60. http://www.rasagiline.com/ 61. http://www.nootropics.com/smartdrugs/future.html 62. http://www.selegiline.com/phenylethylamine.html 63. http://www.selegiline.com/pea.html 64. http://www.selegiline.com/enhancers.html 65. http://www.biopsychiatry.com/dopamocd.htm 66. http://www.selegiline.com/amphet.html 67. http://www.rasagiline.com/refs/index.html 68. http://www.rasagiline.com/pharmacology.html 69. http://www.tevapharm.com/ 70. http://www.who.int/medicines/organization/qsm/activities/qualityassurance/inn/orginn.shtml 71. http://www.dnjournal.com/columns/cover070405.htm 72. http://www.biopsychiatry.com/bigpharma/bigpharma.html 73. http://www.biopsychiatry.com/gaboxadol.htm 74. http://www.bltc.com/bltc-research.html 75. http://www.merck.com/ 76. http://www.lundbeck.com/ 77. http://speednames.com/ 78. http://www.biopsychiatry.com/agomelatine.htm 79. http://www.biopsychiatry.com/agomelatine-valdoxan.htm 80. http://www.biopsychiatry.com/valdoxan.htm 81. http://biopsychiatry.com/valdoxan.htm 82. http://mdma.net/tfmpp/tfmpp-bzp.html 83. http://www.biopsychiatry.com/tfmpp/index.html 84. http://www.biopsychiatry.com/serenanx.htm 85. http://www.general-anaesthesia.com/people/blood-letting.html 86. http://www.materialism.com/ 87. http://www.superstringtheory.com/ 88. http://en.wikipedia.org/wiki/M-theory 89. http://www.hedweb.com/diarydav/index.html 90. http://www.hedweb.com/opioids/opiates.html 91. http://opioids.com/kappa/depressive.html 92. http://opioids.com/mu/genepharm.html 93. http://opioids.com/kappa/kappa-antagonist.html 94. http://www.oxytocin.org/oxytoc/love-science.html 95. 
http://www.biopsychiatry.com/cocktail.htm 96. http://www.cocaine.org/ 97. http://www.mdma.net/index.html 98. http://www.neurotransmitter.net/newdrugs.html 99. http://www.biopsychiatry.com/substancep-antag.htm 100. http://www.biopsychiatry.com/crf1-antagonists.htm 101. http://www.biopsychiatry.com/crfmem.htm 102. http://www.biopsychiatry.com/antidepressantbroad.htm 103. http://www.mdma.net/ 104. http://www.designer-drugs.com/pte/12.162.180.114/dcd/chemistry/rhodiummirror.html 105. http://www.opioids.com/ 106. http://www.opioids.com/tolerance/paincontrol.html 107. http://www.opioids.com/cogmood/subtypes.html 108. http://www.opioids.com/methylnaltrexone/index.html 109. http://www.opioids.com/offshorepharmacy/index.html 110. http://www.opioids.com/legal/criminalised.html 111. http://opioids.com/mu/interact.html 112. http://opioids.com/methylnaltrexone/structure.html 113. http://www.biopsychiatry.com/fcp.htm 114. http://www.biopsychiatry.com/dopamine-antidepressants.htm 115. http://www.wireheading.com/pleasure.html 116. http://www.wireheading.com/pleasure/wanting-liking.html 117. http://www.biopsychiatry.com/hyperdopaminergic.html 118. http://www.wireheading.com/pleasure/index.html 119. http://www.wireheading.com/hypermotivation.html 120. http://www.hedweb.com/object32.htm 121. http://www.utilitarianism.com/ 122. http://www.hedweb.com/ 123. http://www.paradise-engineering.com/misc/index.html 124. http://www.biopsychiatry.com/medium-spiny.htm 125. http://www.nucleus-accumbens.com/ 126. http://moodfoods.com/misc/cold-soup.html 127. http://www.biopsychiatry.com/steroids/anabolic.html 128. http://www.biopsychiatry.com/dhea-antidepressant.htm 129. http://moodfoods.com/omega3/ 130. https://www.knightsbridge.net/ 131. http://www.sysadminday.com/ 132. http://www.nominet.org.uk/ 133. http://www.knightsbridge.net/faq/hq.html 134. http://www.knightsbridge.co.uk/ 135. http://www.alfayed.com/ 136. http://www.harrods.com/ 137. http://www.hedweb.com/confile.htm 138. http://www.herbalife.com/ 139. http://www.herbal-lies.com/ 140. http://www.president-bush.com/ 141. http://www.schnews.org.uk/ 142. http://www.biopsychiatry.com/atypical.html 143. http://biopsychiatry.com/emotionalblunting.htm 144. http://www.biopsychiatry.com/5htp-supplements.htm 145. http://whatisthematrix.warnerbros.com/rl_cmp/phi.html 146. http://cns-alumni.bu.edu/~slehar/Representationalism.html 147. http://www.farcry-thegame.com/ 148. http://www.huxley.net/organic.htm 149. http://www.vrsource.org/ 150. http://www.utilitarianism.com/biotech.html 151. http://www.huxley.net/ 152. http://www.hedweb.com/jon-martin/index.html 153. http://www.bltc.com/faq.html 154. http://www.transhumanism.org/ 155. http://www.hedweb.com/object27.htm 156. http://www.mdma.net/ecstasy-honesty.html 157. http://www.hedweb.com/object26.htm 158. http://www.herbweb.net/ 159. http://www.hallucinogens.com/lsd/francis-crick.html 160. http://www.abolitionist-society.com/ 161. http://www.biopsychiatry.com/pharmacogenomics.htm 162. http://www.bltc.com/buddhism-suffering.html 163. http://www.hedweb.com/object31.htm 164. http://hedweb.com/object30.htm 165. http://www.general-anaesthesia.com/people/velpeau.html 166. http://www.general-anaesthesia.com/ 167. http://www.hedweb.com/hedethic/interview.html 168. http://www.utilitarian.net/bentham/ 169. http://www.paradise-engineering.com/biotechnology/index.html 170. http://www.hedweb.com/hedab.htm 171. http://www.hedweb.com/index.html 172. http://www.hedweb.com/diarydav/index.html 173. http://www.hedweb.com/hedethic/interview.html 174. 
http://www.hedweb.com/hedethic/interview.htm 175. http://opioids.com/ 176. http://www.general-anaesthesia.com/ 177. http://www.wireheading.com/ 178. http://www.biopsychiatry.com/ 179. http://www.utilitarianism.com/ 180. http://www.mdma.net/index.html 181. http://www.huxley.net/ 182. mailto:dave at hedweb.com From checker at panix.com Sat Dec 17 16:56:51 2005 From: checker at panix.com (Premise Checker) Date: Sat, 17 Dec 2005 11:56:51 -0500 (EST) Subject: [Paleopsych] David Pearce interviewed by RU Sirius Message-ID: David Pearce interviewed by RU Sirius http://www.hedweb.com/hedethic/interview.html First published in the [1]NeoFiles Date: 16 December 2003 Feeling Groovy, Forever David Pearce in Conversation with R.U. Sirius "This manifesto outlines a strategy to eradicate suffering in all sentient life." So begins David Pearce's Web-based manifesto [2]The Hedonistic Imperative. Pearce believes that through such technological manipulations as genetic engineering, better drugs, and precise stimulation of various localities in the brain, human beings (just for starters) can live in a sort-of paradise in which all unpleasant states of consciousness have been banished to the old Darwinian Era. These new-found paradisiacal brain-states will exist within the context of an advanced, nanotechnologized society in which oppressive external conditions have also been eliminated. For Pearce, the great shift into a hedonic society will come about by genetic intervention: "Gene therapy will be targeted both on somatic cells and, with even greater forethought, the [3]germ-line. If cunningly applied, a combination of the cellular enlargement of the meso-limbic [4]dopamine system, selectively enhanced metabolic function of key intra-cellular sub-types of opioidergic and serotonergic pathway, and the disablement of several countervailing inhibitory feedback processes will put in place the biomolecular architecture for a major transition in human evolution." His website [5]HEDWEB includes the substantial Hedonistic Imperative treatise, as well as a [6]marvelous critique of Huxley's Brave New World, another lengthy discussion of [7]MDMA (ecstasy), a philosophical essay that tries to answer the question [8]"Why does anything exist?", and a section advocating Animal Liberation, or at least something akin to a global welfare state for higher non-human lifeforms. Pearce lives quietly in Brighton, England. He communicates masterfully through his website and is unaccustomed to being interviewed. But I prevailed upon him in a transatlantic phone conversation. NEOFILES: While initial steps towards your Hedonistic Imperative seem to involve improved drugs and wireheading (stimulating pleasure centers in the brain), what you are really talking about is biological manipulations that will produce humans who experience a variety of positive states ranging from high-functioning well-being to serene bliss, and who don't experience negative states - or at least only the functional analogs of negative states that lack the raw feel of mental pain as we understand it today. Can you say a bit about the technology behind this idea? DAVID PEARCE: Well, there are technical obstacles and ideological obstacles to the abolitionist project. But if one deals first with the technical challenges, I think there are essentially three options. One is wireheading. Wireheading is (probably) a dead-end. But it is illuminating because the procedure shows that pleasure has no physiological tolerance.
That is to say, it's just as exhilarating having one's pleasure centers stimulated 24 hours after starting a binge as it was at the beginning. NF: in contrast to recreational drugs where euphoriants and even the best hallucinogens have diminishing returns DP: Yes, the high is typically followed by the low, or at least by severely diminished rewards as the negative feedback mechanisms of the brain kick in. Something similar occurs with natural rewards such as food, drink and sex. But with wireheading this doesn't happen. Pleasure, and perhaps pure pleasure alone, shows no tolerance. Of course, our image of wireheading itself is dreadful. People confuse it with torture or the coercive psychiatry of One Flew Over the Cuckoo's Nest. And a whole society based on wireheading wouldn't be sustainable, in its crude forms at least. No one would want to reproduce and raise children. So, secondly, there's the option of designing better drugs. The prospect of lifelong drug-induced happiness strikes many people as unappealing. Drug-induced happiness sounds shallow, amoral and one-dimensional. But the pleasure drugs of the future will be far richer in their effects than, say, soma in Huxley's notorious Brave New World. At present we're missing out on some incredibly beautiful states of consciousness because of the legacy of our brutish Darwinian past and the bioconservative ideologies that sustain it. Even so, I think drugs are only a stopgap. In the long run, if we're morally serious about creating a cruelty-free world, we're going to use the third option, genetic engineering. Right now, we're on the brink of a reproductive revolution, the era of "designer babies" if you will, where responsible parents will choose the genetic makeup of their kids. Initially, we're only going to tinker with the genome. Eventually, I think we're going to rewrite it altogether. And to be deliberately simplistic: imagine if you could choose the average lifetime mood level of your future offspring on a genetic dial - with number 1 on the dial representing modest well-being and number 10 representing sublime bliss. What setting would you choose for your child? Most prospective parents, I think, will choose settings at the higher end of the scale - not sublime bliss perhaps, but certainly genotypes encoding a predisposition to lifelong happiness. We may perhaps want many different things for our kids (high intelligence, good looks, success), but their happiness is at least one of these criteria; and ultimately, I think, it's the most important. The good news here is that in future, such (un)happiness needn't be left to a cruel Darwinian genetic lottery or Fate. So it's worth stressing that progress towards the abolition of suffering doesn't entail the global adoption of an ideology of paradise engineering - or anything so grandiose and utopian as the abolitionist project I advocate. Initially at least, progress to a kinder world merely entails parents taking genetic decisions about what's best for their kids... Of course, this revolution in the technology of reproductive medicine is still some way off. Today, even early adopters aren't doing anything much more ambitious than choosing their child's gender. But in maybe three or four decades or so, and possibly substantially sooner, we'll be choosing such traits as the average hedonic set point of our children.
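[A toy simulation of the genetic-dial thought experiment above, assuming only what the answer itself claims: that most parents would pick a set point above their own on a 1-10 scale. The population size, starting range and drift figures are invented for illustration.]

    import random

    # Toy model: each generation of parents dials a mood set point for their
    # children somewhat above their own; the population mean drifts upward
    # toward the top of the 1-10 scale. All parameters are invented.
    random.seed(0)
    population = [random.uniform(3.0, 6.0) for _ in range(1000)]  # Darwinian baseline

    def choose_for_child(parent_set_point):
        """Parents pick a higher setting than their own, capped at 10."""
        return min(10.0, parent_set_point + random.uniform(0.5, 2.0))

    for generation in range(1, 6):
        population = [choose_for_child(p) for p in population]
        mean = sum(population) / len(population)
        print(f"generation {generation}: mean set point {mean:.2f}")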
Over time, I think allelic combinations [suites of variant copies of mission-critical genes] that leave their bearers pre-disposed to unpleasant states of consciousness - unpleasant states that were genetically adaptive in our ancestral environment - will be weeded out of the gene pool. For a very different kind of selection pressure is at work when evolution is no longer blind and random, i.e. when rational agents pre-design the genetic make-up of their future offspring in anticipation of its likely effects on their kids. In that sense, we're heading for a Post-Darwinian transition - ultimately, I believe, to some form of paradise-engineering, but perhaps to something else altogether. NF: In very rough terms, what we're talking about is juicing up the dopaminergic and serotonergic systems, among a bunch of other neurochemical tweaks, in very precise ways, at the level of the genes, once we fully understand how genes control these things. DP: Yes. The neural basis of our so-called basic moods and emotions is simpler than so-called higher cognitive functions. But undeniably, this neural basis is still fiendishly complicated, the simplicity of wireheading notwithstanding. For instance, the mesolimbic dopamine system may not be, as we've sometimes supposed, the final common pathway of pleasure in the brain: dopamine apparently mediates "wanting" (i.e. incentive-motivation) as much as "liking", which is signaled by activation of the mu opioid receptors. But if we focus here on the simple monoamines, an obvious target for intervention is indeed the mesolimbic dopamine system. One of the most common objections to the idea of abolishing suffering - ignoring here the prospect of full-blown paradise-engineering - is that without the spur of discontent we'd soon become idle and even bored. If we were all happy, what would we do all day? But enhanced dopamine function is associated, not just with euphoria, but with heightened motivation; a deeper sense of meaningfulness, significance and purpose; and an increased sensitivity to a greater range of rewards. So one possible option for paradise engineering is to focus on enriching the dopaminergic system to promote (a genetic predisposition to) lifestyles of high achievement and intellectual productivity. That's one option at least. Another sort of predisposition is to pursue a lifetime of introspection, meditation and blissful tranquility. If I seem to dwell unduly on ways of enriching dopamine function, that's because exploring its amplification is a useful corrective to a widespread misconception, i.e. that happiness inevitably leads to stagnation. The critical point, I think, is that to be blissful isn't the same as being "blissed out". NF: What about the possibility that madness could come from these amplified states? DP: Well, you can't just unselectively pump up the dopaminergic system and hope to induce states of high-functioning well-being. You might just induce chronic psychosis instead. Genetically enriching our mental health demands a deeper understanding of the workings of the brain than we have today. The era of mature genomic medicine is still decades away. But consider even something as simple and monogenetic as the association between one variant of the [9]dopamine DRD4 receptor allele and an unusually optimistic, novelty-seeking temperament. Other things being equal, this trait may be seen as positive.
Most prospective parents, if given the choice, would probably opt for an allele predisposing to such a trait in preference to, say, any genotype predisposing to a depressive, anxiety-ridden temperament for their kids. To take another example: prospective parents in future will probably opt for two copies of the longer version of the allele of the serotonin transporter gene (5-HTTLPR) whose shorter version is associated with anxiety disorders and neuroticism. I stress that these are just toy examples. More sophisticated versions of genetic choices such as the above are likely to be commonly available later this century and beyond. Such choices will presumably be assisted by computational software with an ultra-friendly user interface so we don't all have to become molecular psychiatrists and can concentrate on making high-level choices of trait instead. One objection springs to mind here. Mood and personality are influenced by a multitude of different genes, not to mention the vagaries of the environment. So it might seem that all but the simplest interventions, involving only a handful of alleles, will lead to an impossible combinatorial explosion of possibilities - and unanticipated consequences to match. This may indeed be the case. The very expression "designer babies" conjures up a dystopian nightmare, not paradise-engineering. However, mature quantum computing will allow us (in a few decades??) to perform super-sophisticated modeling and fabulously complex simulations which are (many) orders of magnitude more powerful than anything feasible today. I think the pessimists will be confounded. I could be wrong. We shall see. NF: Returning to the intermediary stage - drug development - you seem to find the greatest promise in the development of anti-depressants and in MDMA. Care to explain this? DP: Yes. MDMA (Ecstasy) is interesting not least because of the way its use challenges our notion that drug-taking must be inherently selfish, i.e. hedonistic in the baser sense of the term. At its best, the MDMA experience shows that drug-induced well-being can be profoundly loving, insightful and empathetic. Unfortunately, MDMA itself is potentially neurotoxic to the serotonergic axons - even at non-heroic dosages. Although the claims of the drug warriors about its dangers are clearly overblown, there's no denying that MDMA isn't the sort of agent you can use regularly on a long-term basis in the way you would take a so-called anti-depressant or other psychoactive prescription drug. Yet here I think lies the crux. The mainstream medical conclusion drawn from MDMA's (probable) human toxicity is that MDMA - and other insight and empathy drugs used by the scientific counterculture - should be banned, or at least their use discouraged. But there's a better option: we should be systematically researching ways to design safe and sustainable entactogen/empathogens. Critically, their neurotoxicity can be dissociated from their therapeutic effect. And once the neurological signature and precise molecular mechanisms underlying both the magic and the ugly post-E serotonin dip are worked out, there's no reason why states of blissful empathy can't be sustained indefinitely. If we consider the goal worthwhile, then this task is just a technical challenge with a technical solution. Something akin to Naranjo's "brief fleeting moment of sanity" [induced by taking MDMA] can become our default condition of mental health. Perhaps.
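[To put a number on the combinatorial explosion mentioned a little earlier: a biallelic locus such as 5-HTTLPR admits three genotypes (two long copies, one of each, two short), so n such loci yield 3^n combinations. A minimal sketch:]

    # With n biallelic loci and 3 genotypes per locus (e.g. long/long,
    # long/short, short/short at 5-HTTLPR), the design space is 3**n.
    for n in (2, 10, 20, 40):
        print(f"{n} loci -> {3 ** n:,} genotype combinations")
    # 40 loci already give about 1.2e19 combinations - far beyond exhaustive
    # testing, which is why the answer above pins its hopes on simulation.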
Alas, the rather ill-assorted class of drugs today marketed as antidepressants don't do much to enrich our capacity for empathy or self-insight. But they are a good example of agents that don't have a fast, up-down effect. Rather, they induce a steady improvement of mood, reduced anxiety levels, and enhanced emotional resilience - for at least some of the people who take them. The mood uplift they offer is quite modest: only a small percentage of people ever feel "better than well" on Prozac; and some people even feel worse. Also, the reward is often delayed by as much as several weeks, possibly to allow nerve cell growth in the hippocampus. Right now the drug companies are working on faster-acting antidepressants - with only limited success it has to be said, owing to the taboo on targeting the dopaminergic and opioid systems. But delayed drug-induced reward is actually a long-term therapeutic advantage for any good psychoactive agent because it minimizes the likelihood of uncontrolled dosage-escalation posed by the use of fast-acting euphoriants. Of course, conventional wisdom is that anti-depressants exist only to help people who are diagnosed as clinically depressed; and such drugs aren't of benefit to anyone else. That may be true with most of the older tricyclics at least; and most of their current successors. But there's no reason, in principle, why everyone can't have their moods enriched and uplifted in a controllable way, whether by drugs or gene therapy. Although I can't prove it, I think our descendants will be animated by gradients of well-being beyond the bounds of normal human experience. I stress the controllability here because we don't want genetically susceptible people switching to uncontrolled manic exuberance - though mildly hypomanic states can sometimes be extraordinarily productive. In short, we need a vastly enriched conception of mental health. At present, if a drug company came up with the ideal pleasure drug - a real blockbuster wonderdrug designed to enrich the lives of everyone who took it - then it simply wouldn't get a product license. Absurdly, there's no way it could be legally marketed. [Nor could, say, an authentic intelligence-booster or "smart drug" be marketed either.] This is because to get a product license for an investigational drug you have to indicate some officially recognized disease or disorder that the drug potentially alleviates or cures. Just helping the dull-witted and malaise-ridden (as we all are, by the lights of posterity) doesn't count. Crazy. NF: You mention nanotechnology as part of the paradise-engineered future but don't say much about its role. How do you see this? DP: The role of nanotechnology in keeping us all physically healthy and wealthy is covered in admirable depth elsewhere. So I just focus on one particular application of nanotechnology, albeit (I think) a morally important application. If the abolitionist project is ever to be completed, then it must extend not just to humans but to the rest of the living world. It's easy to dismiss nonhuman suffering as comparatively trivial in its intensity compared to our own. I hope the skeptics are right; but all the indications are this isn't the case. The more primitive the feeling or emotion, the more intense it typically feels. The biological substrates of suffering are disturbingly constant throughout the vertebrate line. I think a lot of the animals we abuse and kill are functionally and morally akin to human infants and toddlers.
Anyhow, in this context, if one believes that it is ethically desirable to eliminate suffering from the world, then nanotechnology will be necessary to penetrate the recesses of the oceans and the furthest reaches of the animal kingdom. If we do ever want to redesign the global eco-system and rewrite the vertebrate genome, then this is the kind of mega-project that could only be done with nanotech. At any rate, it will be within our computational resources to do so. I hope we'll take our godlike powers seriously and use them ethically. NF: Lots of people will think this is a bad idea, even if it can be achieved. You seem to cover every conceivable objection in the manifesto and in your critique of Brave New World but can you speak briefly to the likely main objection: that personalities that are not forged out of difficulty will be lacking and somehow de-humanized? DP: I think the opposite is true. Other things being equal, enhancing our enjoyment of life is character-building. This sounds a bit odd, even paradoxical, but one of the nastier aspects of melancholic depression and its common sub-clinical variants today is the syndrome of so-called learned helplessness and behavioral despair. Milder forms of this syndrome are endemic to the population at large. People prone to depression give up too easily. They've only a limited capacity to anticipate reward or experience happiness. They aren't easily motivated. By contrast, the new mood-enriching technologies will cure weakness of will. They are potentially empowering, even liberating. For the more one loves life, the more motivated one is to carry out one's goals and life projects. When feeling happy and energized, one takes on challenges that would daunt frailer spirits. Ideally, one will be able to use biotech to transform oneself into the sort of person one wants to be rather than passively accepting "I can't help it, I was born like that." It's suffering that dehumanizes and demoralizes us, not well-being. Suffering is not ennobling or character-building; it's ultimately just nasty - and potentially functionally redundant. Rationalizing its existence makes suffering (sometimes) more bearable; but that's all. "That which does not crush me makes me stronger," said Nietzsche; yes, but that's the trouble: all too many people today do have their spirits crushed by the cruelties of Darwinian life. But not for much longer, I think. A final point. Uniform bliss isn't any more motivating than uniform despair. To enjoy a high-functioning and intellectually discerning bliss, we'll need to explore gradients of well-being. In the language of the information-theoretic paradigm, what matters to the way we function is not our absolute location on the pleasure-pain axis, but that we are informationally sensitive to fitness-relevant changes in our internal and external environment. Thus whereas today many people are driven by gradients of discontent, in the future I think we'll be animated by gradients of bliss. Some days will be sublime. Others will be merely wonderful. But critically, there will be one particular texture (what it feels like) of consciousness that will be missing from our lives; and that will be the texture of nastiness. I think the absence of Darwinian suffering will be the foundation of any future civilization.
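[A minimal sketch of the information-theoretic point above: an agent that chooses by differences in hedonic value behaves identically when every value is shifted up by a constant - the same gradients on a higher floor. The options and numbers are invented.]

    # Choice depends only on hedonic *gradients*, not on absolute level:
    # adding a constant "bliss offset" to every option leaves behaviour intact.
    options = {"forage": 2.0, "rest": 1.0, "socialise": 3.0}  # invented values

    def choose(hedonic_values):
        """Pick the option with the highest hedonic value."""
        return max(hedonic_values, key=hedonic_values.get)

    darwinian = options
    blissful = {k: v + 100.0 for k, v in options.items()}  # same gradients, higher floor

    assert choose(darwinian) == choose(blissful) == "socialise"

* * * [10]The Hedonistic Imperative [11]HOME [12]Future Opioids [13]Utopian Surgery?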
[14]Wirehead Hedonism [15]The Good Drug Guide [16]Paradise Engineering [17]Nanotechnology Hotlinks [18]MDMA: Utopian Pharmacology [19]Critique of Huxley's Brave New World [20]David Pearce interviewed by Jon Despres [21]Sintiéndose maravillosamente, por siempre E-mail Dave [22]dave at hedweb.com References 1. http://www.life-enhancement.com/neofiles/ 2. http://www.hedweb.com/hedethic/hedonist.htm 3. http://www.scienceblog.com/community/article588.html 4. http://www.columbia.edu/~jh299/DA.html 5. http://www.hedweb.com/welcome.htm 6. http://www.hedweb.com/huxley/bnw.htm 7. http://www.hedweb.com/ecstasy/index.html 8. http://www.hedweb.com/nihilism/nihilfil.htm 9. http://rae.tnir.org/cryonics/breakthrough/2.html 10. http://www.hedweb.com/hedab.htm 11. http://www.hedweb.com/index.html 12. http://opioids.com/ 13. http://www.general-anaesthesia.com/ 14. http://www.wireheading.com/ 15. http://www.biopsychiatry.com/ 16. http://paradise-engineering.com/ 17. http://www.nanotechnologist.com/ 18. http://www.mdma.net/index.html 19. http://www.huxley.net/ 20. http://www.hedweb.com/hedethic/interview.htm 21. http://www.hedweb.com/hedethic/es.html 22. mailto:dave at hedweb.com From checker at panix.com Sun Dec 18 18:57:15 2005 From: checker at panix.com (Premise Checker) Date: Sun, 18 Dec 2005 13:57:15 -0500 (EST) Subject: [Paleopsych] NYTBR: 'The Republican War on Science,' by Chris Mooney Message-ID: 'The Republican War on Science,' by Chris Mooney http://select.nytimes.com/preview/2005/12/18/books/1124990016827.html [I loved Horgan's "Rational Mysticism," a book that goes into drug-induced religious states a lot. I recommend it to many and recommend to all his "The End of Science," which doesn't pound on that theme so much as interviews many well-known scientists about their work. It could well be that we are nearing wit's end with our three-pound brains. They are not big enough to get the theoretical part of a successful string theory, or so it seems. And what's happened that's as important in the fifty years since the deciphering of DNA and the discovery of the Big Bang? [The book Horgan reviews may very well be accurate. As long as the state is big enough to do good, it can also do harm. Even still, the depredations that corporations do amount to far less than they are taxed. [The big scandal is not even noticed, no more than fish notice water. This is bogus and biased *social* science, the spurious dogmas and research that underlie the Human Betterment Industry (health-education-welfare, which is mostly paid for by government in this country and in most. Health is 50% paid for by the government and education 80-90%. Hard to say about welfare, since people do provide for their own. Never seen any data on it). The government part of the HBI has been greater than private manufacturing for at least thirty years! And the HBI has a huge control over the upbringing of youth. While manufacturers would like to have this control, they don't need to, since their products are purchased voluntarily. [And yet, what is actually gotten from all this spending is hard to distinguish from zero. Economists can't find any relationship between education spending and education, nor between health care spending and health. And the dependency caused by welfare is so well-known that even liberals worry about it.
[Yes, that the Republicans are catching up with the fine art of raiding the public fisc that had been such a specialty of the Democrats is a major change from the previous division between the party of the taxpayers and the party of the taxeaters. "The Republican War on Science" is on the front page of the NYT Book Review today. I wish some other issue of the Review would feature The Democratic War on Social Science.] THE REPUBLICAN WAR ON SCIENCE By Chris Mooney. 342 pp. Basic Books. $24.95. Review by JOHN HORGAN Last spring, a magazine asked me to look into a whistleblower case involving a United States Fish and Wildlife Service biologist named Andy Eller. Eller, a veteran of 18 years with the service, was fired after he publicly charged it with failing to protect the Florida panther from voracious development. One of the first species listed under the Endangered Species Act, the panther haunts southwest Florida's forests, which builders are transforming into gated golf communities. After several weeks of interviews, I wrote an article that called the service's treatment of Eller "shameful" - and emblematic of the Bush administration's treatment of scientists who interfere with its probusiness agenda. My editor complained that the piece was too "one-sided"; I needed to show more sympathy to Eller's superiors in the Wildlife Service and to the Bush administration. I knew what the editor meant: the story I had written could be dismissed as just another anti-Bush diatribe; it would be more persuasive if it appeared more balanced. On the other hand, the reality was one-sided, to a startling degree. An ardent conservationist, Eller had dreamed of working for the Wildlife Service since his youth; he collected first editions of environmental classics like Rachel Carson's "Silent Spring." The officials who fired him based their denial that the panther is threatened in part on data provided by a former state wildlife scientist who had since become a consultant for developers seeking to bulldoze panther habitat. The officials were clearly acting in the spirit of their overseer, Secretary of the Interior Gale Norton, a property-rights advocate who has questioned the constitutionality of aspects of the Endangered Species Act. This episode makes me more sympathetic than I might otherwise have been to "The Republican War on Science" by the journalist Chris Mooney. As the title indicates, Mooney's book is a diatribe, from start to finish. The prose is often clunky and clichéd, and it suffers from smug, preaching-to-the-choir self-righteousness. But Mooney deserves a hearing in spite of these flaws, because he addresses a vitally important topic and gets it basically right. Mooney charges George Bush and other conservative Republicans with "science abuse," which he defines as "any attempt to inappropriately undermine, alter or otherwise interfere with the scientific process, or scientific conclusions, for political or ideological reasons." Science abuse is not an exclusively right-wing sin, Mooney acknowledges. He condemns Greenpeace for exaggerating the risks of genetically modified "Frankenfoods," animal-rights groups for dismissing the medical benefits of research on animals and John Kerry for overstating the potential of stem cells during his presidential run. In "politicized fights involving science, it is rare to find liberals entirely innocent of abuses," Mooney asserts. "But they are almost never as guilty as the Right."
By "the Right," Mooney means the powerful alliance of conservative Christians - who seek to influence policies on abortion, stem cells, sexual conduct and the teaching of evolution - and advocates of free enterprise who attempt to minimize regulations that cut into corporate profits. The champion of both groups - and the chief villain of Mooney's book - is President Bush, whom Mooney accuses of having "politicized science to an unprecedented degree." Some might quibble with "unprecedented." When I starting covering science in the early 1980's, Ronald Reagan was pushing for a space-based defense against nuclear missiles, called Star Wars, that a chorus of scientists dismissed as technically unfeasible. Reagan stalled on acknowledging the dangers of acid rain and the buildup of ozone-destroying chlorofluorocarbons in the atmosphere. Warming the hearts of his religious fans, Reagan voiced doubts about the theory of evolution, and he urged C. Everett Koop, the surgeon general, to investigate whether abortion harms women physically and emotionally. (Koop, though an ardent opponent of abortion, refused.) Mooney notes this history but argues that the current administration has imposed its will on scientific debates in a more systematic fashion, and he cites a slew of cases - including the Florida panther affair - to back up his claim. One simple strategy involves filling federal positions on the basis of ideology rather than genuine expertise. Last year, the White House expelled the eminent cell biologist Elizabeth Blackburn, a proponent of embryonic stem-cell research, from the President's Council on Bioethics and installed a political scientist who had once declared, "Every embryo for research is someone's blood relative." And in 2002 the administration appointed the Kentucky gynecologist and obstetrician W. David Hager to the Reproductive Health Drugs Advisory Committee of the Food and Drug Administration. Hager has advocated treating premenstrual syndrome with Bible readings and has denounced the birth control pill. In addition to these widely reported incidents, Mooney divulges others of which I was unaware. In 2003 the World Health Organization and Food and Agricultural Organization (W.H.O./F.A.O.), citing concerns about rising levels of obesity-related disease, released a report that recommended limits on the intake of fat and sugar. The recommendations reflected the consensus of an international coalition of experts. The Sugar Association, the Grocery Manufacturers of America and other food industry groups attacked the recommendations. William R. Steiger, an official in the Department of Health and Human Services, then wrote to W.H.O.'s director general to complain about the dietary report. Echoing the criticism of the industry groups, Steiger questioned the W.H.O. report's linkage of obesity and other disorders to foods containing high levels of sugar and fat, and he suggested that the report should have placed more emphasis on "personal responsibility." Steiger later informed the W.H.O. that henceforth only scientists approved by his office would be allowed to serve on the organization's committees. In similar fashion, the Bush administration has sought to control the debate over climate change, biodiversity, contraception, drug abuse, air and water pollution, missile defense and other issues that bear on the welfare of humans and the rest of nature. 
What galls Mooney most is that administration officials and other conservative Republicans claim that they are guided by reason and respect for "sound science," whereas their opponents are ideologues peddling "junk science." In the most original section of his book, Mooney credits "Big Tobacco" with inventing and refining this Orwellian tactic. After the surgeon general's office released its landmark 1964 report linking smoking to cancer and other diseases, the tobacco industry sought to discredit the report with its own experts and studies. "Doubt is our product," declared a 1969 Brown & Williamson memo spelling out the strategy, "since it is the best means of competing with the 'body of fact' that exists in the mind of the general public." After the E.P.A. released a report on the dangers of secondhand smoke in 1992, the Tobacco Institute berated the agency for preferring "political correctness over sound science." Within a year Philip Morris helped to create a group called The Advancement of Sound Science Coalition (Tassc), which challenged the risks not only of secondhand smoke but also of pesticides, dioxin and other industrial chemicals. (The executive director of Tassc in the late 1990's was Steven Milloy, who now "debunks" global warming and other environmental threats in the Foxnews.com column "Junk Science.") Newt Gingrich and other Republicans soon started invoking "sound science" and "junk science" while criticizing government regulations. A veteran tobacco lobbyist also played a role in the Data Quality Act, which Mooney calls "a science abuser's dream come true." Jim Tozzi, who served in the Office of Management and Budget before becoming a consultant for Philip Morris and other companies, helped draft the legislation and slip it into a massive appropriations bill signed into law in 2000, late in the Clinton administration. The act, which raises the standard for scientific evidence justifying federal regulations, is designed to induce what one critic calls "paralysis by analysis." While the law does not exclusively serve business interests (for example, Andy Eller successfully used it to challenge the Fish and Wildlife Service's policies on panther habitat), they have been its main beneficiaries. Already it has been employed by loggers, herbicide makers, manufacturers of asbestos brakes and other companies to challenge unwelcome regulations. Mooney, who grew up in New Orleans, seems particularly incensed when he addresses the issue of global warming. He notes that Bush officials have repeatedly ignored or altered reports by the National Academy of Sciences, the E.P.A. and other groups tying global warming to fossil fuel emissions. Mooney devotes nearly a whole chapter to denouncing Senator James Inhofe of Oklahoma, a Republican and chairman of the Committee on Environment and Public Works, who once said human-induced global warming might be "the greatest hoax ever perpetrated on the American people." Republicans' "refusal to consider mainstream scientific opinion fuels an atmosphere of policy gridlock that could cost our children dearly," declares Mooney, who finished his book before Hurricane Katrina. I can only imagine how he feels now. Mooney implicates the news media in this crisis. Too often, he says, reporters covering scientific debates give fringe views equal weight in a misguided attempt to achieve "balance." To back up this claim, Mooney cites a study of coverage of global warming in four major newspapers, including this one, from 1988 to 2002.
The study concluded that more than 50 percent of the stories gave "roughly equal attention" to both sides of the debate, even though by 1995 most climatologists accepted human-induced global warming as highly probable. Mooney notes that one prominent doubter and sometime Bush administration adviser on climate change, the M.I.T. meteorologist Richard Lindzen, is a smoker who has also questioned the evidence linking smoking and lung cancer. Mooney's critique has understandably annoyed some of his colleagues. In a review in The Washington Post, the journalist Keay Davidson faults Mooney for not acknowledging how hard it can be to distinguish good science from bad. Philosophers call this the "demarcation problem." Demarcation can indeed be difficult, especially if all the scientists involved are trying in good faith to get at the truth, and Mooney does occasionally imply that demarcation consists simply of checking scientists' party affiliations. But in many of the cases that he examines, demarcation is easy, because one side has an a priori commitment to something other than the truth - God or money, to put it bluntly. Conservative complaints about federally financed "junk science" may ultimately prove self-fulfilling. Government scientists - and those who receive federal funds - may toe the party line to avoid being punished like the whistleblower Andy Eller (who was rehired last June after he sued for wrongful termination). Increasingly, competent scientists will avoid public service, degrading the quality of advice to policy makers and the public still further. Together, these trends threaten "not just our public health and the environment," Mooney warns, "but the very integrity of American democracy, which relies heavily on scientific and technical expertise to function." If this assessment sounds one-sided, so is the reality that it describes. John Horgan is director of the Center for Science Writings at the Stevens Institute of Technology. His latest book is "Rational Mysticism." From shovland at mindspring.com Mon Dec 19 04:26:15 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 18 Dec 2005 20:26:15 -0800 Subject: [Paleopsych] Questions Message-ID: George Bush goes to a primary school to talk to the kids to get a little PR. After his talk he offers question time. One little boy puts up his hand and George asks him his name. "Stanley," responds the little boy. "And what is your question, Stanley?" "I have 3 questions. First, why did the USA invade Iraq without the support of the UN? Second, why are you President when Al Gore got more votes? And third, whatever happened to Osama Bin Laden?" Just then, the bell rings for recess. George Bush informs the kiddies that they will continue after recess. When they resume George says, "OK, where were we? Oh, that's right: question time. Who has a question?" Another little boy puts up his hand. George points him out and asks him his name. "Steve," he responds. "And what is your question, Steve?" "Actually, I have 5 questions. First, why did the USA invade Iraq without the support of the UN? Second, why are you President when Al Gore got more votes? Third, whatever happened to Osama Bin Laden? Fourth, why did the recess bell go off 20 minutes early? And fifth, what happened to Stanley?"
URL: From checker at panix.com Sat Dec 17 16:55:11 2005 From: checker at panix.com (Premise Checker) Date: Sat, 17 Dec 2005 11:55:11 -0500 (EST) Subject: [Paleopsych] David Pearce interviwed by RU Sirius Message-ID: David Pearce interviwed by RU Sirius http://www.hedweb.com/hedethic/interview.html First published in the [1]NeoFiles Date: 16 December 2003 Feeling Groovy, Forever David Pearce in Conversation with R.U. Sirius This manifesto outlines a strategy to eradicate suffering in all sentient life. So begins David Pearces Web-based manifesto [2]The Hedonistic Imperative. Pearce believes that through such technological manipulations as genetic engineering, better drugs, and precise stimulation of various localities in the brain, human beings (just for starters) can live in a sort-of paradise in which all unpleasant states of consciousness have been banished to the old Darwinian Era. These new-found paradisical brain-states will exist within the context of an advanced, nanotechnologized society in which oppressive external conditions have also been eliminated. For Pearce, the great shift into a hedonic society will come about by genetic intervention: Gene therapy will be targeted both on somatic cells and, with even greater forethought, the [3]germ-line. If cunningly applied, a combination of the cellular enlargement of the meso-limbic [4]dopamine system, selectively enhanced metabolic function of key intra-cellular sub-types of opioidergic and serotonergic pathway, and the disablement of several countervailing inhibitory feedback processes will put in place the biomolecular architecture for a major transition in human evolution His website [5]HEDWEB includes the substantial Hedonistic Imperative treatise, as well as a [6]marvelous critique of Huxleys Brave New World, another lengthy discussion of [7]MDMA (ecstasy), a philosophical essay that tries to answer the question [8]Why does anything exist?, and a section advocating Animal Liberation, or at least something akin to a global welfare state for higher non-human lifeforms. Pearce lives quietly in Brighton, England. He communicates masterfully through his website and is unaccustomed to being interviewed. But I prevailed upon him in a transatlantic phone conversation. NEOFILES: While initial steps towards your Hedonistic Imperative seem to involve improved drugs and wireheading (stimulating pleasure centers in the brain), what you are really talking about is biological manipulations that will produce humans who experience a variety of positive states ranging from high functioning well-being to serene bliss, and who dont experience negative states - or at least only the functional analogs of negative states that lack the raw feel of mental pain as we understand it today. Can you say a bit about the technology behind this idea? DAVID PEARCE: Well, there are technical obstacles and ideological obstacles to the abolitionist project. But if one deals first with the technical challenges, I think there are essentially three options. One is wireheading. Wireheading is (probably) a dead-end. But it is illuminating because the procedure shows that pleasure has no physiological tolerance. That is to say, its just as exhilarating having ones pleasure centers stimulated 24 hours after starting a binge as it was at the beginning. 
NF: in contrast to recreational drugs where euphoriants and even the best hallucinogens have diminishing returns DP: Yes, the high is typically followed by the low, or at least by severely diminished rewards as the negative feedback mechanisms of the brain kick in. Something similar occurs with natural rewards such as food, drink and sex. But with wireheading this doesnt happen. Pleasure, and perhaps pure pleasure alone, shows no tolerance. Of course, our image of wireheading itself is dreadful. People confuse it with torture or the coercive psychiatry of One Flew Over The Cuckoos Nest. And a whole society based on wireheading wouldnt be sustainable, in its crude forms at least. No one would want to reproduce and raise children. So, secondly, theres the option of designing better drugs. The prospect of lifelong drug-induced happiness strikes many people as unappealing. Drug-induced happiness sounds shallow, amoral and one-dimensional. But the pleasure drugs of the future will be far richer in their effects than, say, soma in Huxleys notorious Brave New World. At present were missing out on some incredibly beautiful states of consciousness because of the legacy of our brutish Darwinian past and the bioconservative ideologies that sustain it. Even so, I think drugs are only a stopgap. In the long run, if were morally serious about creating a cruelty-free world, were going to use the third option, genetic engineering. Right now, were on the brink of a reproductive revolution, the era of designer babies if you will, where responsible parents will choose the genetic makeup of their kids. Initially, were only going to tinker with the genome. Eventually, I think were going to rewrite it altogether. And to be deliberately simplistic: imagine if you could choose the average lifetime mood level of your future offspring on a genetic dial - with number 1 on the dial representing modest well-being and number 10 representing sublime bliss. What setting would you choose for your child? Most prospective parents, I think, will choose settings at the higher end of the scale not sublime bliss perhaps, but certainly genotypes encoding a predisposition to lifelong happiness. We may perhaps want many different things for our kids (high intelligence, good looks, success), but their happiness is at least one of these criteria; and ultimately, I think, its the most important. The good news here is that in future, such (un)happiness neednt be left to a cruel Darwinian genetic lottery or Fate. So its worth stressing that progress towards the abolition of suffering doesnt entail the global adoption of an ideology of paradise engineering - or anything so grandiose and utopian as the abolitionist project I advocate. Initially at least, progress to a kinder world merely entails parents taking genetic decisions about whats best for their kids... Of course, this revolution in the technology of reproductive medicine is still some way off. Today, even early adopters arent doing anything much more ambitious than choosing their childs gender. But in maybe three or four decades or so, and possibly substantially sooner, well be choosing such traits as the average hedonic set point of our children. Over time, I think allelic combinations [suites of variant copies of mission-critical genes] that leave their bearers pre-disposed to unpleasant states of consciousness unpleasant states that were genetically adaptive in our ancestral environment - will be weeded out of the gene pool. 
For a very different kind of selection pressure is at work when evolution is no longer blind and random, i.e. when rational agents pre-design the genetic make-up of their future offspring in anticipation of its likely effects on their kids. In that sense, we're heading for a Post-Darwinian transition - ultimately, I believe, to some form of paradise-engineering, but perhaps to something else altogether. NF: In very rough terms, what we're talking about is juicing up the dopaminergic and serotonergic systems, among a bunch of other neurochemical tweaks, in very precise ways, at the level of the genes, once we fully understand how genes control these things. DP: Yes. The neural basis of our so-called basic moods and emotions is simpler than so-called higher cognitive functions. But undeniably, this neural basis is still fiendishly complicated, the simplicity of wireheading notwithstanding. For instance, the mesolimbic dopamine system may not be, as we've sometimes supposed, the final common pathway of pleasure in the brain: dopamine apparently mediates wanting (i.e. incentive-motivation) as much as liking, which is signaled by activation of the mu opioid receptors. But if we focus here on the simple monoamines, an obvious target for intervention is indeed the mesolimbic dopamine system. One of the most common objections to the idea of abolishing suffering - ignoring here the prospect of full-blown paradise-engineering - is that without the spur of discontent we'd soon become idle and even bored. If we were all happy, what would we do all day? But enhanced dopamine function is associated not just with euphoria, but with heightened motivation; a deeper sense of meaningfulness, significance and purpose; and an increased sensitivity to a greater range of rewards. So one possible option for paradise engineering is to focus on enriching the dopaminergic system to promote (a genetic predisposition to) lifestyles of high achievement and intellectual productivity. That's one option at least. Another sort of predisposition is to pursue a lifetime of introspection, meditation and blissful tranquility. If I seem to dwell unduly on ways of enriching dopamine function, that's because exploring its amplification is a useful corrective to a widespread misconception, i.e. that happiness inevitably leads to stagnation. The critical point, I think, is that to be blissful isn't the same as being blissed out. NF: What about the possibility that madness could come from these amplified states? DP: Well, you can't just unselectively pump up the dopaminergic system and hope to induce states of high-functioning well-being. You might just induce chronic psychosis instead. Genetically enriching our mental health demands a deeper understanding of the workings of the brain than we have today. The era of mature genomic medicine is still decades away. But consider even something as simple and monogenic as the association between one variant of the [9]dopamine DRD4 receptor allele and an unusually optimistic, novelty-seeking temperament. Other things being equal, this trait may be seen as positive. Most prospective parents, if given the choice, would probably opt for an allele predisposing to such a trait in preference to, say, any genotype predisposing to a depressive, anxiety-ridden temperament for their kids.
To take another example: prospective parents in future will probably opt for two copies of the longer version of the allele of the serotonin transporter gene (5-HTTLPR), whose shorter version is associated with anxiety disorders and neuroticism. I stress that these are just toy examples. More sophisticated versions of genetic choices such as the above are likely to be commonly available later this century and beyond. Such choices will presumably be assisted by computational software with an ultra-friendly user interface, so we don't all have to become molecular psychiatrists and can concentrate on making high-level choices of trait instead. One objection springs to mind here. Mood and personality are influenced by a multitude of different genes, not to mention the vagaries of the environment. So it might seem that all but the simplest interventions, involving only a handful of alleles, will lead to an impossible combinatorial explosion of possibilities - and unanticipated consequences to match. This may indeed be the case. The very expression "designer babies" conjures up a dystopian nightmare, not paradise-engineering. However, mature quantum computing will allow us (in a few decades?) to perform super-sophisticated modeling and fabulously complex simulations which are (many) orders of magnitude more powerful than anything feasible today. I think the pessimists will be confounded. I could be wrong. We shall see. NF: Returning to the intermediary stage - drug development - you seem to find the greatest promise in the development of anti-depressants and in MDMA. Care to explain this? DP: Yes. MDMA (Ecstasy) is interesting not least because of the way its use challenges our notion that drug-taking must be inherently selfish, i.e. hedonistic in the baser sense of the term. At its best, the MDMA experience shows that drug-induced well-being can be profoundly loving, insightful and empathetic. Unfortunately, MDMA itself is potentially neurotoxic to the serotonergic axons - even at non-heroic dosages. Although the claims of the drug warriors about its dangers are clearly overblown, there's no denying that MDMA isn't the sort of agent you can use regularly on a long-term basis in the way you would take a so-called anti-depressant or other psychoactive prescription drug. Yet here I think lies the crux. The mainstream medical conclusion drawn from MDMA's (probable) human toxicity is that MDMA - and other insight and empathy drugs used by the scientific counterculture - should be banned, or at least their use discouraged. But there's a better option: we should be systematically researching ways to design safe and sustainable entactogens/empathogens. Critically, their neurotoxicity can be dissociated from their therapeutic effect. And once the neurological signature and precise molecular mechanisms underlying both the magic and the ugly post-E serotonin dip are worked out, there's no reason why states of blissful empathy can't be sustained indefinitely. If we consider the goal worthwhile, then this task is just a technical challenge with a technical solution. Something akin to Naranjo's "brief fleeting moment of sanity" [induced by taking MDMA] can become our default condition of mental health. Perhaps. Alas, the rather ill-assorted class of drugs today marketed as antidepressants don't do much to enrich our capacity for empathy or self-insight. But they are a good example of agents that don't have a fast, up-down effect.
Rather, they induce a steady improvement of mood, reduced anxiety levels, and enhanced emotional resilience - for at least some of the people who take them. The mood uplift they offer is quite modest: only a small percentage of people ever feel better than well on Prozac; and some people even feel worse. Also, the reward is often delayed by as much as several weeks, possibly to allow nerve cell growth in the hippocampus. Right now the drug companies are working on faster-acting antidepressants - with only limited success, it has to be said, owing to the taboo on targeting the dopaminergic and opioid systems. But delayed drug-induced reward is actually a long-term therapeutic advantage for any good psychoactive agent, because it minimizes the likelihood of the uncontrolled dosage-escalation posed by the use of fast-acting euphoriants. Of course, conventional wisdom is that anti-depressants exist only to help people who are diagnosed as clinically depressed, and that such drugs aren't of benefit to anyone else. That may be true of most of the older tricyclics at least, and most of their current successors. But there's no reason, in principle, why everyone can't have their moods enriched and uplifted in a controllable way, whether by drugs or gene therapy. Although I can't prove it, I think our descendants will be animated by gradients of well-being beyond the bounds of normal human experience. I stress the controllability here because we don't want genetically susceptible people switching to uncontrolled manic exuberance - though mildly hypomanic states can sometimes be extraordinarily productive. In short, we need a vastly enriched conception of mental health. At present, if a drug company came up with the ideal pleasure drug - a real blockbuster wonderdrug designed to enrich the lives of everyone who took it - then it simply wouldn't get a product license. Absurdly, there's no way it could be legally marketed. [Nor could, say, an authentic intelligence-booster or smart drug be marketed either.] This is because to get a product license for an investigational drug you have to indicate some officially recognized disease or disorder that the drug potentially alleviates or cures. Just helping the dull-witted and malaise-ridden (as we all are, by the lights of posterity) doesn't count. Crazy. NF: You mention nanotechnology as part of the paradise-engineered future but don't say much about its role. How do you see this? DP: The role of nanotechnology in keeping us all physically healthy and wealthy is covered in admirable depth elsewhere. So I just focus on one particular application of nanotechnology, albeit (I think) a morally important application. If the abolitionist project is ever to be completed, then it must extend not just to humans but to the rest of the living world. It's easy to dismiss nonhuman suffering as comparatively trivial in its intensity compared to our own. I hope the skeptics are right; but all the indications are this isn't the case. The more primitive the feeling or emotion, the more intense it typically feels. The biological substrates of suffering are disturbingly constant throughout the vertebrate line. I think a lot of the animals we abuse and kill are functionally and morally akin to human infants and toddlers. Anyhow, in this context, if one believes that it is ethically desirable to eliminate suffering from the world, then nanotechnology will be necessary to penetrate the recesses of the oceans and the furthest reaches of the animal kingdom.
If we do ever want to redesign the global eco-system and rewrite the vertebrate genome, then this is the kind of mega-project that could only be done with nanotech. At any rate, it will be within our computational resources to do so. I hope we'll take our godlike powers seriously and use them ethically. NF: Lots of people will think this is a bad idea, even if it can be achieved. You seem to cover every conceivable objection in the manifesto and in your critique of Brave New World, but can you speak briefly to the likely main objection: that personalities that are not forged out of difficulty will be lacking and somehow de-humanized? DP: I think the opposite is true. Other things being equal, enhancing our enjoyment of life is character-building. This sounds a bit odd, even paradoxical, but one of the nastier aspects of melancholic depression and its common sub-clinical variants today is the syndrome of so-called learned helplessness and behavioral despair. Milder forms of this syndrome are endemic to the population at large. People prone to depression give up too easily. They've only a limited capacity to anticipate reward or experience happiness. They aren't easily motivated. By contrast, the new mood-enriching technologies will cure weakness of will. They are potentially empowering, even liberating. For the more one loves life, the more motivated one is to carry out one's goals and life projects. When feeling happy and energized, one takes on challenges that would daunt frailer spirits. Ideally, one will be able to use biotech to transform oneself into the sort of person one wants to be, rather than passively accepting "I can't help it, I was born like that." It's suffering that dehumanizes and demoralizes us, not well-being. Suffering is not ennobling or character-building; it's ultimately just nasty - and potentially functionally redundant. Rationalizing its existence makes suffering (sometimes) more bearable; but that's all. "That which does not crush me makes me stronger," said Nietzsche; yes, but that's the trouble: all too many people today do have their spirits crushed by the cruelties of Darwinian life. But not for much longer, I think. A final point. Uniform bliss isn't any more motivating than uniform despair. To enjoy a high-functioning and intellectually discerning bliss, we'll need to explore gradients of well-being. In the language of the information-theoretic paradigm, what matters to the way we function is not our absolute location on the pleasure-pain axis, but that we are informationally sensitive to fitness-relevant changes in our internal and external environment. Thus whereas today many people are driven by gradients of discontent, in the future I think we'll be animated by gradients of bliss. Some days will be sublime. Others will be merely wonderful. But critically, there will be one particular texture (what it feels like) of consciousness that will be missing from our lives; and that will be the texture of nastiness. I think the absence of Darwinian suffering will be the foundation of any future civilization. * * * [10]The Hedonistic Imperative [11]HOME [12]Future Opioids [13]Utopian Surgery? [14]Wirehead Hedonism [15]The Good Drug Guide [16]Paradise Engineering [17]Nanotechnology Hotlinks [18]MDMA: Utopian Pharmacology [19]Critique of Huxley's Brave New World [20]David Pearce interviewed by Jon Despres [21]Sintiéndose maravillosamente, por siempre E-mail Dave [22]dave at hedweb.com References 1. http://www.life-enhancement.com/neofiles/ 2.
http://www.hedweb.com/hedethic/hedonist.htm 3. http://www.scienceblog.com/community/article588.html 4. http://www.columbia.edu/~jh299/DA.html 5. http://www.hedweb.com/welcome.htm 6. http://www.hedweb.com/huxley/bnw.htm 7. http://www.hedweb.com/ecstasy/index.html 8. http://www.hedweb.com/nihilism/nihilfil.htm 9. http://rae.tnir.org/cryonics/breakthrough/2.html 10. http://www.hedweb.com/hedab.htm 11. http://www.hedweb.com/index.html 12. http://opioids.com/ 13. http://www.general-anaesthesia.com/ 14. http://www.wireheading.com/ 15. http://www.biopsychiatry.com/ 16. http://paradise-engineering.com/ 17. http://www.nanotechnologist.com/ 18. http://www.mdma.net/index.html 19. http://www.huxley.net/ 20. http://www.hedweb.com/hedethic/interview.htm 21. http://www.hedweb.com/hedethic/es.html 22. mailto:dave at hedweb.com From shovland at mindspring.com Mon Dec 19 04:56:57 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sun, 18 Dec 2005 20:56:57 -0800 Subject: [Paleopsych] Gondola Repair in Venice Message-ID: A wrong turn, and you see something you might not otherwise see... Steve Hovland www.stevehovland.net From HowlBloom at aol.com Wed Dec 21 06:18:11 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Wed, 21 Dec 2005 01:18:11 EST Subject: [Paleopsych] Fwd: Lies & intelligent design Message-ID: <241.414c75e.30da4da3@aol.com> In a message dated 12/20/2005 10:08:39 PM Eastern Standard Time, kendulf at shaw.ca writes: Dear Friends, DER SPIEGEL in today's account of the judicial defeat of "Intelligent Design" wrote as follows: "Richter John Jones erklärte in der Urteilsbegründung, es sei erstaunlich, dass mehrere Mitglieder des Schulrats stolz ihren Glauben in der Öffentlichkeit verkündeten, sich aber nicht scheuten, zu lügen". In translation: "Judge John Jones declared in the grounds for his judgment that it is astonishing that several members of the school board proudly proclaimed their faith in public, but did not hesitate to lie." Astonishing and wonderful of the Judge! The very essence of science is disinterested integrity, and that has been lacking in Intelligent Design and its various mutants. Cheers, Val Geist ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Advanced Technology Working Group, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change; executive editor -- New Paradigm book series.
For a peek into the ultimate cross-disciplinary field, Omnology, see http://bigbangtango.net/website/omnology.html For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net From checker at panix.com Thu Dec 22 16:01:17 2005 From: checker at panix.com (Premise Checker) Date: Thu, 22 Dec 2005 11:01:17 -0500 (EST) Subject: [Paleopsych] New Scientist: Instant Expert: Christmas Science Message-ID: Instant Expert: Christmas Science http://www.newscientist.com/popuparticle.ns?id=in122 [All articles appended. This is going to all kinds of people, Christian or not, for its scientific interest. Enjoy!] Special Reports On Key Topics In Science And Technology Ever wondered how they celebrate Christmas at the [1]South Pole, pondered the psychology behind [2]Christmas card lists and [3]Christmas dinner, or wondered why an average of 80,000 Brits end up [4]hospitalised every holiday season? Here NewScientist.com guides you through the science of all things festive. [5]Partying to excess and [6]stuffing our faces have become synonymous with the Christmas season. Watch out though - eating too much just before bed really can give you [7]bad dreams. Turkey, [8]pudding, [9]festive spices, mulled wine and other [10]exotically-flavoured booze are just some of the food items you're likely to encounter. Hulking great broiler turkeys bear [11]little resemblance to the fleet-footed wild birds they are descended from. Though they taste good, it's important to cook them thoroughly to avoid a bout of festive [12]food poisoning. Christmas is the season to give, but most people look forward to receiving too. However, if you find that after the [13]orgy of pre-Christmas spending, the [14]tree isn't surrounded by a stack of gifts bearing your name, this is how to [15]take action. Targeting a grandparent who has the most indisputable genetic link to you may be the key. Be careful if you receive a gift card, though, [16]as retailers are raking in our cash from unclaimed vouchers. On the other hand, if you've got to buy gifts for people you don't like, we have some tips for [17]bypassing the Christmas spirit that would make Ebenezer [18]Scrooge proud. Ho, ho, ho! Father Christmas is a far more loveable and jolly Christmas icon. But have you ever wondered how - by [19]reindeer power alone - he manages the [20]incredible feat of delivering gifts to 850 million or so households in a single night? Could Santa be using [21]wormholes to travel through space and time? If the big guy ever finds it all too exhausting, he might consider moving to the [22]north pole of another planet. However, while he is still doing a useful job here on Earth, the North American Aerospace Defense Command (NORAD) in Colorado provides an annual online tracking service to [23]map Santa's progress around the world on Christmas eve. For people of a less jolly disposition, such as those afflicted by [24]seasonal affective disorder, Christmas is a depressing time of year.
Some may choose to [25]skip the holidays altogether. Entering into the Christmas spirit might be better for your [26]state of mind though - research suggests just one really fantastic Christmas might ensure that the sight of tinsel and fairy lights gives you a warm, cosy feeling forever. Decorating our homes with tinsel, [27]fake snow, [28]icicles and other sparkly decorations is another longstanding Christmas tradition. Old-fashioned decorations such as [29]holly, ivy and mistletoe may have a sinister side though, despite their beauty, and commercial harvesting of moss is [30]depleting wild populations. More kitsch forms of decoration such as life-size glowing reindeer, illuminated nativity scenes and animatronic [31]snowmen found on porches throughout the US are currently getting a [32]21st-century digital makeover. John Pickrell, 22 December 2005 References 1. http://www.newscientist.com/channel/being-human/christmas/mg16021658.900 2. http://www.newscientist.com/channel/being-human/christmas/mg18024265.900 3. http://www.newscientist.com/channel/being-human/christmas/mg18424793.000 4. http://www.newscientist.com/channel/being-human/christmas/mg16021656.400 5. http://www.newscientist.com/channel/being-human/christmas/mg18825311.500 6. http://www.newscientist.com/channel/being-human/christmas/mg17223225.500 7. http://www.newscientist.com/channel/being-human/christmas/mg16822704.100 8. http://www.newscientist.com/channel/being-human/christmas/mg17022867.100 9. http://www.newscientist.com/channel/being-human/christmas/mg16822704.400 10. http://www.newscientist.com/channel/being-human/christmas/mg18825312.500 11. http://www.newscientist.com/channel/being-human/christmas/mg16021655.600 12. http://www.newscientist.com/channel/being-human/christmas/dn6820 13. http://www.newscientist.com/channel/being-human/christmas/mg14419510.800 14. http://www.newscientist.com/channel/being-human/christmas/mg14419572.300 15. http://www.newscientist.com/channel/being-human/christmas/mg18424791.800 16. http://www.newscientist.com/channel/being-human/christmas/mg18825312.200 17. http://www.newscientist.com/channel/being-human/christmas/mg16822703.900 18. http://www.newscientist.com/channel/being-human/christmas/mg15020365.000 19. http://www.newscientist.com/channel/being-human/christmas/dn4491 20. http://www.newscientist.com/channel/being-human/christmas/mg12416965.500 21. http://www.newscientist.com/channel/being-human/christmas/mg14419573.700 22. http://www.newscientist.com/channel/being-human/christmas/mg16422184.800 23. http://www.newscientist.com/channel/being-human/christmas/mg16422183.500 24. http://www.newscientist.com/channel/being-human/christmas/mg17223225.100 25. http://www.newscientist.com/channel/being-human/christmas/mg15220615.600 26. http://www.newscientist.com/channel/being-human/christmas/mg18024266.000 27. http://www.newscientist.com/channel/being-human/christmas/mg16822703.600 28. http://www.newscientist.com/channel/being-human/christmas/mg18024265.300 29. http://www.newscientist.com/channel/being-human/christmas/mg16422183.400 30. http://www.newscientist.com/channel/being-human/christmas/mg18424792.700 31. http://www.newscientist.com/channel/being-human/christmas/mg16021655.800 32. http://www.newscientist.com/channel/being-human/christmas/mg17623746.600 'tis the season to be jolly in Antarctica * Ian Anderson * 19 December 1998 AT THEIR base camp, the New Zealand scientists like to start celebrating Christmas Day early.
In fact, last Christmas, at Scott Base on the edge of the Ross Sea, Christmas dinner began at 11 pm on Christmas Eve and lasted well into the afternoon of Christmas Day. It must have something to do with the 24-hour daylight at this time of year: with midday and midnight virtually indistinguishable, planning any event takes on a tinge of the surreal. At the New Zealanders' festivities, formal dress is expected - although not the sort seen at the party held last month, when all in attendance, men included, had to wear skirts. No, at the Christmas do, men wear tuxedos with just the women in long dresses. Members of the New Zealand 3 Squadron, the Iroquois helicopter squadron, donned the white jackets and pressed blue trousers of their mess kits. The helicopter pilots, in Antarctica to ferry supplies and people to remote locations, waited on the candlelit tables. The windows were blacked out. The best silverware and the whitest of tablecloths were used. The Christmas tree had to be artificial, for under the Antarctic Treaty no trees can be taken down to Antarctica. "Whether we start early this year is up to the social committee," says Paul "Ham" Hamilton, the cook at Scott Base. "Whatever is decided, I'll be ready. When we do something down here, we go the whole hog." Ham, and assistant cook Chris Bray, are preparing a feast for the 60 or so people who will be on base at Christmas. (Scott Base ordinarily caters for 80 or more during the Antarctic summer, but some take the 7-hour ride on the Hercules back to Christchurch to be with their families.) Ham manages a respectable spread on the day: home-made pâté, soup, hot turkey, ham and pork with all the trimmings, a cold meat collation, at least four salads, steamed pudding and assorted desserts and cheeses, all washed down by good New Zealand wine. "We don't exactly starve down here," says Mike Mahon, a technician about to spend his third Christmas in Antarctica. Ham can even just about guarantee fresh vegetables for the salads. They grow in a hydroponic hothouse, set up on the ice a short walk from the main buildings. Under UV lighting, the plants are kept at a constant temperature of 25 °C, with water supplied by a desalination plant. Christmas dinner is a much bigger event at McMurdo, the American base two kilometres away. Thirty-one cooks, with at least that number of volunteers, will use 10 convection ovens and nine baker's ovens to prepare Christmas dinner for 1200 people. They will eat in four one-hour shifts of 300 people each, beginning at 3 pm on the day. Antarctic cod, a delicacy caught from the sea ice just offshore, will be on the menu, as will more than 140 kilograms of duck and some 200 kilograms of fillet steak, not to mention 180 litres of gravy and over 200 litres of dressing. In the days leading up to Christmas Day, helicopters from both bases will fly in with hampers for the scientists and field assistants who will spend Christmas on the ice. Some of these doughty researchers will manage to cook a chicken in a pressure cooker. Teams with adjacent field camps will get together for the day, and even exchange gifts - one member usually dresses up as Santa Claus. There is music, and back at base carollers from the Chapel of the Snows, a Christian church at McMurdo, will visit both bases. In the field someone will have a mouth organ or a squeezebox and a songbook of carols. There is one activity held at Scott Base on Christmases past that is unlikely to be repeated, but you never know.
The Polar Plunge involves diving into the icy seawater wearing nothing but socks and a harness to pull you out if you get into difficulties. Such pursuits don't go down well on a full stomach. The psychology of the perfect Christmas dinner * 25 December 2004 * Graham Lawton IT'S a rite of passage that's been lurking on the horizon since you left home. But no amount of forward planning can prepare you for this. Christmas dinner is at your place - and you're doing the cooking. You know the score: you're going to have to conjure up at least three courses, along with wine and cheese. A choice of desserts is mandatory. You'll run out of pans, and will have to wash up every 15 minutes. And worst of all, the world expert - your mum - will be there, casting a critical eye over your efforts. So how can you make sure dinner is divine? It's easier than you might think. Good food, it turns out, is all in the mind. The way people perceive your dinner has less to do with what's on their plate than what's in their heads. At least that is what Brian Wansink, a food psychologist at the University of Illinois in Urbana-Champaign, has found. Among other topics, Wansink studies how the right environmental cues can make ordinary food seem fantastic. He has spent years feeding people cheap, mass-produced, bog-standard or downright horrible food and then bamboozling them into believing they like it. "Taste is tremendously subjective," he says. "People are not too smart to be fooled." Most of Wansink's work has focused on places such as canteens and restaurants, but you can turn a lot of what he has learned to your advantage. The basic idea is to harness what psychologists call the "halo effect". Put simply, this means that if you make people feel good about just a few aspects of an experience, everything else about it will seem better. Restaurant food benefits hugely from the halo effect, Wansink says. It is no accident that restaurants pay lots of attention to their decor and lighting, the appearance of the waiting staff, the look of their menus and even the names and descriptions of their dishes. All conspire to create the belief that the chef will be taking similar pains over your meal. And if the food is half-decent, it works. Wansink has shown as much by experimenting on the staff and students who frequent his university cafeteria. In one experiment he fed people pieces of a chocolate brownie that had seen better days, then asked them how much they would be prepared to pay for it. People who got the brownie on a paper plate with a cheap napkin were not impressed, saying on average that they would only stump up 57 cents for it. But when they got the same shrivelled piece of cake dusted with icing sugar on a glass plate, they were lavish in their praise, offering to stump up $1.12. "That's almost twice as much," Wansink enthuses. The lesson here is simple: use your best crockery, buy some nice napkins, and camouflage your accidents with garnish. But it's not just about presentation. The way you describe your food dramatically alters how people rate it. In another experiment, Wansink served people cheap canned meatballs. Not surprisingly they weren't a big hit. But when he described them as "spicy meatballs", people liked them much more, saying they tasted "deeply spicy" - even though they were about the blandest thing that he could find. And when he fobbed the same ghastly stuff off under the label of "local family gourmet recipe", people suddenly loved it. 
"Labels really help," says Wansink, but not any old labels. To pull this trick off you have to tap into one of three basic food-related emotions: sensory (descriptive words such as tender or creamy), nostalgic (traditional, home-baked) or geographic (Italian, French). Of course, you can't label dishes at the Christmas dinner table, and if you announce your dessert as "traditional creamy Cajun trifle", people are going to look at you in a funny way. But there are variations you can use. Take every opportunity to talk up your ingredients. Casually mention that you got the parsnips from an organic farmers' market, even if you didn't. And if someone remarks on the stuffing, mention with a chuckle that they should make sure they get some more before it all goes: you won't be hand-peeling chestnuts again next year. Talking turkey Even if you find it hard to tell lies about vegetables, there is one element of the meal you must talk up. While most people can tell the difference between spoiled meat and fresh meat, they struggle to distinguish between OK meat and really good-quality meat. "So they look for environmental cues to reinterpret the sensory experience," Wansink says. "There are lots of ways to make people think 'Oh yeah, this is the really good stuff'." The lesson? Pretend you know an organic turkey farmer. Props can create a halo effect too. Leave a bunch of flat-leaved parsley or chervil in a prominent position, even if you didn't use any. When guests wander into the kitchen, fiddle about with complicated pieces of kitchen equipment and obscure-looking utensils. At this point you may be getting worried. Surely you're inviting disaster and ridicule by drawing so much attention to the food? Not a bit of it. Wansink has shown that making people pay close attention to the food they're eating makes it taste better. He once gave his long-suffering colleagues a cheap, mass-produced vegetable juice and asked them to rate its flavour. When they were simply told it was a "new type of juice" they said it was OK. But if he told them it was a carefully blended mixture of vegetables and asked them to identify as many as they could, they rated it much more highly. The wine can also help: whatever you serve, make sure it at least sounds good. Wansink once did an experiment where he served a French meal with complementary wine - a dire, $2-a-bottle plonk. When people knew it was rubbish they rated the whole meal quite poorly. But when he stripped the labels off the bottles and replaced them with something more classy, they rated the food much more highly - and ate more. By now you should be well on your way to becoming a legend in your own kitchen. But there is one more way to score points. As part of his studies, Wansink has also investigated what earns people a reputation as a good cook. He found that a great reputation comes in three distinct flavours. The first two are bad news for the unskilled: Wansink calls them "inventive" and "new recipe" cooks, and to qualify as either you need to be good at cooking. Both categories are knowledgeable, skilled and keen; the kind of people who grow their own herbs and think peeling shallots is fun. The only real difference between them is that "new recipe" cooks use books, whereas "inventive" cooks make it up as they go along. This is not the sort of thing you can fake. But there is a third way to gain a culinary reputation, and it has nothing to do with the food. 
"Social occasion" cooks stick to simple, well-known recipes - think casseroles - and use the time they saved on shallot-peeling to make their guests feel at home. So if you're doing Christmas dinner this year, banish all thoughts of cooking something unusual. Don't even entertain the notion of stir-frying the sprouts in ginger and garlic: put them on to boil, and open the wine. Then all you have to do is mingle. Just one more thing. These tricks are tried, tested and trustworthy, yet be warned. By all means put them to work, but glory can come at a price. Don't blame us if your mum books herself in for next year too. The magic number * 20 December 2003 * Meredith F. Small IT'S one of the great Christmas traditions. If you celebrate the holiday at all, by now you will probably have dispatched greetings to family, friends and colleagues. That group is unique to you, but in one remarkable respect our Christmas card lists are very similar: their length. It turns out that most of us send cards to around 150 people. What's more, this magic number of intimacy seems to be an evolutionary legacy reflecting the natural size of our species' social groups. Even in an era of sophisticated global communication, it seems, we are only able to keep up with the same number of people as our hunter-gatherer ancestors. It's not easy to probe people's card-sending habits. The holiday season is busy and stressful enough without having to answer detailed questions about each person on your list. But when UK-based anthropologists Russell Hill from the University of Durham and Robin Dunbar from the University of Liverpool roped in their own friends, they were pleasantly surprised by the response. "It was amazing how many people already kept records of who they sent cards to and those they received," says Hill. "A section of the population, it turns out, is actively monitoring its social networks." The anthropologists managed to recruit 43 card senders who sent a total of 2984 cards - about 68 per person. If you take the entire household into account, each sender contacted a network of 154 friends, family and acquaintances, on average, or 125 if you only count the people named on the envelopes. The number 150 was eerily familiar to Hill and Dunbar. In the early 1990s, Dunbar carried out a study of non-human primates, comparing the size of the neocortex - the part of the brain associated with higher functions such as intelligence - with the typical group size for each species. All primates, including humans, are social creatures who depend on each other for survival, so you would expect some kind of correlation between brain size and the ability to deal with complex social networks. Dunbar did, in fact, discover a linear relationship, with group size increasing as a function of brain size. And a similar pattern has since been found among social carnivores and cetaceans. Using his non-human primate scale, Dunbar then predicted that humans should have social networks of about 150 people. Support for this number comes from tribal societies and from history. Modern hunters and gatherers typically live in clans of about 150 members (although they can range from 90 to 200); Neolithic villages in the Middle East also contained between 150 and 200 people; and according to the General Synod of the Church of England, the ideal size for a congregation is 154. The Christmas card list, Hill and Dunbar suggest, is simply the modern Western manifestation of an ancient ability to maintain only so many social contacts. 
But why is 150 the magic number for our species? Ecological and economic factors play their part. The size of hunter-gatherer groups, for example, is constrained by the amount of territory people can cover and its productivity in terms of food. Smallholders, likewise, are only able to feed a certain number of people from their land. But from around 12,000 years ago, with the beginnings of agriculture, human societies have had the economic potential to expand into vast cities. We can cope with such large groups because we are able to recognise the faces of the many thousands of people we encounter in a lifetime. But, argue Dunbar and Hill, we do not have a social relationship with all these people. Instead, we tend to build subgroups of around 150. Dunbar suspects this must have some adaptive value, pointing out that networks seem to fall apart if they get any bigger. "Hutterite communities in the United States split in two when they reach 150 because they claim they can't control the group by peer pressure alone when it goes beyond that number," he says. "There is also a folk rule in business organisation which says you need a management structure once your organisation gets above 200 because rivalries build up." The other factor limiting the size of our social networks is time. Baboons and chimps spend around 20 per cent of each day on one-to-one grooming to consolidate their bonds, and Dunbar has argued that gossip is the human equivalent. The big difference is that we can effectively communicate with up to three people at once, which means we can form bigger social networks than other primates. Beyond this limit, however, conversational groups tend to break up. And modern technology is no help. It allows us to throw our net wider in geographical terms, but does not increase the number of people in it. "Email and snail mail let you maintain contact with distant network members," says Dunbar. "But this kind of communication doesn't allow you to create a deep relationship." Because the quality of our relationships matters, we still only have the cognitive ability to maintain the same number as our ancestors. Previous studies have shown that most people can define precisely a group of about three to seven very close friends to whom they turn in times of distress, a group of about 15 who provide sympathy and support, and a wider circle of friends of about 35 who represent their personal community. Hill and Dunbar confirmed this when they asked card senders to rate their emotional attachment to each person on their list on a scale of 0 to 10. The card senders each identified a close circle of seven friends they saw regularly, a larger group of about 21 that they saw less often, and a group of 35 people to whom they were not so emotionally attached, but were still closer than the full list. The researchers also found that senders are willing to invest extra time in long-distance relationships that they value, often including a letter or long message with cards sent to people who lived far away and whom they only contacted once a year. Our Christmas card lists are maps of who we know and who we care about. But Christmas is not just a time to write cards; it is also a time to reflect on what all these relationships mean. Friendships fluctuate over time: an acquaintance may become a friend, or a member of our inner circle might drop out.
Although we can only handle the same number of social connections as our ancestors, globalisation means we have a lot more choice about who to include in our network. That's why each year we need to make that list, and check it twice, before we lick the stamps. Lethal Xmas * 19 December 1998 * Stephanie Pain CHRISTMAS comes but once a year. And that's probably just as well - not just for those of us with Scroogish tendencies, but for doctors, fire-fighters and loss adjusters. The season of joy is also the season of strange accidents and unusual injuries. Never mind the perils of poorly cooked poultry and high-velocity corks - which are year-round hazards these days. Almost everything that makes Christmas festive lands hundreds of people in hospital each year. Christmas trees are a menace - real or artificial, with their fairy lights and candles, paper chains, tinsel and baubles. Boughs of holly, ivy wreaths and sprigs of mistletoe: poisonous every one. Artificial snow? It irritates the lungs. An open fire to roast the chestnuts on? Forget it if you want to stay safe. Crackers? Only if you're wearing safety goggles: the "bangs" can burn and airborne novelties inflict serious eye injuries. Even the pudding can put you in the emergency ward. Silver coins concealed among the currants can choke a child, while adults face the danger of "Christmas pudding flashback". This has nothing to do with memories of puddings past, but a lot to do with Christmas spirits. If your pudding isn't perfect without some flaming brandy, remember to stop pouring before you strike the match. "In terms of accidents, home is the most dangerous place of all," says Roger Vincent, a spokesman for Britain's Royal Society for the Prevention of Accidents. "We tend to see home as a safe haven. We relax and forget about safety. And at Christmas we relax even more." RoSPA reckons that over the 12 days of Christmas around 80,000 people in Britain end up in hospital - and 130 will die. Some injuries, particularly falls, burns and scalds, are inevitable when crowds of people make merry in unfamiliar rooms, armed with hot drinks or glasses of sherry, surrounded by piles of slippery wrapping paper and stray toys. The kitchen is especially dangerous, with dishes of hot fat and sharp knives in the hands of a tipsy and distracted cook. Other accidents are linked to the more frivolous features of Christmas. For instance, in a report published in 1990, Britain's Consumer Safety Unit estimated that nationwide there were 2000 injuries caused by Christmas trees, fairy lights and decorations. Decorations were to blame for just over half the injuries. And almost half of these were the result of falls as people balanced precariously on tables, chairs and other unstable pieces of furniture to hang trimmings or fix the fairy to the top of the tree. The remainder were mostly internal injuries to children who had chewed on glass baubles or other tempting decorations. Around 34 per cent of accidents were linked to trees, and ranged from a poke in the eye with a stout twig to cuts and bruises from inexpert attempts to lop off branches. Only to be expected perhaps. But there are some accidents you can't foresee. "In one case the victim was a 28-year-old male who was walking past a Christmas tree when an insect flew in his ear," says the report. The final 12 per cent of injuries were attributed to fairy lights - burns, electric shocks and internal injuries to toddlers who chewed the bulbs. "Lights are safer than they used to be," says Vincent.
But Christmas is a time for tradition and if you only use them once a year, those ancient lights should last for ever, shouldn't they? Not when they are crumpled into a heap in the attic for the rest of the year, the wires bent and knotted. Damaged wires can electrocute. Some older lights have large bulbs which become hot enough to burn hands or ignite tinder-dry pine needles. And, while fire brigades say fake trees are safer, that doesn't include the metallic sort. Faulty wiring on the fairy lights can electrify the whole tree. And that's not all. According to the US Consumer Product Safety Commission in Washington DC, trees are responsible for about 500 fires in the US each Christmas, causing some $20 million worth of damage to homes. The risk of a fire in the home is 14 per cent higher over the Christmas holiday in the US and these fires are 30 per cent more likely to result in death, says the US Fire Administration. Christmas trees are full of inflammable resin and covered with kindling - in the form of fine needles that quickly dry out in a heated house. The fresher the tree, the safer it is. "Bounce its butt before you buy," advises the US Consumer Safety Commission: a newly cut tree won't leave a pile of needles on the ground. Assuming the tree lasts until Twelfth Night, what then? Enough people have attempted to get rid of their tree by feeding it into the fire to prompt a deluge of warnings each year from fire departments. One American family lost everything but received no sympathy from their insurers. "They decided to push the pointy end in first," says a friend of the family. "They thought they could push the trunk in and burn it bit by bit." When the house was razed to the ground the insurance company was reluctant to pay up. "Tantamount to arson", was the loss adjuster's verdict. In Britain, too, the number of house fires soars over the Christmas holiday. There were 213 fires a day in December 1996, 8 per cent more than the daily average during the winter months. The Home Office, which is responsible for Britain's fire service, puts the blame squarely on Christmas, pointing the finger at faulty fairy lights and trees placed too close to hot coals or candles. Trees are not the only fire risk. "There's been an increase in candle fires as decorative candles have become popular," says Vincent. Piles of wrapping paper will ignite in a flash. And of course, there's that real fire. For most of winter, central heating is fine, but come Christmas and suddenly there's nothing to compare with a warm hearth and a glow in the grate. Put the cards on the mantelpiece and hang a stocking or two for Santa, and the risks of a fire multiply. And if you have a fire just once a year, the chances are that the chimney hasn't been swept and no one has checked the flue. The fire needn't set the house alight to put you at risk. If your flue is faulty, there is a danger of carbon monoxide poisoning. Around 50 people in England and Wales die each year from carbon monoxide poisoning. Many more people suffer the effects of poisoning but the symptoms are often mistaken for flu or food poisoning. A few Christmases ago, a London family decided to light their fire as a special Christmas treat. They were found gathered around it - unconscious. Tall buildings around the house prevented the escape of exhaust gases, which flooded back down the chimney into the room. Fortunately, this particular family was discovered in time. Carbon monoxide is a stealthy killer, but it's not the only poison to worry about.
Badly thawed and undercooked turkeys lay hundreds low with food poisoning each Christmas. Last Christmas, the Medical Toxicology Unit in London received around 100 calls about children who had eaten holly berries, 30 inquiries about mistletoe, 80 about the Christmas cherry, 10 about poinsettias and one about the Christmas rose. Mistletoe berries contain a mix of toxic proteins and alkaloids which irritate the stomach and slow the heart and, in bad cases, can induce hallucinations. Holly berries might be popular with birds but they are not good for humans, although they were recently downgraded as a poison. "Holly is not as toxic as we thought," says Virginia Murray, deputy medical director of the Medical Toxicology Unit. "But I wouldn't recommend you make jam with it." Ivy's not too good for children either, but it tastes so bad that few children eat the berries. There seems to be some confusion about how poisonous poinsettia is, but the leaves contain unpleasant irritants. "If you chew them you'll get a nasty mouth rash and an upset stomach," says Rose Ann Soloway of the American Association of Poison Control Centers in Washington DC. The Christmas rose, Helleborus niger, causes such severe diarrhoea that the ancient Greeks used it as a chemical weapon in the sixth century BC. Another popular pot plant at this time of year is the orange-berried Christmas cherry, Solanum pseudocapsicum. Like its close relatives, the nightshades and the potato, this plant contains alkaloids that cause sickness and stomach pains. The most common poison of all is alcohol. While most adults suffer nothing worse than the whirling pits, a mouth like sandpaper and a bad headache, children are at much greater risk. Every year, a small number of children suffer alcohol poisoning, usually after draining the dregs left by adults. "People have a party and it's not surprising they don't clear up at 2 am," says Vincent. "But then the children get up early, go downstairs and drink the leftovers." Children suffer worse effects than adults, and not just because they are smaller. "Weight for weight, children are much more susceptible," says Murray. And if you think there can't be anything else to worry about, don't forget the presents. Leaving perfumes and chocolates under the tree is asking for trouble if you have children or pets (chocolate can kill a dog). And toys can kill. According to the US Consumer Product Safety Commission, in 1996 toy-related injuries put 140,000 children in hospital in the US; 13 died from their injuries. In most cases, accidents happen when small children get hold of toys intended for older ones - not just the chemistry sets, says Soloway, but toys with small parts that can choke a toddler. Lead-painted toys are rare these days, but doctors still see a few cases of poisoning. Button batteries - the sort that power many toys - pose a different threat. "A swallowed battery," says Soloway, "is a real medical emergency if it lodges in the oesophagus." The charge around the battery burns a hole in the soft tissue. If the battery reaches the stomach, it is probably safe - as long as it doesn't break up and shed its load of mercury. So how does anyone survive Christmas? There is plenty of advice around on how to stay safe. Buy an artificial tree and invest in a new set of lights. Wrap the mistletoe in safety netting - or better still, stick to plastic sprigs. Empty the egg nog immediately the last guest has gone.
Or perhaps you'd be better off on an Australian beach, where Christmas Day sees turkey-stuffed crowds soaking up the sun. No worries there. Except, that is, for the big jump in the number of drownings, barbecue burns and shark attacks. Happy Christmas. Get stuffed * 22 December 2001 * Kate Douglas POURING tomato soup through someone's nose is sometimes the best way to discover why Christmas is a weight-watcher's worst nightmare. OK, so it's a far cry from turkey with all the trimmings, but this is still the cutting edge of "hedonics" - the study of gastronomic pleasure. Sit back, relax, and loosen your belt another notch if it makes you feel more comfortable, because what you're about to read may leave you feeling rather queasy. There's a lot we don't understand about the problems of overeating, but Martin Yeomans of Sussex University and Steve French of Sheffield University have set themselves the task of probing the mystery. Yeomans and French are the "good cop, bad cop" of appetite interrogation - the one specialising in the pleasures of eating, the other an expert on fullness and the pain of overindulgence. Between them, they've just about got it covered. Yeomans can explain that irresistible urge some of us have to trough an entire litre tub of Ben & Jerry's Caramel Chew Chew even though we've just eaten a full meal. And French knows why feeling full to bursting point prevents most people - Mr Creosote excepted - reaching for that last, explosive, wafer-thin mint. "The novel aspect of our work is trying to bring together two parallel strands and see how they interact," says Yeomans. Hedonism research is no box of chocolates. In fact, for Yeomans and French's volunteers it is mostly tomatoes. For starters they get tomato soup - through the nose if the researchers want to measure the satiating effects of various nutrients independently of any eating pleasure they might provide. Soup is followed by pasta with - you've guessed it - tomato and onion sauce. The "tasty" version of this dish contains herbs and seasonings and the "bland" variety comes au naturel. It's a lonely business pushing out the frontiers of appetite research. Each volunteer is tested separately, with only a computer as a lunchtime companion, secretly assessing how much of the palatable or bland food he has eaten and interrupting him every few forkfuls to ask how he is feeling vis à vis the old appetite. Christmas dinner it is not, but the results are incontrovertible: tasty food initially boosts hunger more than bland food, it makes you feel full more slowly, and leaves you eating more overall. But if there's one thing this duo has learned in almost a decade of nasal drips, intragastric infusions and tasteless pasta, it's that there's no simple explanation for gluttony. Their distasteful experiments have convinced them that how much we eat depends on the outcome of a battle between the opposing forces of pleasure and satiation. And if some of you are already reaching for another mince pie, muttering that with all the pleasure on offer at this time of year you can't be held responsible for your actions, then tuck in, because you're probably right. French, Yeomans and an assortment of other appetite researchers have evidence that confirms the worst fears of every calorie counter - Christmas really is out to get you. Everything about the festive season makes you overindulge. Just take a look at the "big meal" itself. The simple act of sitting down with those you know and love has already sealed your fate.
John de Castro from Georgia State University in Atlanta has found that in the company of friends, you can expect to put away 44 per cent more nosh. Your consumption increases in proportion to the number of people at the table and, worse still, the more you're enjoying yourself, the more courses you'll eat. You will probably start with an aperitif. It may look harmless, but Yeomans's experiments show that a single tipple before you tuck in sets things off on the wrong footing. When he plied some of his gastronomic guinea pigs with a libation disguised as apple-flavoured pop shortly before lunch, they ended up eating more than other volunteers who had consumed real soft drinks, even though they had no idea they had been drinking alcohol. And "bah! humbug!" to diet coke drinkers who think they can fill up on carbon dioxide and aspartame, because several studies show that only drinks containing large amounts of real sugar will knock a hole in your appetite. You might want a snack with that drink. Go on, treat yourself to a sliver of salami, or perhaps a nice creamy dip with some of those hand-fried potato chips on the side. Breadsticks and other carbohydrate-laden, low-fat snacks can take a back seat because, well, it's Christmas. But be warned: you're bound to be nobbled by those high-fat nibbles, because you are destined to eat more of them - even supposing they tasted no better than the carb-rich, low-fat alternatives. The explanation, according to French, is that despite containing fewer calories, high-carbohydrate snacks fill you up more than fatty ones. So don't trust your body to regulate your calorie intake: go easy on the cheesy footballs. Sooner or later, Christmas dinner proper will make it onto the table. And that's where your problems really begin. For an hors d'oeuvre you are bound to be faced with a rich little morsel - smoked salmon, pâté perhaps, or something with a more highfalutin French name, looking beautiful and dripping with cholesterol. You would be wise to think of this as the place where art meets arteriosclerosis. The look, the smell and that initial burst of flavour all conspire to produce the hedonic qualities that are Yeomans's speciality. A few mouthfuls of this pleasure-on-a-plate and, his studies show, you actually become hungrier than you were before you began eating. It's all going horribly wrong. And it gets worse. Whereas appetite researchers had traditionally thought that we compensate for a calorific starter by eating less later in the meal, that's simply not true, according to Yeomans, French and co. They concocted some high- and low-calorie soups, and fed them to volunteers who didn't know which was which. The volunteers were then allowed to eat as much of a given main course as they wanted. The ones who had started with the high-calorie soup ended up consuming around 1000 kilojoules more overall than people given the meagre starter. And it didn't make any difference whether the starter calories came in the form of fats or carbohydrate. Some appetite researchers have reported that fatty foods cause people to eat less, but Yeomans suspects this effect is psychological rather than physiological - if you know that a particular starter is destined to become an insulating lining for your arteries you may consciously rein back during the main course. Anyway, enough of the starters. Let's move on to the main attraction, the grand spread. The festive table is groaning with goodies, myriad assorted dishes making a full-scale assault on your senses.
The needle on the "hedonometer" is off the end of the scale and you're in big trouble. Whether you are a traditionalist content with turkey and all the trimmings, a gourmand who likes to experiment with swans' tongues, wombats' pouches and pigs' ears, or a vegetarian happy to graze on spicy nut loaf and assorted pulses, the key word here is "variety". That's because variety is the source of strife. Barbara Rolls from Pennsylvania State University has shown that people given four different types of food guzzled 60 per cent more at a sitting than those fed on a single item, even if this single item was their favourite. Astonishingly, if you are given a mixture of three pasta shapes you'll down 15 per cent more calories than if there was just one shape on offer. And when variety combines with deliciousness, the bathroom scales had better prepare themselves for a battering. Perhaps you don't need Yeomans and French to tell you this, but you will find your Yuletide spread more tempting than an unlimited supply of gruel. Their experiments have confirmed the simple truth that the yummier the food, the more we eat. Based on their premise that appetite is a balance between pleasure and pain - between the hedonic value of the food we are eating and the uncomfortable feelings associated with being stuffed to the gunwales - it doesn't take a rocket scientist to figure out that you are going to need a considerable amount of pain to outweigh the pleasures of this particular meal. And that's even before you have added liquid refreshment to the equation. There's still debate about whether or not alcohol makes you fat (New Scientist, 27 November 1999, p 50), but there is no getting away from two facts: first, pure alcohol has 29 kilojoules per gram (that's only about 8 kilojoules less than pure fat) and second - and, perhaps, more dangerously - alcohol lowers your inhibitions. "The jury is still out on what alcohol is doing at a metabolic and physiological level," says French. But whether or not it makes you feel hungrier, it is bound to summon that little inebriated voice in your head telling you to throw caution to the wind. What the hell, the diet can wait until tomorrow. In the meantime, there's dessert to think about. Will it be plum pudding with lashings of brandy butter, passion fruit pavlova or the crème brûlée? All three options are high in sugar and fat, which, research indicates, take longer than protein to induce that fit-to-burst feeling. French thinks he knows why. "Once nutrients are absorbed, the next place they have a big influence is in the liver," he says. Here they are oxidised in a reaction that somehow creates feelings of satiation. "There seems to be a hierarchy of metabolism," says French. Proteins are broken down first, then carbohydrate and lastly fat. Which could explain why protein fills you up faster, and fat fills you up least quickly. Better try a little of each dessert then. And while we're at it, pass the cheese board. You can surely find space for a few hundred blue-veined calories - Stilton does pack them in so efficiently, after all. At this stage you may find yourself flagging. But the Christmas dinner has another trick up its sleeve. Coffee will shortly arrive to perk you up. We know caffeine stimulates your central nervous system and dilates blood vessels, but some research suggests it also increases gastric secretions. Result: a surge in appetite in time for the chocolates. Bloated now, you may already be vowing never to eat again.
Give it a couple of hours, though, and you'll be tucking into a mince pie or some fruit cake. If you keep this up, your stomach will gradually expand, allowing you to shovel ever greater quantities of food into it before stretch receptors in its walls scream out to the brain for mercy. But the pain is worth it: once stretched, your stomach will have extra capacity for tomorrow's Boxing Day feast. If you really can't stomach the thought of a whole week of gorging yourself silly, the best way to exercise self-control is probably to lock yourself away in a cupboard with a half ration of Ready Brek. Alternatively, there's a more scientific solution. High-fat food fed directly into the duodenum fills you up more than the same food eaten the conventional way. French and Yeomans' work even suggests that a blast of fat to the stomach may also do the trick. The downside is that you'll need to be fitted with a plastic tube running through your nose, down your throat and into your digestive tract. The prospect of sitting around the dinner table at Christmas ingesting cheese soufflé, duck à l'orange and tiramisu via a nasal drip is probably too gruesome for all but the most dedicated weight-watcher to contemplate. Maybe it's best just to accept your fate. Eat, drink and be merry, for tomorrow we diet. Discovering the nature of party people * 24 December 2005 SO MANY parties, so little time - so how to choose which one to grace with your presence? Time was when there was no option but to line up the invitation cards and pick the most promising. But that's all in the past, thanks to mobile phones. Now you can find out what all those parties are like by calling friends on the spot and comparing notes. After a few phone calls, the jury is in and it's time to head for the Best Party in Town. It sounds like the perfect solution to the party problem. But according to two German physicists it is a recipe for disaster. In a paper about to appear in the International Journal of Modern Physics, Steffen Trimper and Marian Brandau of the Martin Luther University in Halle, Germany, show that mobile phones can ruin everybody's evening. Just ask Tobi. His little party in Berlin was going just fine until some early arrivals rang to tell their friends what a fab time they were having. The result was like a "nuclear chain reaction", Tobi later told the German magazine Der Spiegel. Hordes of people from all over Berlin responded to the calls, and called their friends as well, before heading straight for Tobi's flat. In the end, what had been a quiet gathering became a full-on frenzy of 120 people. "It was barely possible to control it," Tobi said. "The police came round three times." It was Tobi's story that first got Trimper thinking about the way parties can go wrong. "After reading the article, I spoke to my sister, and found that her daughters had seen this phenomenon," Trimper recalls. "And so had my PhD student." It seemed there was some genuine phenomenon at work. And it was an intriguing one. To Trimper, who spends his time pondering the actions of molecules in solids, the descriptions of how parties suddenly take off sounded like a "phase change", such as when ice turns to water. Brandau, meanwhile, saw hints of "social network" effects which arise from the way apparent strangers often have unexpected friend-of-a-friend links. Were these parallels a coincidence, or could the actions of party-goers be captured by some bizarre combination of solid state physics and network theory?
Trimper and Brandau decided to investigate. They began by constructing a mathematical model whose origins lie in a now famous experiment conducted almost 40 years ago by the American sociologist Stanley Milgram. He sent letters to around 300 randomly chosen people in the US, explaining that the letters were to be forwarded on to a target person in Massachusetts. Which sounds simple enough, except that the recipients weren't told the address of the target: only his name, occupation and a few other personal details. They were asked to send the letter to any acquaintance they thought had a better chance of knowing the target. Milgram made a note of how many re-postings were needed before the letters reached their target. Unsurprisingly, most of the letters never arrived. But 20 per cent did - and, astonishingly, usually after passing between just five or six intermediaries. Milgram's discovery confirmed the suspicions of countless party-goers who find they have friends in common: it really is a "small world". Network theorists have discovered that it only takes a sprinkling of random long-range links to short-circuit an otherwise sprawling network and turn it into a small world where everyone is connected to everyone else via just a few intermediaries. But of course that doesn't explain how people home in on this handful of long-range links so effectively. After all, we may know some of our friends' friends, and even some friends of theirs, but the resulting circle of acquaintances is hardly vast. This part of the mystery was resolved in 2002 by a team led by sociologist Duncan Watts at Columbia University in New York. The team pointed out something the mathematicians had overlooked (some might say all too predictably): people aren't just points on a network. We all have a host of different facets to our character, from nationality and gender to occupations and interests. As such, the notion of the "length" of social links is far richer than mere geographical distance: people can be separated by continents, yet still have much in common. Faced with choosing the next recipient of a letter in Milgram-style experiments, people tend to pick friends with two or three traits in common with the target. And Watts and his colleagues showed that this gives far more scope for hitting those crucial random long-range links that turn big networks into small worlds. Picky people But they also found that another factor plays a key role: clannishness. If we were all very picky about the company we keep - or highly "homophilic" in the argot of social network theory - the world would be made up of isolated cliques with nothing to say to one another. If, on the other hand, we found one another endlessly fascinating regardless of background, the world would be one big babel. The real world is clearly somewhere in between, and Watts and his colleagues showed the small-world effect can survive a certain amount of clannishness. For parties, however, Trimper and Brandau have discovered that the same effect can spell disaster. In trying to understand the runaway party effect, Trimper and Brandau incorporated the theory developed by Watts and his colleagues into a computer simulation. They created a virtual community of 1000 party-goers, whose overall level of homophilia the researchers could vary from risibly snooty to wantonly promiscuous. They set the homophilia level, gave all the party-goers virtual mobile phones and packed them off to 10 virtual parties.
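As an aside, it is easy to get a feel for this kind of agent-based model with a few lines of code. The sketch below is my own illustration, not Trimper and Brandau's actual program: the five binary traits, the "stay put if at least three like-minded guests are present" threshold and all other parameters are invented, and dissatisfied guests simply head for whichever party holds most of their like-minded friends - essentially the hopping rule described next.

import random

N_GUESTS, N_PARTIES, N_TRAITS, MIN_FRIENDS = 300, 10, 5, 3

def simulate(homophilia, rounds=15):
    random.seed(42)
    guests = [[random.randint(0, 1) for _ in range(N_TRAITS)]
              for _ in range(N_GUESTS)]
    # Two guests count as "like-minded" if they share at least
    # `homophilia` of the five binary traits.
    likes = [[j for j in range(N_GUESTS) if i != j and
              sum(a == b for a, b in zip(guests[i], guests[j])) >= homophilia]
             for i in range(N_GUESTS)]
    party = [random.randrange(N_PARTIES) for _ in range(N_GUESTS)]
    for _ in range(rounds):
        for i in range(N_GUESTS):
            crowd = [0] * N_PARTIES   # like-minded head-count per party
            for j in likes[i]:
                crowd[party[j]] += 1
            # Phone around; move only if under-served where you are.
            if crowd[party[i]] < MIN_FRIENDS:
                party[i] = max(range(N_PARTIES), key=crowd.__getitem__)
    return sorted((party.count(p) for p in range(N_PARTIES)), reverse=True)

for h in range(N_TRAITS + 1):
    print("homophilia", h, "->", simulate(h))

Run with these toy numbers, the ten parties stay roughly evenly attended until the pickiness level gets high; past that point the guests abruptly reshuffle into clique-sorted gatherings and some hosts end up with nobody. The full runaway to a single mega-party in Trimper and Brandau's model needs more ingredients, but even this crude version changes regime suddenly at a critical homophilia.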
Once at their party, each would report back to their friends elsewhere. "We kept things very simple," says Trimper. "The idea was to see how many friends they knew at the party, and if they heard of a party with more of their friends, they would leave and go to that one." With only one variable to control - the level of homophilia - the model was certainly simple. Yet Trimper and Brandau found that once the homophilia reached a critical level, the guests would suddenly start reaching for their virtual coats. They became so picky about who they spent time with that they quit parties with too few like-minded people and ended up at one party with all their snooty friends. When that happened, all but one of the virtual hosts was left weeping over their canapés. The real surprise is that all this happens at a critical level of pickiness. "The effect was very sudden, like a phase change," says Trimper. "We had no expectation that so simple a model would give rise to something like this." Even so, the pair are convinced the model's behaviour reflects a genuine effect, and one with implications far beyond the party scene. For example, it may cast light on how new ideas come to dominate the thinking of particular groups, such as political factions. It also suggests ways of getting new products to dominate the market by exploiting word-of-mouth recommendations. The key lies in the homophilia level, which is somehow related to how many short and long links there are in people's social networks. The exact nature of that relationship is still unclear; Trimper is working overtime to get to grips with all the implications of the findings. But, he points out, one implication is already clear. "When your party guests arrive," Trimper says, "make sure you take their mobile phones as well as their coats." The nightmare before Christmas * 23 December 2000 * Nicola Jones YOU'RE NOT exactly sure how it happened, but somehow you've managed to plough your way through three helpings of turkey, two helpings of sprouts, which you don't even like, a whole plate of Aunt Edna's festive cheese balls and the chimney off the gingerbread house. Like everyone else, you're slumped in your chair with a self-satisfied grin on your face, glowing happily, drinking your last egg-nog before bed. It's a lovely scene, but there might be something nastier ahead of you than a few extra inches on your waistline. We've all heard the story a thousand times - eating too much food just before bed can give you bad dreams. Especially if it's spicy, or fermented, and definitely if it's cheese that smells like your sock drawer. I've noticed it. My mom's warned me about it. It's established fact, isn't it? Well, not exactly. There is something called "The Pickled Walnut Theory", says Tore Nielsen, a sleep researcher from the Dream and Nightmare Laboratory at the Sacred Heart Hospital in Montreal, Canada. The theory simply says you can get nightmares from what you eat (apparently especially if it's a pickled walnut). But that's about it, says Nielsen, no explanation provided. "A lot of dream experts pooh-pooh this as a kind of myth," he says. "But my opinion is that it's very likely." And there is some evidence to back Nielsen up. Sort of. That evidence mainly revolves around a number of neurotransmitters in the brain that control the amount of time you spend in rapid eye movement (REM) sleep, one of the most rejuvenating bits of our night and also the phase when you're most likely to dream.
Eating foods containing particular chemicals can bump the levels of neurotransmitters up and down, playing havoc with dreams. For example, milk, considered by most parents and researchers to be a soporific, contains tryptophan, an amino acid that is also used by doctors to relieve nightmares. Tryptophan increases the brain's levels of serotonin, a neurotransmitter that can cut down on your REM sleep. Ironically, that means warm milk probably diminishes the amount of restful sleep you get at night, but it also simmers down your dreams - bad or good. (What that might mean for Santa, with the billions of milk-and-mince-pie donations, is anybody's guess.) So it's probably not tryptophan in cheese that's responsible for its nightmarish reputation. But it might be the tyramine - a chemical that bumps up noradrenalin levels in the brain. High noradrenalin, like serotonin, tends to be associated with less REM sleep, and so less dreaming. But noradrenalin also makes blood vessels constrict and blood pressure rise, and that could make the dreams you do have racy, even nightmarish, suggests Nielsen. The more aged the cheese, or the more rancid, the more tyramine it'll have. Ditto for overripe mandarin oranges. And if the cranberries in the sauce have gone off, then hold on to your reindeer, it could well be a bumpy night. If food can be linked to bad dreams, then perhaps that even explains the nightmarish visitations experienced by Ebenezer Scrooge on Christmas Eve. He tucks into a late-night nosh and then sits down to a pan of gruel when his long-dead business partner Jacob Marley, looking awfully ghost-like and making frightening chain-scraping noises, appears to waft through the door. Scrooge, who has perhaps read up on the effects of tyramine, proclaims wisely, "You may be an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of an underdone potato. There's more of gravy than of grave about you, whatever you are!" The trouble is that not everyone is convinced by the possible links between food and bad dreams. "There's no evidence that spicy food causes nightmares," says Ernest Hartmann, a psychiatrist at the sleep clinic at Newton-Wellesley Hospital in Newton, Massachusetts, and arguably the world expert on nightmares. The whole thing about cheese, as far as he's concerned, is a myth. But maybe the explanation isn't so chemically complicated. Maybe it's just that cheese, like most dairy products, is notoriously difficult to digest, suggests Rafael Cabeza, a neuropharmacologist from the University of Texas at El Paso. Get a tummy full of it, and you may spend the night tossing and turning. The more often you wake up, the more likely it is that your dreams are interrupted and so you remember them. If they are bad dreams, that can leave you with the impression of a night plagued by nightmares. Apart from pungent cheese, a bellyful of Christmas cheer could also trigger a nightmare or two. Take mulled wine. Aside from the double dose of tyramine you'll get from the alcohol and the fermenting oranges, there's more trouble brewing. Go to bed the worse for wear, and you'll initially dream less because alcohol suppresses REM sleep. Once that effect wears off, usually around the early hours of the morning, your brain "rebounds" and crams as much REM sleep into as short a time as possible. Your dreams get more vivid, sometimes even frightening. It's like someone shouting in your ear when you expect them to whisper, says Nielsen.
Things get worse if you're trying to go cold turkey on cigarettes and have a nicotine patch or two stuck to your tummy. The patch increases levels of dopamine, another neurotransmitter, which has been tentatively linked to increasing REM sleep. People who use patches often complain of nightmares. And if you're doped up on beta-blockers for your high blood pressure, that could cause trouble too. These drugs initially increase noradrenalin, suppressing REM sleep. But just as with booze, you may get a rebound effect and an onslaught of early-morning dreaming. Then there's the issue of just how you fall into bed after the season's festivities. "The way we sleep, the postures we assume, even the way we touch our spouse, that affects our dreams," says Nielsen. Hartmann says that his patients seem to complain more of nightmares when they sleep on their backs, although he doesn't know why this might be so. The Netherlanders of the Middle Ages, on the other hand, were apparently so convinced of the importance of posture that they slept in cupboard-like vertical spaces called "box beds" to stop the bad dreams triggered, they believed, by their diet rich in salted fish. Today the jury is out on whether food, including salted fish and Christmas binges, can really cause nightmares. But we could soon know the truth now that Nielsen's on the case. "It would be an interesting research project," he muses. "You could probably get a pizza company to sponsor you..." Savour the festive flavour * 24 December 2005 * Caroline Williams YOU either love them or hate them. Nothing divides opinion round the Christmas dinner table more than Brussels sprouts. If you ever have the urge to find out what gives sprouts their distinctive taste, rest assured that food scientists will be able to tell you. Besides identifying flavour molecules, researchers have spent decades coming up with all manner of methods and equations to explain the way food and drinks release their flavours, a vital part of how they taste. But for all their work, they are easily stumped. Ask the same food scientists to analyse the wine, sherry and brandy you might wash your dinner down with, and you'll have trouble getting an answer. For mysterious reasons, when it comes to explaining taste, booze just hasn't succumbed to analysis like other foodstuffs, liquid or otherwise. At last, however, improved techniques may be bringing us closer to understanding how alcoholic drinks tickle our taste buds. And along the way we may get answers to fundamental questions, such as: does the shape of a wine glass make a difference; does warming a brandy glass improve the flavour; and does it really matter whether a cocktail is shaken or stirred? Our perception of flavour is linked to the levels of aromatic compounds that are released when we eat or drink. Most of what we think of as taste actually comes from our sense of smell, courtesy of the olfactory receptors in the roof of the nose. So measuring aroma release is important in deciphering not only flavour but our whole experience of whether or not we like a food. It is vital for the "nose" of a wine or spirit, which makes the difference between a satisfying draught and a disappointing sniff. But until recently, no one had found a way to measure aroma release from alcoholic drinks in a realistic way. Aroma release is traditionally measured by placing a sample of a drink in a sealed container and then comparing the levels of aromatic compounds in the liquid versus the surrounding air.
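In code, that sealed-container idea amounts to a single multiplication. The numbers below are invented for illustration - real air/liquid partition coefficients vary hugely from compound to compound - but they capture the "locking-in" effect on hydrophobic aromas that the next paragraphs describe:

# Equilibrium headspace above a sealed drink: a toy calculation.
# K is the air/liquid partition coefficient of an aroma compound;
# the values here are invented, not measurements.
def headspace(c_liquid, K):
    return K * c_liquid          # aroma concentration in the air

c = 10.0                         # aroma in the drink, arbitrary units
K_water, K_wine = 0.02, 0.008    # assumed: alcohol holds aromas back
print("above water:", headspace(c, K_water))
print("above wine: ", headspace(c, K_wine))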
But flavour researcher Andy Taylor of the University of Nottingham, UK, thinks that this method is far too crude: while it gives an idea of what is going on before a bottle is uncorked, it bears no relation to our actual experience of a good wine or whisky. "You cannot explain the dynamics of flavour in a sealed container," says Taylor. What is needed instead is a technique that can measure aromas as they are released. So Taylor and his student Maroussa Tsachaki set out to mimic what happens in the glass by wafting air over alcoholic solutions and measuring the concentrations of flavour compounds coming off the drink. Using this technique, they are confident that they can sniff out some answers. Experiments on sealed samples have already revealed differences between how alcoholic and soft drinks release aromas. Aroma compounds tend to diffuse out of a water-based solution more readily than from a solution containing alcohol. Many aromatic chemicals are hydrophobic, so alcoholic solutions dissolve them more readily than plain water, "locking" the aromas in the solution. As a result, there is generally more aroma in the air above a water-based solution than an alcoholic one of the same concentration. But while sealed jars are all very well, is this the same as what happens in real life? When Taylor's team blew air over a water-based solution, they found that the release of aromas dropped off as the top layer of the solution became depleted. This is no surprise: after all, freshly made orange drink smells stronger than a glass that has been standing around all day. Taylor expected to see a similar drop-off with alcohol. But when the team repeated the experiment with an alcoholic solution, the numbers didn't seem to add up. For some reason, alcohol kept giving off aromas for far longer than water-based drinks. His team repeated the experiment three times to check the results. "Something about the alcohol keeps aroma levels in air high," he says. "I couldn't explain it. It's counter-intuitive." Understanding how alcohol keeps flavours ticking over could help improve the flavour of your favourite tipple, allowing manufacturers to engineer a great-tasting nose in an otherwise ordinary bottle of plonk. But as yet, no one knows what causes the effect. One idea is that it is down to spontaneous convection currents developing as the drink breathes. "In a solution of alcohol, the ethanol starts to evaporate when it comes into contact with air," says Taylor. He speculates that in solutions over 12 per cent alcohol, the rate of ethanol evaporation at the surface is enough to cool this layer, causing it to sink down the sides of the glass. This pushes warmer liquid at the bottom up to the surface, creating a self-stirring effect. It could explain Taylor's results. An alcoholic drink left to self-stir would carry on releasing aromas into the air, long after a comparable water-based solution had lost aroma compounds from the top layer. In water, the only way to encourage more aromas out of solution is to stir it. Another explanation is that the structure and behaviour of ethanol molecules help the aroma compounds diffuse through the solution to the surface and so escape. As yet, there is no evidence to confirm or refute either of these theories, but physical chemist Colin Bain from the University of Durham, UK, is putting his money on convection currents. "They are not a new concept in physical chemistry, but maybe flavour researchers have never thought about them before," he says.
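Whatever the mechanism turns out to be, the shape of Taylor's puzzle is easy to reproduce with a toy model. The two-compartment sketch below is my own illustration, not anyone's published model: aroma escapes only from a thin surface layer, a single "mixing" parameter stands in for the self-stirring convection, and all the rates are invented.

# Toy two-layer model: aroma leaves from the surface layer only;
# "mixing" crudely stands in for self-stirring convection currents.
def release_over_time(mixing, steps=300, k=0.05):
    c_surf, c_bulk = 1.0, 1.0               # aroma concentrations
    rates = []
    for _ in range(steps):
        out = k * c_surf                    # aroma lost to the air this step
        c_surf -= out
        flux = mixing * (c_bulk - c_surf)   # replenishment from below
        c_surf += flux
        c_bulk -= flux / 9.0                # bulk is nine times the volume
        rates.append(out)
    return rates

still = release_over_time(mixing=0.0)       # water: surface layer depletes
stirred = release_over_time(mixing=0.5)     # alcohol: kept topped up
print("release rate at step 100, still vs stirred:",
      round(still[100], 4), round(stirred[100], 4))

The unstirred liquid's release rate collapses as its surface layer empties, while the stirred one keeps giving off aroma long afterwards - qualitatively the behaviour Taylor saw with alcoholic solutions.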
You can see them in action by pouring a thin layer of cream on top of a liqueur such as Tia Maria. After a few seconds, the surface of the cream starts to break up into roiling convection cells (The Last Word, New Scientist, 25 January 2003). "If you cover the glass with a plate and turn off evaporation, the current stops," says Bain. In reality, he says, there are very likely two sets of circulating currents - one in the body of the liquid and another caused by surface tension pulling liquid up the wall of the glass. As the liquid climbs, alcohol evaporates out, which increases the surface tension and ensures more liquid is pulled up. At a certain height the solution starts flowing back down the sides in "fingers" or "tears". You can see this, Bain says, if you blow gently onto a glass of whisky. This increases the rate of evaporation and causes more fingers to flow down the sides. But before any of this research can lead to the perfect drink, there's plenty more work left to do. If convection currents really are driving flavour release, for example, the shape of the glass could be a vital part of the aroma experience. Warming and swilling the glass may affect the currents, speeding or slowing evaporation. And since all this happens before the glass has even touched our lips, the all-important next step will be to measure flavour compounds as the drink enters your mouth, throwing all manner of aromas up the back of the throat and into the nose. Now all Taylor has to do is find a graduate student dedicated enough to spend three years of their life drinking wine. Any volunteers? Scary spice * 23 December 2000 * Kathryn Brown DECK the halls, jingle the bells, and raise a holiday toast with a glass of egg-nog topped with its traditional dusting of hallucinogen. No, not the booze. Nutmeg. That fragrant spice in your kitchen cabinet has a hidden side - and, when consumed in high doses, a mind-bending bang powerful enough to melt your granny's muffins. How can this be? When chefs describe nutmeg, they use terms like "warm," "sweet" and "full-bodied". Physicians have their labels, too: "delusional", for instance, and "psychotic". Both camps are accurate. A sprinkle of nutmeg is bitter-cinnamon delight. But a few teaspoons of the stuff can poison you, because nutmeg contains compounds that carry an intense - and rather unpleasant - hallucinogenic high. "Nutmeg contains a volatile oil," explains David Seigler, a plant biologist at the University of Illinois in Urbana-Champaign. And that oil includes compounds such as myristicin and elemicin. In our bodies, some researchers suggest, these two compounds break down into MMDA and TMA, psychoactive substances that send a drug-like kick to the brain. People who swallow enough nutmeg - about 2 tablespoons of the ground spice, the equivalent of perhaps a single nutmeg nut - can suffer hallucinations, nausea and heart palpitations a few hours after indulging. That's quite a potent punch from what is, after all, only the seed of an evergreen tropical tree. Native to the East Indies, nutmeg trees now flourish in the Caribbean, Brazil, India and Sri Lanka as well. The outer flesh of the nutmeg fruit can be eaten as is, or preserved like candy. Inside sits the nutmeg seed - but not alone. A ruby-red membrane, called an aril, coils around the pit, and is the source of another seasonal spice: mace. The nutmeg tree has the distinction of yielding two quite separate spices.
Four centuries ago, the only nutmeg trees to be found fringed Run Island in the Banda Sea, in what is now eastern Indonesia. At the time, nutmeg was rumoured to cure various ills, including the plague that was then sweeping across Asia and Europe. Eager for control of this precious resource, the British and Dutch waged war. Even back then, thrill seekers knew nutmeg's secret kick, and more than a few on both sides reportedly grew addicted. According to one account, Charles Sackville, the sixth Earl of Dorset, regularly choked down spoonfuls of the spice, and was once imprisoned after an evening of nutmeg frenzy for "running up and down all night almost naked through the street". More recently, during the psychedelic 60s, sensation seekers turned to nutmeg as a cheap alternative to more conventional hallucinogens. One of Seigler's friends spent several days in the hospital recovering from a severe nutmeg experience. "He was at a boys' boarding school," Seigler says, "and they couldn't lay their hands on the standard drugs of the day." In prisons too, nutmeg was among the accessible routes to chemically altered reality. The black activist Malcolm X sampled the spice in a Boston jail. Jazz musician Charlie Parker reportedly partook as well, washing his down with cola. Despite its fragrance, nutmeg is no sweet high. Just ask emergency room physician Lance Becker of the University of Chicago. Becker vividly recalls an evening some eight years ago when a 23-year-old college student stumbled in wailing: "I'm going to die, I'm going to die." The student had smashed a nutmeg seed and swallowed about a quarter of it. By the time he got to the ER he was sweating profusely, with pounding heart and skyrocketing blood pressure. The hospital admitted him for the couple of days it took for his symptoms to subside. Nutmeg is called the "spice of madness" for good reason, says Becker. "This is not exactly a happy trip." Indeed, reports of nutmeg intoxication - including at least one death - continue to crop up sporadically in the medical literature. In 1998, researchers in Ireland and Norway published two case studies. Five years earlier, doctors at Gordon Hospital in central London described an unfortunate 25-year-old man who had reportedly gobbled down just half a gram of nutmeg and needed tranquillisers to calm his frazzled nervous system. Reports like these suggest that nutmeg is usually a one-time high, notes Becker. "I doubt there are many people who do this more than once," he says. Not for sheer pleasure, anyway. But nutmeg is also a popular ingredient in folk medicine. In China, people take it to calm an upset stomach or relieve rheumatism. South-east Asian villagers dine on rice cooked with nutmeg seeds as a remedy for dysentery, anorexia and colic. Elsewhere, hopefuls have swallowed nutmeg to boost libido. And in the early 1990s, researchers even found evidence that myristicin - found in parsley and carrots as well as nutmeg - inhibits lung tumours in lab mice. Most of us, though, restrict our use of nutmeg to the kitchen. If you have a taste for nutmeg, go ahead and sprinkle it on your holiday feast. Egg-nog, pumpkin pie and bread pudding, for instance, all beg for a little spicing up. "If you have a reasonable diet and take in normal amounts of nutmeg, the spice probably won't affect you at all," says Mark Kantor, a nutrition specialist at the University of Maryland in College Park. "Remember that almost anything can be toxic if you consume too much of it."
Do store your nutmeg far from the reach of curious children, though. "Kids will swallow anything," warns Becker. Perhaps that classic kitchen tome, The Joy of Cooking, says it best: "Use it sparingly - but often." What makes a turkey the right stuff? * 19 December 1998 * Rosie Mestel HERE'S something not to think about as you tuck into your holiday fare. A hulking turkey tom, too massive to enjoy a more natural experience, is having his abdomen expertly massaged. He soon gives up his prize: a small sample of milky-grey semen, which is added to a pool of similar offerings from other breeder toms and taken to the long row of turkey hens. Each hen is deftly upended. In goes the semen. Out, rather later, come eggs, which duly make chicks and eventually the scrumptious, sizzling slices you're about to douse in gravy and spear onto your fork. The wild turkey is a fleet-footed, slimline forest creature that passes its life gobbling (and also "yelping", "putting" and "purring", for those who thought "Gobble!" was all turkeys said), roosting in trees, grubbing for acorns and beetles, and occasionally attacking rural postal workers. Its black, brown and cream feathers afford it good camouflage. Its reproductive output is modest - just 12 eggs each spring (12 more, perhaps, if it loses the first clutch). A full-grown male weighs 10 kilograms at most. Ample, but hardly ostentatious. As for collecting semen: go ahead, just try it. Compare that with man's creation - the snow-white, 30-kilogram domestic turkey tom, heaving his ballast of snow-white breast meat before him as he waddles about as best he can. Or the domestic turkey hen - slimmer, yes, but able to lay a cool 120 eggs in a 27-week reproductive marathon. How did this strange bird come to be, and where is modern science taking it? The turkey's origins are shadowy. Centuries ago, Native Americans domesticated it. The Spaniards took it to Europe. Settlers brought it back to America again. Despite this convoluted history, it was not until the 1950s that the birds were bred for ivory feathers (Western consumers prefer a breast devoid of black, hairy, down remnants), and even more recently that serious breeding for bulk began. Unwitting selection There was a time when breeding focused on size and speedy development, and precious little on the growth of the bones that support all that heft, resulting in bow-legged birds that were barely able to walk. But Karl Nestor, a University of Wisconsin turkey geneticist with 38 years of breeding under his belt, showed it didn't have to be that way. Nestor selected not just for bulk but for wide, sturdy leg bones and walking ability, measured somewhat subjectively on a scale of 1 to 5. Professional breeders have done likewise, and today's turkeys, though hardly graceful, have a better gait than their forebears. Classical breeding requires that in each generation you select the creatures with the most desirable characteristics to produce the next generation. You don't need to know anything about the genes you're selecting, or the physiology that you're changing. So does anyone know why turkeys now grow so big? The best insights come from chickens, not turkeys, but the principles are probably the same.
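The logic of that kind of blind selection is simple enough to capture in a toy simulation. The sketch below is an illustration only - the heritability, flock size and culling fraction are all invented, and real breeding programmes juggle many traits at once - but it shows how repeatedly keeping the heaviest birds drags the flock average upwards via the standard breeder's equation, R = h2 x S:

import random, statistics

# Toy truncation selection: keep the heaviest 10 per cent each
# generation and breed from them. All parameters are invented.
def breed(generations=20, n=1000, h2=0.3, keep=0.1, mean0=10.0, sd=1.0):
    mean = mean0                          # e.g. tom weight in kilograms
    for _ in range(generations):
        flock = [random.gauss(mean, sd) for _ in range(n)]
        parents = sorted(flock)[-int(n * keep):]
        S = statistics.mean(parents) - statistics.mean(flock)
        mean += h2 * S                    # breeder's equation: R = h2 * S
    return mean

random.seed(1)
print(round(breed(), 1), "kg average after 20 generations")

With these made-up numbers the flock average climbs by roughly half a kilogram a generation, and nobody ever needs to know which genes are doing the work.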
Evolutionary biologist Jared Diamond of the University of California in Los Angeles and his colleague Sue Jackson were so interested in knowing how the broiler chicken developed that they spent long hours weighing and measuring guts and other body parts, and compared these measures with those of the chicken's ancestor, the svelte, fleet-of-wing wild jungle fowl. The brain, they found, is smaller in the broiler. It makes sense. You don't need to be astute and alert if you're cooped up in a pen all your life, and brains are energetically expensive. Legs are thinner and lighter in the broiler, too, which also makes sense. Sturdy legs aren't important if you're not doing much moving. Thus, without knowing what they were doing, breeders selecting for bulk chose animals in which energy was shunted away from "unimportant" body parts towards "important" ones - namely, the meat. And that's not all. Broilers consume lots more feed - there's a good chance that greediness has been inadvertently selected for. The guts have risen to this challenge. Gram for gram, the broiler's guts absorb nutrients such as glucose and amino acids less efficiently, not more. But the bird still absorbs food more effectively, as its small intestine is nearly three times more massive than that of the jungle fowl. Selecting for bulk, breeders unwittingly selected for big, fat digestive organs. Like chickens, domestic turkeys have smaller brains than their wild relatives; these are, after all, creatures that can drown themselves by staring up too long at the rain. And their guts are more massive, says James Croom of North Carolina State University in Raleigh. Still, turkeys aren't really like chickens: they haven't been intensively bred for as long, so they retain more of their wild habits. They herd. They gobble. And turkey hens, unlike commercial egg-laying chickens, go broody. "Broody" means that mothers become motherly, a real headache for breeders. It isn't merely that the hen will hiss and peck when anyone tries to take her eggs. Her ovaries can regress and she'll stop laying altogether. Small-scale farmers nipped broodiness in the bud by putting problem hens in rooms filled with rocks, or out in the cold - anything to help them forget the nest. Try paying that kind of attention to a modern-day flock of 25 000. Luckily, endocrinologist Mohamed El Halawani of the University of Minnesota in St Paul has come up with a novel tool: an anti-broodiness vaccine. Endless tricks His strategy is clever but logical. Mothering behaviour is promoted by the hormone prolactin, made in the pituitary gland when the turkey hen touches her eggs. Stop prolactin release, reasoned El Halawani, and mothering behaviour would be prevented too. He achieved this by injecting hens, a few weeks before laying, with another brain protein called vasoactive intestinal peptide, needed to trigger prolactin production. The birds make antibodies to this VIP, and these antibodies, when bound to the bird's own VIP, inactivate it. Thus, no prolactin - and no broodiness. The hens just keep laying and laying and laying. Efforts, meanwhile, are being made to ensure that every turkey tom who gives of his semen gets a fair crack at contributing to the next generation. The problem with pooling turkey ejaculate is that some toms, no matter how wondrous their traits, rarely fertilise an egg. Sperm from certain super-males wins out almost every time. Today, with the help of a turkey sperm motility test, breeders can tell just which bird's sperm does what.
Toms with super-fast sperm can be identified, and toms with wonderful traits but wimpy sperm can be given more exclusive treatment. There's no end to the tricks that scientists are trying. Breeding continues, for bulk and fecundity and also for such qualities as better feed conversion and disease resistance. Today's birds are less likely to make you sick, because they can be sprayed with harmless bacteria that take up residence in their gut, inhibiting growth of Salmonella, and helping to prevent food poisoning. And there are edible films that can be sprayed on the carcass to kill such nasty bugs. So Merry Christmas! Tuck in! As those in the business say, "May 1999 bring prosperity to you and the turkey industry." Undercooked turkeys can harbour superbugs * 17:11 21 December 2004 * Andy Coghlan An in-depth analysis of bacteria in US turkeys has revealed that high proportions of bacteria found in the birds are "superbugs", resistant to many of the antibiotics used on farms and to treat people. The study sampled over 1000 turkey carcasses from two undisclosed turkey-processing plants in the US Midwest. Of these, 94 birds "were found to contain strains of both Campylobacter and Salmonella", says Catherine Logue, head of the team at North Dakota State University in Fargo, US, which conducted the study. It is well established that commercial poultry - including turkeys - can contain bacteria that cause serious gastrointestinal upsets if it is not cooked properly. But this latest finding raises the possibility that antibiotic-resistant bacteria might find their way from turkeys into the human food chain, and possibly into hospitals. Each year, Campylobacter and Salmonella make 2 to 4 million US citizens ill, and could prove much more difficult to treat if they become resistant to clinical antibiotics, such as erythromycin, ciprofloxacin, gentamicin and tetracycline. Gene scavenging Of the Salmonella samples grown from the infected birds, many were resistant to several antibiotics - 88% of Salmonella samples from one plant were resistant to tetracycline, and 35% from the other. Around 45% of the samples from one plant were simultaneously resistant to four antibiotics. Logue says that resistance in Salmonella may be so abundant because 68% of the strains her team grew had genes for making class I integrase. This enzyme enables bacteria to scavenge "cassettes" of genes that confer resistance to antibiotics, either from the environment or from other bacteria. Of the Campylobacter samples, 58% from one processing plant were resistant to at least one antibiotic, while more than 10% of samples from the other plant were resistant to no fewer than eight antimicrobials. Although no Campylobacter had the class I integrase gene, more than a third had "efflux pump" genes which enable bacterial cells to survive by ejecting antibiotics. Faster fattening Logue says that the scale of the risks posed by resistant bacteria in turkeys is difficult to assess. In a previous study by her group published in 2003, she found that around 17% of processed birds were infected with Salmonella, while a parallel study found that 35% of birds carried Campylobacter. Antibiotics have been routinely given to turkeys to fatten them up faster and keep them healthy. But this practice pushes the bacteria to evolve resistance to the farmyard antibiotics, and also to related drugs used in human medicine. Europe banned a group of antibiotic growth promoters a decade ago to try to curb the rise of resistance.
The US Food and Drug Administration is worried too, and in March 2004 upheld a 2000 decision to stop farmers giving poultry enrofloxacin, an antibiotic similar to the medically important fluoroquinolones. Whatever the risk that resistant bacteria will spread from turkey farms to people to hospitals, Logue says that turkey is safe to eat provided it is thoroughly cooked. "Just make sure it's thoroughly defrosted, and that you cook it right through, all the way to the core," she says. Journal reference: Food Microbiology (vol 21, p779) Drug brings relief to big spenders * 12 November 1994 * ROSIE MESTEL IN the weeks before Christmas, millions of people around the world will be caught up in an orgy of spending. For most of us, the madness is only seasonal, but for some unfortunates, the obsessive urge to shop lasts all year long. Now, say American researchers, there may be drugs that can cure compulsive shoppers of their incessant need to spend. Donald Black, a psychiatrist at the University of Iowa College of Medicine, and Susan McElroy, a psychiatrist at the University of Cincinnati, have both conducted pilot studies with compulsive shoppers - people who cannot stop shopping, even though they know that their behaviour is causing serious problems. Such people routinely spend the bulk of their pay cheque on personal items, and spend hours each day planning their next trip to the shops. They are often in debt for thousands of dollars, frequently write cheques that bounce, and exceed credit limits on multiple credit cards. They may even be forced into bankruptcy. "People sort of joke about it, but the problem is really extremely disruptive," says McElroy. Compulsive shopping is probably closest in nature to a series of psychiatric complaints known as impulse control disorders, which include uncontrollable urges to light fires, steal or pull out one's hair. But it also resembles obsessive compulsive disorder (OCD), a strange complaint that causes sufferers endlessly to repeat pointless tasks like washing their hands, or to hoard obsessively. The similarity to OCD led Black to test the drug fluvoxamine on compulsive shoppers. Fluvoxamine is already being used to treat people with depression in Britain, and is awaiting approval by the Food and Drug Administration for treatment of OCD in the US. In Black's study, patients take the drug for eight weeks, and the effect on their shopping urges is monitored. Then the patients are taken off the drug and watched for another month. In the seven patients examined so far, the results are clear and dramatic, says Black: the urge to shop and the time spent shopping decrease markedly. When the patient stops taking the drug, however, the symptoms slowly return. "The results are exceptional," says Black. "I think the drug has true promise." Fluvoxamine is not the only drug that could help out-of-control shoppers.
In a study of the medical histories of 20 compulsive shoppers, McElroy and her colleagues found that antidepressant drugs such as fluoxetine (Prozac) and sertraline (Zoloft), used to treat both depression and OCD, seem to help compulsive shoppers. The drugs in McElroy's study group were originally taken for depression, which often afflicts compulsive shoppers. But 9 out of 13 shoppers reported that their urge to buy also diminished while on medication. Both Black and McElroy point out that their findings are only preliminary, and that larger trials are needed before they can draw any firm conclusions. They are both planning such studies. And even though some estimates suggest that between 1 and 6 per cent of the US population may have shopping problems, that does not mean that 15 million Americans should be dosed with drugs to cut down on their spending sprees, says Black. "I think that the number of people needing medication would be tiny," he says. Christmas trees provide pollution solution * 24 December 1994 * David Bradley FOR many, pine needles under the Christmas tree are the curse of the holiday season. But Swedish chemists have turned a yuletide nuisance into an all-year-round blessing. Henrik Kylin and his colleagues at Stockholm University have found that pine needles can be used as maintenance-free sensors of pollutants in the environment, such as chlorine-containing pesticides and polychlorinated biphenyls (PCBs). The chemists hope to use pine needles to draw up a pollution map for Europe. The idea of using pine needles to detect pollution isn't new. It was proposed in New Scientist in 1966, the year the magazine first reported that PCBs were a potential environmental hazard. However, before the technique could become viable, analytical chemistry had to catch up. The protective waxy surface of many leaves absorbs traces of organic, fat-soluble compounds from the air. According to Kylin, finding pollutants on such leaves in trees in remote areas shows that organic compounds can travel through the air over large distances (Airborne Lipophilic Compounds in Pine Needles, Stockholm University). The team began drawing its pollution map by collecting pine needles from Scots pine (Pinus sylvestris) in western and northern Europe. The researchers took samples from trees that were at least 20 kilometres from any city or industrial area and at least 2 kilometres from any road. Needles normally grow for three years and Kylin's team divided them into classes by year. There are several techniques for detecting PCBs in needles but impurities tend to obscure the results. To overcome this problem the team dissolved the needles' wax coating and carried out a rough separation to produce a solution containing all the wax compounds. They cleaned up the mixture using high-performance liquid chromatography (HPLC). They pumped the solution through a cylinder packed with a material that attracts the impurities but lets the PCB molecules through. By carefully choosing this material the researchers removed the organic impurities. Next, Kylin and his colleagues used gas chromatography to measure the precise concentrations of PCBs. They heated the sample and separated different compounds in a similar way to HPLC. From the position and size of each peak on a "chromatogram" they could work out the identity and quantity of each PCB. Kylin and his colleagues say this is a simple and effective way of mapping airborne PCBs and identifying very polluted areas in Europe.
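That last step - reading identities and quantities off a chromatogram - is essentially peak-finding plus integration, and a minimal version is easy to sketch. The code below is my illustration on synthetic data, not the Stockholm group's analysis: real work calibrates peak areas against known standards, and the retention times and compound names here are invented.

import numpy as np
from scipy.signal import find_peaks

np.random.seed(0)
t = np.linspace(0, 10, 2000)                     # retention time, minutes
trace = (1.2 * np.exp(-((t - 3.1) / 0.05) ** 2)  # two synthetic peaks
         + 0.6 * np.exp(-((t - 6.4) / 0.05) ** 2)
         + 0.02 * np.random.rand(t.size))        # baseline noise

peaks, _ = find_peaks(trace, height=0.1, distance=50)
library = {3.1: "PCB-153", 6.4: "PCB-138"}       # invented retention times
for p in peaks:
    rt = t[p]
    window = (t > rt - 0.2) & (t < rt + 0.2)     # crude integration window
    area = np.trapz(trace[window], t[window])
    name = library[min(library, key=lambda k: abs(k - rt))]
    print(name, "eluted at", round(rt, 2), "min, peak area", round(area, 3))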
However, such a map will not provide absolute concentrations of pollutants in the air at any one time. Instead, it will show deposition over a length of time. How to maximise your Christmas presents * 25 December 2004 * Emma Young IT'S the countdown to Christmas, and if your tree isn't already surrounded by a stack of bumper-sized gifts bearing your name, now is the time to take action. But to ensure you get the best ever present this year, first you'll need some help working out who to target with your charm - and your wish-list. The right choice isn't always as obvious as it might seem. Grandparents are usually a good bet, but the richest of them won't necessarily be the most forthcoming because there's a fair chance they got that way thanks to their Scrooge-like qualities. So if personal wealth isn't the ideal indicator of gift-giving potential, what is? The key, it turns out, is to identify the grandparent who has the most certain, and most exclusive, genetic link to you. And before you say, "well, surely I am equally related to all my grandparents", remember this chilling statistic: an estimated 10 to 15 per cent of children are not fathered by the man whose name appears on their birth certificate. All of which goes to explain some intriguing findings due for publication early next year. A team led by Bill von Hippel at the University of New South Wales in Sydney, Australia, discovered variations in emotional closeness between grandchildren and grandparents, which, they argue, have a biological rather than a social explanation. Maternal grandmothers emerged as having the closest relationship with their grandchildren, followed by maternal grandfathers, then paternal grandmothers and finally paternal grandfathers. The uncertainty principle "A woman always knows that a child is her own, but a man has some uncertainty about his paternity, and for grandparents the issue is compounded," says von Hippel. The pattern that emerged reflects the degree of uncertainty involved. But even the researchers were surprised that this showed up so clearly in the study. "There are so many reasons to feel close or not close to a grandparent - like health, distance, cultural differences, personality, and relationships between the parent and the grandparent," adds von Hippel. On the basis of this study, at least, your maternal grandmother looks like the softest touch for that Christmas gift. And other research supports this. In a study accepted for publication by the journal Human Nature, Todd DeKay of Albright College in Reading, Pennsylvania, and Rick Michalski of Hollins University in Virginia surveyed more than 200 grandparents living in retirement communities in south Florida. "We found that, yes, grandmothers invest more in grandchildren through daughters than through sons," Michalski says. In this particular instance, they rated investment according to emotional closeness, the time spent per week with a grandchild, and the money spent on the child every month. But earlier work by DeKay is even more telling: he found that maternal grandmothers do indeed give bigger presents to grandchildren than do paternal grandfathers. Case closed? Well, not quite. Before you direct all your attentions at your maternal grandmother, you should probably consider your individual circumstances. Von Hippel's work shows that if your paternal grandma has no grandchildren through daughters, she is likely to feel as predisposed to you as is your maternal grandfather.
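If you want to see where that ordering comes from, a back-of-envelope calculation is enough. The sketch below is my own arithmetic, not the researchers' model: it simply discounts the usual 25 per cent grandparent-grandchild relatedness by an assumed paternity uncertainty p for every father-child link in the chain.

# Expected relatedness to each grandparent, discounting each
# father-child link by an assumed paternity uncertainty p.
p = 0.12                 # assumed, mid-range of the 10-15 per cent above
links = {"maternal grandmother": 0,   # father-child links in the chain
         "maternal grandfather": 1,
         "paternal grandmother": 1,
         "paternal grandfather": 2}
for grandparent, k in links.items():
    print(grandparent, round(0.25 * (1 - p) ** k, 3))

The result - 0.25, 0.22, 0.22 and about 0.19 - reproduces the study's ranking, with the middle two tied; putting maternal grandfathers above paternal grandmothers evidently takes something beyond simple uncertainty.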
OK, so that's not up there with maternal granny, but if you have lots of cousins on your mother's side of the family and few on your father's, his mother is likely to give more to you. "Grandparents must spread their time and resources across grandchildren," says von Hippel. So your paternal granny might be the best target if she happens to be joining you for Christmas and so perhaps feels more obliged to bring an impressive gift. Of course, circumstances may dictate that granny is no go. You may, for example, have upset her by pointing out some of this research and demanding a Lamborghini as your genetic right. If so, you may have no choice but to butter up a grandpa. But remember, your father's father will be the toughest challenge. In this endeavour, you might think it wise to stress your similarities. If your grandfather is nasally well-endowed and a gifted artist, now could be the time to make the most of your own large nose and propensity to doodle. But don't count on it, because the scientific results are not clear. "There is an earlier study that found an effect of similarity, and we predicted that paternal grandfathers would bias investment towards more similar grandchildren," says Michalski. "In fact we found that maternal grandmothers bias their investment most based on similarity. That was a quirk that we didn't expect to find." You may be thinking, these days there is no need to suffer the effects of doubt over paternity. But arranging DNA tests for your father, grandfather and yourself is perhaps going a little too far. Besides, it is a very risky strategy. David Bishai at Johns Hopkins University in Baltimore, Maryland, is investigating what happens to grandparent-grandchild relationships following paternity tests ordered by Maryland courts. The results of the study will not be in for a few months, but it is already clear that when paternity is disproved, the non-biological father/child relationship can become very sticky. Perhaps best then to stick to grandmas unless absolutely necessary. In which case, your main problem will be deciding what sort of present to try for. Here, you will do best to consider the evolutionary significance of post-menopausal women. Luckily, Virpi Lummaa and Mirkka Lahdenperä of the University of Sheffield, UK, can help. In a paper published in March, they describe a study of almost 3000 women living in Finland and Canada in the 18th and 19th centuries, showing that the longer a woman lived after the end of her reproductive years, the more successfully her children reproduced. On average, women gained two extra grandchildren for every 10 years of life after menopause (Nature, vol 428, p 178). Speculate to accumulate So, women live far beyond menopause to offer child-care help and support in a bid to forward more of their genes to the next generation. But of course granny's desire for genetic propagation does not stop with you. What she really wants is to become a great-grandmother, as many times as possible. Which means that to secure the best possible Christmas gift, you should explore the frontier where her fundamental desires coincide with yours. Von Hippel offers the following advice, with a disclaimer that it's not strictly based on scientific findings: "The next step would be to emphasise the gift that will increase your own chances of having lots of offspring," he says. "Gifts that help you attract or keep the ideal mate, or gifts that help you raise a lot of kids, ought to be the best bet."
Lummaa agrees: "At least that sort of present should please the granny most," she says. A new computer probably won't pass muster unless you are single and you really can convince granny of your intention to join an internet dating service. But she just might be willing to splash out on expensive perfume, designer clothes or - if she's astonishingly wealthy - even that Lamborghini. As the research shows, all gran really needs is the right encouragement. Ask for a flashy gift that will impress the opposite sex and you can tell yourself you are not being selfish this Christmas - you're just helping your nearest and dearest to get her own heart's desire. E-gift vouchers: Whose money is it anyway? * 24 December 2005 * Dana Mackenzie WHAT a lovely gift. Not for you, of course - you don't really want to venture inside the local Acme tattoo and piercing parlour, let alone make use of its services. But because you are never going to use it, Uncle Derek's desperate last-minute purchase, the prepaid electronic gift card entitling you to an Acme shoulder snake or tongue stud, is a great gift to the store: free money. Gift vouchers - the low-tech, paper version of the gift card - have always been a boon to retailers. We mutter thanks to whoever bought them, surreptitiously try to find someone willing to swap them for cash and, having failed, stick them in the back of our wallet, from where - all too often - they never emerge. If they are issued with an expiry date, so much the better for the retailer: then they know exactly when your grandmother's money is theirs to keep. Somehow, the disappearance of this cash into retailers' pockets seems innocuous, our own fault. It's practically a holiday tradition. But with the growth of electronic gift cards, the cash is disappearing faster than ever - and creating a legal headache. Imagine presenting your paper voucher to a retailer and being told the face value has been depleted by a "service charge". Or that the voucher is now worthless because it has a hidden, electronically encoded expiry date that Grandma Rose forgot to tell you about. Raise a glass to progress: walking away with your money is so much easier when the cash is digital. Once a sort of holiday afterthought, pre-paid electronic gift cards are now everywhere. According to TowerGroup, a Massachusetts consulting firm, American consumers are expected to spend $55 billion on gift cards this year. On the European side of the Atlantic, gift cards have been slower to get started, but they are on their way. One of the first to adopt them in the UK was the department store Harrods, which introduced the cards in November 2004. Already, Harrods sells twice as many gift cards as traditional paper vouchers. "People seem to spend more money on the cards than on the vouchers," says Harrods spokeswoman Valentine Labriffe. There is good reason for that. A card fits nicely in your wallet and doesn't get torn or bent. It is a more tasteful present than cash, and somehow more substantial than paper vouchers. Like postage stamps, gift cards with attractive designs or portraits of your favourite singer are becoming collectables. Gift cards offer benefits for merchants too. For a start, they can levy a service charge. In the US, some card issuers are charging fees on cards that go unused for several months. This practice already has consumers and their advocates crying foul. 
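To see how quickly such fees can hollow out a card, a toy calculation helps. The fee size and trigger below are invented for illustration - real schedules vary by issuer - but the mechanics are the same:

# Toy erosion of a gift-card balance under a monthly inactivity fee.
# The $2.50 fee and 12-month trigger are invented, not any issuer's terms.
def balance_after(months, start=50.0, fee=2.50, trigger=12):
    balance = start
    for m in range(1, months + 1):
        if m > trigger and balance > 0:
            balance = max(0.0, balance - fee)   # fee cannot go below zero
    return balance

for months in (12, 18, 24, 36):
    print(months, "months:", "$%.2f" % balance_after(months))

On those made-up terms, a $50 card untouched for three years is worth nothing at all - which is exactly the sort of arithmetic that has consumers crying foul.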
"When I talk with people about it, they say, 'This is stealing!'" says Dan Horne, a professor of marketing and "gift-card guru" at Providence College in Rhode Island. "People are very surprised to find out that the issuers can get away with it." And then there's the not-so-obvious expiry date. While paper vouchers with an expiry date have it printed or written on them, plastic gift cards don't. It's usually in the terms and conditions, which may only be available on the seller's website. Up to 10 per cent of the funds on gift cards may vanish into the store's coffers. "It depends on the category of merchant," Horne says. "For Macy's, it's less than 5 per cent, for a grocery store it might be 2 per cent. For a specialty store, a tattoo studio for instance, it may be 10 per cent." Whatever the percentage, this "breakage" amounts to free money for the retailer. Consumer complaints have led several US states to ban or restrict expiry dates and service fees. The first such law was passed in California, which banned expiry dates on gift vouchers in 1996, when cards were still just a blip in the market. The state then banned service fees on gift cards in 2004, with one very limited exception: cards that have been inactive for 24 months, with a balance of $5 or less, may be assessed a $1 monthly fee. But there are still huge gaps in the laws concerning fees and expiry dates. Gail Hillebrand, an attorney for Consumers Union in California, says that 32 states still have no legislation at all. The federal government has shown very little inclination to act. In 2004, New York congressman Chuck Schumer introduced a bill in the House of Representatives modelled after California's law, but it went nowhere. Some companies are making a pre-emptive strike against the bad publicity gift cards have already stirred up. Sears, for instance, eliminated expiry dates on its gift cards in December 2003. In the UK, the gift card phenomenon is about seven years behind the growth curve in the US. The UK's Office of Fair Trading says that although it has so far received few complaints about unfair charges or unexpected expiry, there are no regulations over service fees and expiry dates yet. The WH Smith gift card, for example, does not have an expiry date, but the Harrods card does, although it only expires after two years with no activity (even a balance query counts as activity), and Harrods will replace expired cards anyway. The only way you can lose your money, as with cash or paper vouchers, is to lose the card. Not everyone is so accommodating, though. If a store won't take your gift card, Horne says the best idea is to "stomp your feet, kick and scream" - eventually the manager will probably issue you a new one. In the end, he says, consumer opinion and experience will be the decisive factor for the gift card market. "If I get a gift card this year and like it, then I will give it to two people next year. Then each of them will give a card to two more people. That's the way the business grew in the US, and the way it will grow in Europe." Of course, if the growth of gift cards is unstoppable, you'll need a strategy to cope with unwanted cards, just as you did with unwanted paper vouchers. Fortunately, the digital era makes this easier. Where you once might have had to barter unwanted vouchers with family or friends, you can now auction them to the whole online community. 
Online auction house eBay lists a couple of thousand gift cards for sale at any given time, and you can expect to find even more after the holiday. Don't expect to sell your card for face value, though. For a better deal, you might want to try cardavenue.com or swapagift.com, where, for a small listing fee, you can trade cards with other unhappy gift recipients. It's worth a try; there's got to be someone out there who'd just love an Acme tongue stud.

Bah Humbug * 23 December 2000 * Sidney "Scrooge" Perkowitz

ONLY a Scrooge could resist the cheer of Christmas. But let's face it, the holiday does have its darker side. Every year you have to buy gifts for people that you rarely see, or don't even like: gifts for relatives like Aunt Tilda, in return for the hideous vase she gave you last year, and those you swap with bosses, co-workers or business acquaintances, for whom you feel little kinship at best, and a touch of resentment at worst. Unfortunately, you tend not to get away with giving out lumps of coal-not even to naughty nephews. So how can you ignore the Christmas spirit without anyone noticing? What you need are gifts that carry a secret Scroogian undertone, like an uncharitable thought concealed behind a socially acceptable smile. Fortunately, today's technology provides the answer: an array of sleek gifts that radiate glossy desirability, yet also give you the means to express that hidden message. In fact many gadgets make fine Trojan horses-the perfect present to give when your heart isn't really in it. To choose the right, slightly malevolent, tech-based gift, do as you would for any gift: consider the recipient. Is he or she a cutting-edge type who can't live without owning the latest and the fastest gadget? Then your path is clear. Give him or her a piece of gear that is just slightly out of date: last month's computer with a processor that runs at only 700 megahertz rather than the 1-gigahertz chip of this month's machine, an obsolete personal organiser too bulky to slip into a pocket, or a movie on videotape rather than DVD. Then watch the recipient gnash their teeth at the realisation that the gift confers no bragging rights whatsoever. Alternatively, your giftee might be someone who can barely manage to set the correct time on a digital watch. For the technophobe, choose something overly complex, a cellphone so packed with features that its buttons have multiple functions, say. This makes it nearly impossible to select the right option, especially when in a hurry. And if the keys are tiny as well, the frustration level can reach fever pitch. Similar amusing effects can be attained with some home stereos and car radios, such as the one in a certain top-of-the-range sedan, where operating the radio is akin to piloting a jumbo jet. There are other possibilities for every type of person. Take someone with a simple, frugal lifestyle, perhaps out of concern for the Earth's resources and the environment. Nothing galls such an idealist more than a device that uses ridiculously sophisticated technology to complicate what was once straightforward, or fills a need generated only by other technology, such as a waterproof case for when you take your cellphone boating. Even better, how about a device that works exceedingly well but does something really trivial, such as an analogue wristwatch with a built-in laser light show? Under the general heading of "Things nobody really needs", a good stocking-filler is a digital pressure gauge for car tyres.
Instead of an old-fashioned, simple, reliable mechanical gauge, clearly marked in pressure units and needing no power source, the "improved" version is battery-driven, must be calibrated before each use, and presents tyre pressure on a tiny, barely readable LCD. Another pointless gift is a motorised tie-rack that holds up to 80 ties and slowly rotates them into view, as if it took incredible energy and determination to riffle through one's ties by hand. Even when business wear was comparatively formal, how many men owned this many ties? This gadget has the weird distinction of being simultaneously over-teched, under-useful, and outdated-a nice violation of the principles of simple living that is sure to raise hackles. Possibly the champion cool-but-not-that-useful gift, however, is a hand-held global positioning system (GPS) sensor to tell you where you are on the Earth's surface to within a few metres. Unless you are a sailor blown off course or are in a mountain rescue team, knowing your exact latitude and longitude is of limited use. To fill this vacuum, GPS addicts have even invented their own sport: geocaching. It consists of placing something of low value-like a can of beans-in the middle of nowhere, recording its exact coordinates via GPS, and posting these on the Internet so other intrepid GPS-bearing explorers can hike for hours to find the treasure. A GPS sensor is well suited to a tech-oriented business rival, for whom time spent on this pointless exercise might well flatten his or her career trajectory. Time is also a major issue for any recipient who-like many of us-juggles a multitude of personal and career responsibilities. Why not get him or her a radio-controlled clock that uses radio signals from a centralised atomic timekeeper to constantly update the display with nanosecond accuracy? Such a clock allows absolutely no leeway for lateness and can only add to the stress of a busy life. (If you're feeling mischievous, buy an analogue version and, just before you wrap it, loosen the nut that locks the hands in place. Then move the minute hand back by five minutes and retighten the nut. Now the clock will faithfully keep the wrong time, no matter how often the infuriated owner resets the hands.) If you don't know your recipient that well, you can always rely on food and drink. These offer fruitful possibilities for gifts that deliver a subtly not-so-festive greeting, from digital toasters and fuzzy-logic rice cookers to intricate designs for fool-proof wine-cork pullers that pinch your fingers every time you use them. There are, heaven help us, even items just right for the child unlucky enough to qualify for the traditional lump of coal. Your annoying nieces and nephews will be delighted to play the latest computer games machine. While they are transfixed by incredibly realistic displays and hand-held controllers that rival the cockpit of a jet fighter, you enjoy the peace and quiet, safe in the knowledge that only the sturdiest child will avoid full visual and muscular lock-up. Of course, this all goes to show how far we have come since Scrooge. A lump of coal is mostly carbon, and carbon is chemically similar to the silicon used to make electronic chips. Nowadays, instead of giving a dirty chunk of coal to make a point, we can achieve the same effect with a tiny, ultra-clean but sneakily subversive piece of silicon.
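A footnote for anyone actually given that GPS unit: the one genuinely useful sum it performs is great-circle distance. Below is a minimal sketch of the standard haversine formula; the coordinates of you and the hypothetical can of beans are invented for illustration.

    # Great-circle distance between two GPS fixes via the haversine formula.
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Distance in kilometres along the Earth's surface."""
        r = 6371.0  # mean Earth radius, km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # You, and a can of beans in the middle of nowhere (coordinates invented):
    print("%.1f km to the cache" % haversine_km(51.5007, -0.1246, 51.6100, -0.3000))

Roughly 17 kilometres of hiking for a can of beans.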
Review: Dickens of a book * 29 June 1996

In Scrooge's Cryptic Carol: Three Visions of Energy, Time and Quantum Reality by Robert Gilmore (Sigma Press, £9.95, ISBN 1 85058 531 8), Scrooge is visited by spirits who take him, willy-nilly, through past, present and weird physics (future not being available). He is a changed and better educated man at the end of it. So should you, the reader, be, as long as you can swallow the Christmas Carol conceit. The spirits are relentlessly didactic, carrying on as if they have a phantom script. The light-hearted drawings help.

Europe's last wild reindeer herds in peril * 11:04 19 December 2003 * Andy Coghlan

Europe's last remaining population of wild reindeer is in peril. Its survival is being threatened by the building of dams, mountain cabins and hydroelectric schemes across its natural habitat in southern Norway. Conservationists warn that human activity in wilderness areas is growing so rapidly that both wild and farmed reindeer, or caribou, may one day suffer a similar fate in their strongholds across the Arctic tundra and taiga. Christian Nellemann of the United Nations Environment Programme in Arendal, Norway, and colleagues have documented the steep decline of the Norwegian reindeer. To flee human construction projects, animals crowd into ever smaller areas, with ever scarcer supplies of the lichen on which they feed (Biological Conservation, vol 113, p 307). "The situation in Norway is quite critical," says Nellemann. "They've lost 50 per cent of their habitat in 50 years." Monitoring the reindeer before, during and after a decade of major infrastructure projects between 1977 and 1987, Nellemann's team found that reindeer retreat dramatically from anywhere that lies within four kilometres of new roads, power lines, dams or cabins.

Plummeting density

Summer population densities in these zones now fall to 36 per cent of what they were before the building projects. Instead, herds crowd into remoter areas, where densities have increased by 217 per cent. In winter the effect is even more extreme - reindeer move away from developed areas in such numbers that herd density there falls to just eight per cent of its natural level. Nellemann says that human-built obstacles such as roads, power lines, reservoirs and dams serve as frontiers that reindeer herds are reluctant to cross. So many exist that the 30,000 remaining animals - down from 60,000 in the 1960s - are fragmented into 24 isolated groups. At this rate of decline, Nellemann says, there will be room for just 15,000 animals by 2020.

Breeding collapse

Overcrowding and fragmentation have led directly to overgrazing and a collapse in breeding rates. "In some of the worst-hit areas, only one in three reindeer females is having a live calf," he says. This compares with the expected calving rate of between 80 and 90 per cent. And the problem is likely to spread. "By 2050, the UNEP expects 70 to 80 per cent of the Arctic to be developed with infrastructure, so Greenland, Canadian, American and Russian reindeer will all be threatened," says Nellemann. "It's the last true wilderness, and food and resources are concentrated in very small areas, so unbridled development can have a huge impact on reindeer survival." One solution in Norway is to extend national parks to reopen vital migration routes severed by infrastructure projects. But first the Norwegian government needs to stymie further development, such as the unregulated building of cabins, says Nellemann.
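A quick plausibility check on those figures, offered as a hedged aside. If one assumes (my assumption, purely for illustration) a constant exponential decline between a 1965 count of 60,000 and the 2003 count of 30,000:

    # Back-of-the-envelope check on the reindeer numbers quoted above.
    import math

    n0, n1 = 60_000, 30_000
    t0, t1 = 1965, 2003                # assumed dates for the two counts
    k = math.log(n0 / n1) / (t1 - t0)  # decline rate: about 1.8% per year

    n_2020 = n1 * math.exp(-k * (2020 - t1))
    print("Constant-rate projection for 2020: %.0f animals" % n_2020)

A constant rate lands nearer 22,000 than Nellemann's 15,000 for 2020, so his projection implies the decline is accelerating - consistent with his point that it is driven by ongoing habitat fragmentation rather than a steady trend.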
Related Articles
* Sea birds drop radioactivity on land * http://www.newscientist.com/article.ns?id=dn3220 * 4 January 2003
* Viagra gives wildlife a boost * http://www.newscientist.com/article.ns?id=dn2972 * 25 October 2002
* Caribou census may argue against Alaskan oil drilling * http://www.newscientist.com/article.ns?id=dn681 * 30 April 2001

Weblinks
* UNEP, Arendal * http://www.grida.no/
* Reindeer, Arctic Studies Center * http://www.mnh.si.edu/arctic/html/caribou_reindeer.html
* Biological Conservation * http://www.sciencedirect.com/science/journal/00063207

In search of Schrodinger's reindeer * 23 December 1989 * MATTHEW DAVIES and MARTIN SLAUGHTER

WITH the festive season upon us, many scientific minds will yet again be attempting to solve that perennial chestnut, the Travelling Santa Problem (or TSP). This problem was first brought to our attention by the child prodigy, Vernon P. Templeman, in his seminal paper 'Please may I have a bike for Christmas, Daddy' (J. Appl. Window Shopping, December 1988, vol 7, pp 1-122). In simple terms, the problem boils down to one of speed. How can Father Christmas visit the homes of all the children in the world in a single night, albeit one 24 hours long? Templeman demonstrated that the classical (sequential) explanation forces us to invoke faster-than-light travel, which is somewhat at odds with current thinking. Thus, he argued, we should infer that the Father Christmas effect does not really exist. This contentious hypothesis was the subject of much debate at a recent symposium held at the Santa Fe Institute for Present Research. Our initial thoughts were that Templeman had over-estimated the size of the problem, forgetting that Santa only visits good children. This would reduce the number of visits by a factor of order 10**9. However, a simple back-of-the-lab-coat calculation shows that this renders the problem no more tractable. This threw suspicion on the use of classical physics. At this stage, the teachings of our old mentor, Erwin Schrodinger, came back to us ('Famous people what we claim to have known, honest', by Matthew Davies and Martin Slaughter, Annals of Physics, 1983, vol 12, pp 379-381). From a detailed study of reported phenomena, it became apparent that Santa shared many of the characteristics of elementary particles, suggesting a quantum mechanical interpretation of his behaviour. We have since developed this theory, and are confident that a quantum mechanical model of Santa Claus allows many of his observed properties to be explained, and several interesting predictions to be made. Clearly, viewing Santa as a waveform removes the apparent paradox of his 'presence' being measured in several locations within a short interval of time. As the waveform collapses down in a specific location (attracted, we suggest, by the Goodness Quantum number of the recumbent child) it becomes perfectly valid to state that a 'visitation' has occurred. However, our calculations suggest that the process of measurement (for example, turning on the bedroom light) will almost certainly lead to a localised, space-time instability which, in turn, will cause the waveform to relax and render detection almost impossible. Once again, this ties in with the experimental evidence that Father Christmas is rarely caught delivering. Indeed, on those few occasions when a sighting has been claimed in the literature ('Mummy, mummy, there's a strange man in my bedroom', by S. T. U.
Peedo, Journal of Sleepless Nights, 1979, vol 5, p 35), closer scrutiny has often revealed it to be an imposter wearing a red cloak and beard. Moreover, the quantum mechanical model predicts that the energies involved in a waveform collapse will result in the emission of a jet of sub-atomic particles. Studies of bedroom carpets in the vicinity of alleged sightings, using an X-mass spectrometer, have often revealed evidence of mince pion activity; though these have usually been Hoovered up. One of the most appealing aspects of our theory is the manner in which it allows the most likely sites for visitation to be estimated. These may be identified from the first derivative of the expectation value as:

    [d(Spot)/d(Fireplace)] evaluated at night

It turns out that the distribution of household chimneys is exactly that required to act as a diffraction grating for objects of Santa's predicted wavelengths, focusing the zeroth order onto the bedroom floor below ('Chimchimmeny, chimchinny, chimchin cheroo', by Bert, Mar. Popp. 1969). Yet another prediction which agrees with commonly reported observations concerns the Christmas Stocking effect. Within the general theory, the stocking would be expected to act as an infinite potential well, momentarily capturing the Santa waveform. The resonance within the stocking is predicted to transfer energy from any batteries within the well (causing them to run out by Boxing Day) before collapsing back down to a new ground state characterised by a tangerine in the toe. Apart from the successes reported above, the theory makes a number of predictions about rather low-probability events; that is, events expected to occur in fewer than one hundred homes in the world each year (for example, a full night's sleep for parents of under-8s; no clothes given as presents; fairylights still working from last year). In order to collect the huge volume of data needed to assess these rare events, we have decided to appeal to the scientific community for help. Well as the few available observations fit the theory, a detailed experiment to provide quantitative support is now necessary. This will require a vast amount of data to be collected, with observations from as many global locations as possible. New Scientist's readers are, therefore, asked to maintain a Yule log of the events in their domestic laboratories and to send their results to the authors via the magazine. Participants are requested to make a note of the following:
(1) Their children's Goodness Quantum number;
(2) The approximate dimensions of their bedroom;
(3) Whether Santa visits and, if so, at what time;
(4) Their address and galactic 4-space coordinates (or postcode);
(5) Any evidence of Charm or Strangeness;
(6) Whether Santa is seen to be spinning (needed to check the 'No L' theory);
(7) The number of presents left;
(8) The colour of his reindeer's nose (often quoted as red when seen moving away at speed, but unknown in its rest frame).
On a note of caution, participants are urged not to try to localise Santa, as the Δp·Δx ~ h relationship suggests that the energies involved could demolish a timber-frame building. At a time when Europe is leading the world in fundamental physics research, we hope that this knotty problem can be resolved with this experiment. The Americans are not far behind, with Senate approval for the $12 trillion Turkey/Anti-Turkey Synchronous Santatron. Let us make sure we cook their goose before they foil our efforts.
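In the same spirit, the intractability that gives the Travelling Santa Problem its bite is easy to demonstrate. A toy sketch - home coordinates invented, and nothing to do with the authors' quantum treatment - of why brute force fails: a round trip over n homes has (n-1)! possible orderings.

    # Brute-force Travelling Santa over 9 invented homes: every route is
    # tried, so the cost grows factorially with the number of homes.
    import itertools, math, random

    random.seed(25)
    homes = [(random.random(), random.random()) for _ in range(9)]

    def tour_length(order):
        """Length of a round trip starting and ending at home 0."""
        route = [0, *order, 0]
        return sum(math.dist(homes[a], homes[b])
                   for a, b in zip(route, route[1:]))

    best = min(itertools.permutations(range(1, 9)), key=tour_length)
    print("Routes tried: %d" % math.factorial(8))       # 40,320 for 9 homes
    print("Shortest round trip: %.3f" % tour_length(best))

At 20 homes that is already 19! (about 1.2 x 10**17) routes; at every good child on Earth, no classical sleigh-mounted computer need apply.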
Matthew Davies and Martin Slaughter are physicists working in the computer industry.

Wormholes in Wonderland * 24 December 1994 * Ian Stewart

THE NORTH POLE, A DECEMBER SOMEWHERE NEAR YOU: "My customer base has gone up fivefold in less than a century," Father Christmas explained to his assembled subordinates. "If it wasn't for the new X/MAS ultra-paragigaprocessor, I'd never be able to maintain my extremely tight delivery schedule. But now, at last, all my problems have been solved." A loud explosion echoed around the underground ice caves. A dishevelled Vice-Gnome for Computing rushed in. "Major hardware failure in the X/MAS ultrawhatsit, Santa," he said, panting. "What happened?" "Algorithmic gridlock. We were running the travelling salesman program to optimise the delivery route, and the flash memory flashed. The computing requirements are growing exponentially with the number of PASCUs we have to visit." "PASCUs?" asked Santa. "Personal Activity System Consumer Units," the Vice-Gnome for Marketing answered. "Previously known as, um, let me see, children." Santa shook his head sadly. "I should never have sent you gnomes to business school. Why do we need to optimise deliveries, anyway?" "The sleigh is running perilously close to the speed of light as it is. Unless we choose the shortest route, we'll never be able to visit every PASCU on Christmas Eve." Computing flopped into a chair. "That's not the only problem," the Vice-Gnome for Distribution added. "The sleigh is travelling so fast that the reindeer's energy requirements are skyrocketing, and Rudolph's nose has turned blue with cold because his metabolism can't keep up. I really do urge you to consider my memo proposing that we stagger deliveries over several weeks. Our business is ridiculously seasonal." "I agree that something must be done," said Santa. "But I refuse to abandon my traditional role." "Well, there's always Finance's plan to change our corporate structure to a chain of subsidiaries owned by Claus Holdings plc, based in the Cayman Islands. That way the group as a whole can avoid taking responsibility for late deliveries." "No," said Santa, with a nasty glint in his eye. Marketing backtracked rapidly. "Right, I can go with that. Preservation of corporate image, yes, yes, very important, yes, of course." "Desperate times," said Santa, "demand desperate measures. We must leapfrog to an entirely new level of technology, one with the capacity to solve all of our problems for the indefinite future." The Vice-Gnomes for Production, Marketing, and Finance nodded. "I like the idea," said Marketing. "But what exactly do you have in mind?" "I'm dreaming of a relativistic Christmas," said Santa. "I've been keeping an eye on the physics journals, and over the past few years a lot of potentially useful new concepts have appeared." "We're a product-oriented corporation," said Finance worriedly. "We're geared up for manufacture of Personal Activity Systems, not for R&D." "On this occasion, as an exceptional measure," said Santa, "I am willing to bring in outside consultants. I shall secure the services of Hawkthorne Wheelstein, Chartered Relativists." Santa dismissed his Vice-Gnomes and pulled a cellphone from his beard. THE NORTH POLE, SOME DAYS LATER: Amanda Banda-Gander, Hawkthorne Wheelstein's salesperson, stabbed at the brochure with a beautifully manicured fingernail. "I recommend a boundary across which no matter or energy can return. It's called a black hole." "Because it's black and things fall into it?"
Marketing hazarded. "More or less. Things do fall into it, and can't then escape, but it actually gives off radiation, and looks red to someone on the outside." "And a white hole?" "Like a black hole but reversed. Matter comes out, but it can't get back in." "Ah, I see. And when you join the two together, you get a one-way tube?" "Known as a wormhole, yes. What's more, it is a tube that goes outside the normal Universe altogether - a cosmic short cut, connecting two regions of space like the handle on a briefcase. So Mr Claus can carry the black end, as we call it, on his vehicle, and arrange for the white end to materialise inside each dwelling that he visits. No more sooty chimneys, and no trouble at all getting stuck inside central heating systems." "Fantastic," said Santa. "But how do I get out again?" Amanda smiled. "By using a second wormhole, of course. And if you buy the black and white holes separately, together with a special linking module, then you can make a substantial efficiency gain. Just as black holes constantly suck matter in, so white holes constantly spew matter out. We can customise your white holes so that they emit an endless stream of toys." "Personal Activity Systems," said Advertising. "Sorry. And wrapping paper and ribbons, thereby solving all your manufacturing problems at a stroke." "I like the sound of that," said Finance. Production was less sure - it looked like she might be out of a job. "Conversely, black holes can solve forever the problem of disposing of unwanted wrapping paper and ribbons." "And unwanted Personal Activity Systems," said Marketing sagely. "They all end up in the dustbin eventually." "True. You could recycle almost everything if you wanted to." "This is all very well," said Santa, "but our most urgent problem is one of time. The faster-than-light sleigh has not yet been invented." "No, but the warp-drive sleigh has. Our Enterprise model has proved to be especially popular." "Warp drive?" "According to relativity theory, matter can't move through space faster than the speed of light. But, and this is the neat bit, there's no limit on the speed with which space itself can move. So here's what I suggest - the sleigh can sit at rest in a small bubble of space, and we will arrange for the bubble to flow at superluminal velocities through normal space. I know it sounds like science fiction, but Miguel Alcubierre at the University of Wales College of Cardiff has recently shown that it is entirely consistent with modern physics. Admittedly, it does violate the 'weak energy condition' that requires all energies to be positive. But that's a purely technical difficulty that can be overcome using the Casimir effect, which creates negative energies between two parallel plates in a vacuum." She waved her hands dismissively. "Supersonic flight produces sonic booms," said a gnome from the Legal Department, wary of possible third-party lawsuits. "Does a warp drive produce gravitic booms?" "No, it's guaranteed free of gravity-shock-wave emissions," said Amanda, rather too glibly. "I don't like it," muttered Legal's gnome to Santa worriedly. "Even if Hawkthorne Wheelstein indemnified us, we could be held responsible if they go out of business." "That's a good point," said Santa. "Do you have any alternative solutions to our scheduling difficulties?" Amanda pursed her lips. "Well ... there's our latest range of products. Time machines." She waited as the idea soaked in. 
"That way, you can stagger deliveries throughout the year, while making sure that every single item is delivered on Christmas Eve." "How does this time machine work?" "We have several models. The simplest is the moving wormhole, invented by Michael Morris, Kip Thorne and Ulvi Yurtsever in 1988, and based on the twin paradox of relativity theory. If an object travels very close to the speed of light, then time, as experienced by an observer moving with the object, slows to a crawl relative to that experienced elsewhere. Imagine two identical reindeer, Donner and Blitzen. Donner remains on Earth, and Blitzen heads off into space at nearly lightspeed, returning forty years later as measured by an Earthbound observer. Donner has aged forty years, but because of this time dilation Blitzen has aged only five, say. "Morris, Thorne and Yurtsever realised that by combining a wormhole with the twin paradox, they could get a time machine. The idea is to leave the white end of the wormhole fixed, and to zigzag the black end to and fro at just below the speed of light. Seen from inside the wormhole, both ends age at the same rate. But from outside, the black end ages more slowly because of its speed. This time differential means that the passage of time is different if you go from one end to the other through the normal Universe, or through the wormhole itself. In fact, if you travel through normal space to the black end and then dive through the wormhole, you end up in your own past. You can read all about it in New Scientist, 28 April 1990, if you don't believe me. "There's another approach that we're working on at the moment, which should be operational very soon. It was invented by Richard Gott in 1991, and it involves using two cosmic strings - thin massive objects whose existence was first predicted by some grand unified theories - that pass very close to each other at near lightspeed," said Amanda. "And we've got some new methods coming along that don't need any singularities at all." "Brilliant," said Santa. "We'll take ten of everything." THE NORTH POLE, DECEMBER 2994: Santa sat at his computer monitor, studying the time travel schedules, frowning. A thousand years had passed, and still deliveries for 1994 had not been completed. There had been teething troubles. The wormholes were always being coned off for repairs, and the contraflows were a nightmare. The sleigh was getting totally clapped out and spent most of its time in the garage; sleigh rides were constantly being cancelled because reindeer were getting time-travel sick. Leaves kept blowing into the wormholes - the wrong kind of leaves, apparently. But what the heck. Santa relaxed and cracked a smile. There was literally all the time in the world to get the system working. He sipped at a glass of sherry and nibbled a biscuit. Suddenly his peace was shattered. "Emergency! Santa, quick, do something!" "What's happened?" "I know we've only a few time machines here and now, but because all of our deliveries happen on the same Christmas Eve there are billions of white holes, accumulating all the time, on Earth in 1994. The topology of spacetime is becoming so entangled that baby universes keep budding off, and I'm worried that one of them may take the Earth with it." Santa suddenly realised that he had never seen this particular gnome before. "Where's your superior?" "Um, gone on a course about intellectual property rights." "Don't lie to me. Bad little gnomes don't get their Christmas presents, remember?" "Oh, all right. 
Er, he's gone to see the birth of Christ." "WHAT?" "It's become a very popular tourist attraction since the invention of the time machine, Santa. All the major historical events have. The Sacking of Rome, the Battle of Hastings, the signing of the Declaration of Independence." "You're telling me that my gnomes are using my time machines to bunk off work and go see the historical sights?" yelled Santa. "Um, yes." The gnome looked embarrassed. "They do take Him presents." "Idiots! The inn at Bethlehem will start to resemble a football match with all those gawping gnomes piling up! Do you recall any reports in the Bible about thousands of gnomes paying their respects to the infant Jesus? Bringing gifts of gold, frankincense, and My Little Pony?" The gnome hung her head in shame as Santa raged. "You mindless incompetents have created a cumulative audience paradox!" He paused, reflected, simmered down. "Except that in the real Universe, you don't get paradoxes. Hmmm." Santa chewed on the problem for a while. "Of course! The many worlds interpretation of quantum mechanics must be valid. Each time shift carries us into a new version of the Universe, coexisting with the original but separated from it along some totally new kind of dimension." "Oh. So that's alright, then." "Well, I don't see why it should cause us any serious difficulties. But until science has fully mastered the intricacies of parallel universes, well, jaunts to see the Nativity are out. Do you understand?" The gnome hurried off, relieved. Santa returned to his schedules with a vague feeling that he was missing something. A BILLION PARALLEL NORTH POLES, A BILLION PARALLEL DECEMBERS: "My customer base has gone up fivefold in less than a century," a billion parallel Father Christmases explained to their assembled subordinates. "If it wasn't for the new X/MAS ultra-paragigaprocessor, I'd never be able to maintain my extremely tight delivery schedule. But now, at last, all my problems have been solved ..."

Grottoes new * 25 December 1999 * Justin Mullins

IT isn't easy being Santa Claus. In his grotto at the North Pole he faces freezing temperatures, howling snowstorms and recalcitrant reindeer. But if things seem bad now, just wait until we colonise the Solar System. When Santa looks for a new base among the planets, the conditions are going to be much, much worse. Still, if he chooses wisely, he could be in line for some of the most spectacular views in the Solar System. So Santa, if you're reading, here is a quick guide to the poles on offer. The north pole of any planet is defined by a trick known as the right-hand screw rule. Make your right hand into a "thumbs up" shape. If the planet's direction of rotation matches the way your fingers curl, your thumb points towards the north pole. Try it with the way the Earth rotates (the Earth's rotation is from west to east, which is why the Sun appears to move from east to west). The north pole on Venus is underneath the planet as seen from Santa's home on Earth, because Venus spins in the opposite direction from every other planet in the Solar System. Not that Santa or anybody else standing there would know the difference. Venus has a dense atmosphere of carbon dioxide filled with lemon-yellow clouds of highly concentrated sulphuric acid. The atmosphere would allow only a murky yellow light to penetrate to the pole and Santa would never see the Sun move across the sky.
He would be able to see for several kilometres at the surface, though, and pictures of Venus show a very flat rocky terrain. "It's a bit like the bottom of the ocean," says Andrew Ingersoll, professor of planetary science at Caltech in Pasadena. There'd be no need for a heavy beard or thick red woollen clothing to keep warm. At more than 450°C, the temperature at the surface is hot enough to melt lead, and the pressure is almost 100 times that on Earth's surface. The weather patterns on Venus are likely to be very consistent throughout the Venusian year, which lasts for roughly 224 Earth days. This is mainly because the atmosphere is so thick and moves so fast that any heating or cooling gets mixed around very quickly. "You'd hardly know whether it was summer or winter," says Ingersoll. A more spectacular place to set up base might be the north pole of Mercury. Mercury is a small, barren planet pockmarked with craters and with no atmosphere worth mentioning. Conditions there are particularly harsh: because it is so close to the Sun, the maximum temperature is around 400°C. But the planet has no atmosphere to retain the heat, so the temperature is less than -150°C in the shade. Most of the planet experiences both extremes of temperature during each long 'day', which lasts 176 Earth days. And yet the poles might be to Santa's liking. Because of the way the Earth's orbit is inclined relative to Mercury, astronomers can sometimes see Mercury's poles. "We've had a pretty clear look," says Ingersoll. By bouncing radio waves off this part of the planet, they have spotted signs of ice. Astronomers suggest that ice could only survive the fierce daytime sunlight at the bottom of craters that are permanently in shadow - a situation that can only exist because Mercury spins with its axis almost perpendicular to its orbital plane. These craters would be extraordinary places, says Ingersoll. Deep inside them, the sky would appear black but the landscape would not be entirely in the dark. While the Sun's body would be hidden from view, its outermost atmosphere - the corona - would still show above the mountain tops at the crater's edge. The shimmering corona would cast an eerie blue light across the crater floor, and every now and then a huge arc of plasma known as a coronal mass ejection would burst into view above the horizon. The pale blue light would reveal a strange landscape. Nobody knows what form the ice on Mercury takes, but it is almost certainly left over from comets that have collided with the surface. Ingersoll guesses that it could be in the form of dirty boulders of frozen water, carbon dioxide and even noble gases such as argon. The ice would not only provide a spectacular setting but would be a useful resource for anybody setting up on the planet. A supply of water, for example, would mean that Santa would not have to take his own. The north pole of Mars is in some ways similar to Earth's. It is currently pointing away from the Sun and so is in the middle of a harsh winter, much of it spent in total darkness. It also has a polar cap consisting mainly of water ice. If Santa wanted to reward his elves with a relaxing beach holiday at the end of the season, Mars would have a distinct advantage. "The permanent ice cap is surrounded by sand dune fields like those in North Africa," says Ronald Greeley, a planetary geologist at Arizona State University in Tempe. The dunes range in size from just a few metres across to 100 metres. In winter they become lightly dusted by a bright frost of dry ice.
And since dry ice turns directly from solid to gas, this frost would disappear magically under the eyes of the elves when they embarked on their summer break. Possibly the most interesting place to set up a base would be Titan, Saturn's largest moon. Titan's atmosphere is so thick with methane and ammonia that astronomers have difficulty seeing the surface. Methane exists as a liquid, gas and a solid on Titan, just as water exists in all these forms on Earth. The few tantalising glimpses astronomers have had of the surface indicate that it may have oceans of liquid methane as well as solid land masses. Nobody is quite sure what exists at the poles, but it cannot be a floating mass of ice as it is around Earth's North Pole - methane ice does not float on liquid methane. Astronomers hope to get a better idea of what exists beneath the atmosphere on Titan in 2004, when the Cassini spacecraft, now en route to Saturn, will drop a probe onto this wintry moon. If it's seriously cold conditions that Santa is after, the outer Solar System is by far the most promising destination. The north poles of Saturn, Neptune and Pluto are all pointing away from the Sun and so these places are in the throes of winter. "The Solar System is a very wintry place at the moment," says Ingersoll. Hear that, Santa? You'll be spoiled for choice.

Track Santa's progress online * 25 December 1999 * Barry Fox

'TWAS the night before Christmas, deep inside Cheyenne Mountain in Colorado, the Combat Operations Center of the North American Aerospace Defense Command. Protected from nuclear attack by blast-proof doors and the tonnes of rock around them, a thousand NORAD personnel were winding down, looking forward to the holidays. Suddenly, something stirred. On a giant map in the operations room, a red light began to flash ominously over the Arctic. NORAD's job is to scour the skies for signs of air attacks against the US and Canada. To this end, the cavernous operations centre is connected by blast-hardened cables and antennas to a worldwide network of radar dishes and satellites. The red light signalled that something potentially threatening had taken to the sky. Far to the north, fighters scrambled to intercept it. As the jets approached the target, the tension rose inside Cheyenne Mountain. Then came relief. "It's OK," squawked a pilot over the loudspeaker, "It's Santa. He's heading south." Don't laugh. According to NORAD experts, every year Santa wakes on Christmas Eve, clambers into his reindeer-powered sleigh and sets off from the North Pole to New Zealand before working his way westwards to the Americas. Once he's finished with Canada and Alaska, he crawls back into bed for a year's well-earned rest. If you don't believe the experts, you can see for yourself. With help from IBM, NORAD will post Santa's progress on the Web (www.noradsanta.org). Even if you are one of those sceptics who does not believe in Father Christmas, the NORAD Santa website is worth a visit, because it gives an insight into the future of e-commerce. The technology IBM is putting at NORAD's disposal is the same that companies will need to sell goods and information successfully on the Internet. NORAD is anxious to point out that all communication into and out of Cheyenne Mountain is securely encrypted. This ensures that there is no chance of a schoolboy hacking in and starting World War Three-contrary to what you may have seen in the movie War Games.
It is, then, a complete coincidence that NORAD's Santa-tracking tradition began with an accident on an insecure telephone line.

Phone fluke

In 1955, NORAD's predecessor CONAD (the Continental Air Defense Command) started receiving calls from Colorado children on Christmas Eve. A local store had placed a newspaper advert for a telephone hotline to Santa Claus, but the paper misprinted the phone number, by chance hitting on the unlisted number for CONAD's Commander-in-Chief. The officer on duty, Colonel Harry Shoup, thought on his feet and joked with the children, telling them he could see Santa on the radar screen. Later, seeing the public relations value of the fantasy, NORAD turned the hotline into an annual tradition. So every Christmas since, staff at Cheyenne Mountain have given children updates on Santa's progress. Two years ago, NORAD switched to a website. Millions of people logged on and overloaded the system. So last year, NORAD teamed up for the first time with IBM, and between them, they handled 28 million hits in 24 hours. This year, NORAD is expecting still more visitors to its site, which will exploit the latest Net sound and vision technology. The serious side to all this is that NORAD's experiences mirror those of countless consumer-orientated companies. The days of telephone hotlines and telesales are numbered. Phone lines get swamped too easily. Witness the shrill tones of opera buffs who failed to get through to buy tickets for performances at the revamped Royal Opera House in London. The cacophony died down only a couple of months ago, when a Web-based booking service opened.

Logjam

Although Net selling is the way of the future, websites can also overload. If the system runs so slowly that it infuriates customers, all the company is going to earn is a bad name. Research by chipmaker Intel found that the throughput of calls to sites tends to drop as soon as they handle anything more than simple text, low-resolution pictures and insecure transactions. If visitors are buying goods using an encryption system-to protect their credit card details-or are downloading moving pictures with sound, then the rate at which their calls are handled can decrease by 95 per cent. The solution, then, is to increase the number and speed of processors that serve a site, so they can simultaneously handle thousands of different requests. The amount of extra speed and capacity should not be underestimated. In October, the NetAid website was supposed to feed live video of pop concerts via 1500 servers at 90 locations round the world. They were advertised as being capable of handling 125 000 simultaneous "hits". But even a top-range PC and digital connection to the Net could display only postage-stamp-sized moving pictures, which frequently collapsed into blurred still images. Only a few of the biggest high-tech companies have overcome this problem, by throwing together mammoth amounts of server power. Intel has one of these "server farms" near Santa Clara in California and is planning replica sites in Europe. The Santa Clara site has a thousand or more servers, each powered by a gang of four-and sometimes eight-Pentium Xeon microprocessors running at 550 megahertz. Each server can handle at least 10 000 access calls. IBM has three farms in the US, which it used to provide online coverage of last year's Nagano Winter Olympics, and this year's Grammy entertainment awards and Wimbledon tennis tournament.
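A rough feel for that 95 per cent figure, as a hedged aside; the per-server and demand numbers come from the text above, and everything else is illustrative rather than Intel's or IBM's real sizing rules.

    # How the 95 per cent throughput loss changes the server count needed.
    import math

    nominal = 10_000                   # access calls per server, simple pages
    effective = nominal * (1 - 0.95)   # same server on encrypted video: 500

    demand = 125_000                   # simultaneous hits, a NetAid-scale event
    print("Plain pages: %d servers" % math.ceil(demand / nominal))        # 13
    print("Rich and secure: %d servers" % math.ceil(demand / effective))  # 250

A twentyfold gap - which is why farms of a thousand machines stop looking extravagant.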
The main farm, on the outskirts of Washington DC, is linked to the others by high-speed fibre-optic cables and can pass on requests virtually instantaneously. Like Intel, IBM uses its farms to host its own website but has massive capacity to spare. It is this spare capacity that NORAD will use this Christmas. The first year NORAD tried to put Santa on the Net, it was swamped by 20 million hits in 24 hours, sometimes running at 30 000 a minute. This is chicken feed for the IBM farms, each of which can comfortably handle 300 000 hits a minute. Peak demand at this year's Wimbledon site exceeded 400 000 hits a minute, and delivered live video coverage from the courts. All this technology is fine for relaying Santa's position to the world, but how-you ask-does NORAD know where the jolly old fat man is in the first place? Fortunately, his sleigh can be seen by radar-once it is in range of NORAD's dishes, of course. At other times, NORAD relies on satellite-based infrared detectors. Normally, these detect and track the heat thrown out by rockets or missiles. Cleverly, NORAD has adapted this detection system to register the heat from Rudolph's nose. "Our scientists have not yet been able to calculate the amount of heat," admits a NORAD duty officer. "But the bottom line is that we see Santa because of Rudolph's nose." Before any tinpot dictator or terrorist decides to lob an ICBM onto North America, hoping it will be mistaken for Santa, there is a second line of defence to get past-those scrambled jets. "Because Santa tends not to file a flight plan," explains the officer, "and we have to identify all unknown flying objects, jets are scrambled in the far north of the country to check that the incoming target really is Santa's sleigh." Last Christmas, the website carried images taken by cameras on board the interceptors to relay "actual digitised images of Santa and his reindeer". Weather permitting, there should be more pictures this year, and a few extra surprises.

Blue Christmas * 22 December 2001 * Catherine Zandonella * Gail Vines

IS THE gloomy weather wearing you down? Does every cheesy, Christmassy jingle send your festive spirit plummeting? Is that health kick abandoned in favour of comfort food and a meaningful relationship with your duvet? Don't worry-you're in good company. It's winter. Perhaps your body's trying to hibernate. Some people get the winter blues on a monumental scale. Sufferers of seasonal affective disorder would rather cosy up to a television set than another human being. They shun sex for some quality time with a pizza, snooze for maybe 16 hours a day, and are often irritable and moody-the holiday atmosphere just passes them by. Small wonder that SAD sufferers compare their condition to hibernation. When animals prepare to overwinter, they slow their metabolism, retract their gonads, and hunker down in cosy dens, surviving till spring on a comforting layer of fat. The theory that seasonal depression is an atavistic form of hibernation has been doing the rounds for years. Most hibernation researchers and psychologists agreed it was rubbish. But recent research has reawakened interest in the theory. It's true there are big differences between seasonal blues and other forms of depression. Clinically depressed people usually lose interest in food, finding it tasteless or even unpleasant. They often shed weight and have great trouble sleeping. Sufferers of seasonal depression are just the opposite, eating and sleeping with gusto.
While SAD affects just a few per cent of the population, many researchers believe that most of us are susceptible to seasonal overeating, oversleeping, and a general bodily go-slow. Some even say it's the extreme end of a spectrum of adaptive responses to winter weather. "We want to establish whether SAD is part of our genetic background," says George Wilson of the University of Tasmania in Hobart. "It could be a programmed reaction to shorter daylight hours in winter." Two recent studies have uncovered hibernation-like physiology in people with SAD. Margaret Austen, one of Wilson's colleagues in Hobart, where winter nights average 15 hours long, looked at SAD-related changes in the autonomic nervous system. These nerves regulate the functions we don't think about, like breathing and heart rate, and are deeply involved in hibernation. And it turns out that they could be just as prominent in seasonal depression. There are two parts to the autonomic nervous system, which work in opposition to control bodily functions. The "sympathetic" system boosts metabolism, while the "parasympathetic" system damps down bodily functions. Just before animals hibernate, they experience a spike in the activity of their parasympathetic nervous system, which slows their heart rate and decreases their body temperature and metabolic rate. Austen found a similar parasympathetic response in people with SAD. As a result, her patients had slower heart rates and low energy levels. "Animals prepare for winter by fattening up and then sleeping through it," says Austen. "In humans that is not practical, so instead we eat more and gain weight through the winter, and we lack energy and sleep more." Another study by Arcady Putilov, a researcher at the Russian Academy of Sciences in Novosibirsk, Siberia, also found hibernation-like activity in his SAD patients. They consumed less oxygen and had lower resting metabolic rates than people who weren't depressed-very similar to the slowed physiology of hibernating animals during the winter. Putilov says there is no doubt that the symptoms of SAD are our way of coping with winter. The binge eating, the fat deposits that form on your thighs, the feeling you could sleep for 24 hours-all are signs of an adaptive mechanism aimed at conservation of energy. A fizzling winter sex drive is an adaptation to the winter chill too, says Thomas Wehr of the National Institute of Mental Health near Washington DC. Historical and experimental evidence shows that human responses to seasonal changes may have been more pronounced before electric lighting was common. Low sex drive in winter could have served both to conserve energy through the winter and ensure that your offspring are born at a time when food is available. Babies conceived in winter would be born in autumn, when food is starting to become scarce. Babies conceived in summer would be born in spring, when food is starting to be plentiful. "There are definite parallels between SAD and seasonal influences on human reproduction," says Wehr. An exploration of the genes involved in hibernation indicates that humans certainly possess all the necessary machinery to hibernate. Matthew Andrews at the University of Minnesota at Duluth discovered two genes responsible for shifting metabolism to burn fats from reserves rather than carbohydrates, a vital process for kick-starting hibernation. A host of other genes are involved too. "Almost every gene we've looked at so far [in animals] is found in humans," says Andrews. 
It seems that these genes are common in mammals but that hibernators have found particular ways of harnessing them to ensure their survival under extreme conditions. Although each animal follows slightly different cues that tell it when it is time to hibernate, day length is always a key factor. The shortening period of light tells the body's circadian clock that winter is approaching. Even ground squirrels, which seem to have an inbuilt annual clock and go into hibernation regardless of day length, use it as a way of calibrating their annual timekeeper. Shorter days also trigger SAD. Treatment of SAD usually includes sessions in front of a strong light source each morning. This phototherapy works by tricking the circadian pacemaker in the brain, and it works well at improving mood and reducing lethargy and food cravings. But do these many hibernation-like adaptations mean our ancestors actually hibernated? Is SAD an evolutionary leftover? Andrews says that the existence of the genes alone is no proof that humans once hibernated. Still, he says, "If there were any vestige of hibernation in humans, it makes sense that it would be something like SAD." But if humans had hibernating ancestors, shouldn't we all get SAD in the winter? Wehr believes that all humans have the potential to succumb to seasonal effects, but that most of us can ignore changes in day length because we live in a world of artificial lights. People with SAD don't seem to be able to use artificial light to set their circadian pacemaker, according to recent work by Wehr. "I suspect what we call winter depression has its origins in evolutionary biology," he says. "The symptoms of winter depression might well have been normal behaviour, but now we view them as extreme." Despite the many biological similarities between hibernation and seasonality in humans, many researchers are far from convinced. Hibernators gorge themselves before they retreat into their dens, not during winter as we do. Hibernating squirrels can drop their body temperatures to just above freezing for weeks at a time, something we couldn't dream of. Even bears, much closer to us in size, are capable of surviving up to five months on their own body fat, something few of us could muster. But lots of non-hibernating animals make it through winter in a somnolent torpor, reducing their body temperatures and whittling their metabolisms down to a minimum. Madagascan lemurs retire to a tree-hole during the winter, where they sit like zombies for days on end. It's lack of food, rather than cold, that drives the lemurs to hibernate. Since humans evolved in the equatorial climes of Africa, our hunter-gatherer ancestors may have evolved a similar ability to survive long periods without food. Nowadays we just have to survive long nights. Luckily we can load up on holiday sweets and spirits to help us make it through the winter. Crawl under the covers, get the candles lit and the fire roaring, hit the remote control, and let those holiday party invitations pile up. Don't feel guilty. After all, you are just doing what comes naturally.

* * *

Let there be light

THAT we are creatures of light is never more obvious than at Christmas. The first people to colonise the inhospitable north got through the long winter nights by inventing festivals of light and of fire. Bonfires, torch-lit processions, roaring hearths and burning candles all lightened the gloom of midwinter.
So it's not by accident that our biggest national festival comes slap bang in the middle of the northern winter. For millennia, it has been the perfect occasion to brighten and warm away the seasonal gloom with one helluva party-ever since humans first made it to the temperate zones, anyway. The timing of the Christian festival of Christmas gives the game away. The shortest day of the year, the winter solstice, falls tellingly close, on 21 December. In the old days, anyone keeping an eye on the solar calendar must have thought-hallelujah, lighter days are on the way, homebrew all round! Besides the booze, bonfires were the Big Event for midwinter's day from prehistoric times, according to the pioneering folklorist James George Frazer, who in the 1920s penned a two-volume study of ancient fire festivals. Apparently, bones were tossed into the flames to create foul odours that would ward off evil spirits. The word "bonfire" comes from bone fire. Nowadays we have office parties instead. When Christianity took hold, traces of these ancient practices lived on. In Britain, the 11th-century Danish rule over England introduced the colloquial Scandinavian term for Christmas, "Yule". In medieval times, the Yule log-the largest possible that could be communally dragged into the hearth-was ceremoniously lit on Christmas Eve. Today, the mighty Yule log is remembered by the chocolate Swiss roll cake. It seems to have lost something in the translation. All the same, today's urbanites still yearn for light, greenery, warmth and joy in midwinter, says Ronald Hutton, professor of history at the University of Bristol and a leading scholar of modern festivals. We will happily flash our credit cards to procure the means to create domestic versions of our ancestors' unruly bonfires. Functional fireplaces and wood-burning stoves are fashion statements these days. Even the humble candle is big business. John Terrell Fry of Little Rock, Arkansas, the author of The Candlelit Home, now works as a "candle consultant" all over the world. Meanwhile, in Sydney, Australia, Amanda Hammond, author of Illuminate Living with Candles, reminds us that these small incendiary devices may no longer be a practical necessity but remain a "powerful symbol of inner enlightenment". The electric light bulb, invented in 1879, soon drove candles to near extinction. But as the 20th century drew to a close, candles miraculously revived, undergoing what Hammond calls an "unprecedented renaissance". Today, she opines, candles are widely appreciated as "a symbol of relaxation, celebration, romance and ceremony". Such talk is calculated to irritate fire ecologist Stephen Pyne of Arizona State University. This is "sheer symbolism", he says. Our dependence on fire has become hidden in machines and delivered along electricity wires. In Europe, burning candles once bedecked Christmas trees-to furnish light, it was said, for the woodland spirits sheltering in evergreens after deciduous trees have lost their leaves. But the elves and pixies were in for a shock. Electric tree lights first appeared in 1882 in New York, only three years after Edison's invention of the light bulb. By the 1930s, electric fairy lights-with the bulbs hand-blown in Germany in the shape of snowmen, Santas and fairies-were the norm. Now you can buy fibre-optic Chinese-made Christmas lights, or "multi-function" tree lights that operate in eight modes: "combination, in waves, sequential, slo-glo, chasing/flash, slow fade, twinkle/flash and steady on". 
It's tempting to mock our enthusiasm for such decorations, but might some ancient spirit of midwinter celebration endure all the same? Reassuringly, Hutton thinks it does. "When all is said, a vigorous seasonal festive culture survives and continues to develop among the British," he argues. Yet what's most noticeable about our celebrations these days is their privatisation, says Hutton. A century or two ago, the great annual celebrations revolved round community groupings-the clan, say, or the great household, the manor, the parish and the church. But today's festivals centre on the family or the couple, and are typically celebrated at home or at private parties. We still do notice changing day lengths, but that natural fact no longer acts as the prime signal for communal celebration. Our lives revolve around our closest associates-our families, lovers, friends and workmates. "Humanity has come to replace the natural world at the centre of the wheel of the year," says Hutton. This state of affairs has its advantages: now we can party whatever the season. All the same, we still find ourselves staring at the light from open fires and naked flames-in recognition, perhaps, of what we have lost. Fire rites evolved out of fire's practical biology, Pyne says-its capacity both "to purge and to promote" in the living landscape. But in the industrial city of the 21st century, fire rites have shrunk to votive candles and eternal flames over memorials. "What has been lost is the daily interplay between people and flame," he laments. For as we've hidden fire's ecology in machines, we've gradually lost the knowledge that our livelihoods ultimately depend on the energy of combustion. Once, he argues, humans knew that what made our species unique was our ability to control fire. Now, we risk the very future of the world's biosphere in a profligate orgy of hidden fire. And to think it all started with those blasted prehistoric bonfires-happy Christmas, everyone! Skipping Christmas * 21 December 1996 * Kurt Kleiner DAVID LEE's job is to make seconds. By international consensus, a second is defined as 9 192 631 770 cycles of the microwave radiation that caesium atoms absorb at their natural resonant frequency. Every 60 days, Lee measures this frequency, and his colleagues use it to recalibrate the atomic clocks by which all others in the US set their time. The process takes about ten days, and this year one of those days is Christmas. So while the rest of us are tucking into turkey and Christmas pudding, Lee will be hard at work. And he is only one of many scientists who must shepherd laboratory work or research that simply won't take a holiday. Lee works at the US National Institute of Standards and Technology in Boulder, Colorado. To measure 1 second, he passes a beam of caesium atoms through a cavity, and shines a beam of microwaves onto them. He adjusts the frequency of this radiation until a maximum amount is absorbed by the caesium atoms, at which point the frequency of the microwaves is the same as the natural resonance of caesium. Because he must be accurate, Lee has to take a large number of measurements over 10 days, and average them. The atomic clocks at NIST can wander just like any other clocks, so Lee's colleagues use his frequency to adjust their atomic clocks to make sure that they are all ticking in perfect time. The clocks Lee's colleagues are calibrating are accurate to within half a nanosecond per day. At the end of two months, they might be out by all of 30 nanoseconds. So why not skip it just this Christmas?
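Before answering, it is worth checking the arithmetic behind those figures. Here is a minimal sketch in Python, using only the numbers quoted above; the assumption of a constant worst-case drift and the function name are our own simplifications, not anything NIST publishes.

    # Toy check of the clock-drift figures quoted in the article.
    # Assumes a constant worst-case drift of 0.5 ns per day, as stated.
    CAESIUM_HZ = 9_192_631_770   # cycles per second: the definition of the SI second
    DRIFT_NS_PER_DAY = 0.5       # quoted accuracy of the NIST clocks

    def accumulated_drift_ns(days: int) -> float:
        """Worst-case accumulated error if the clocks are never recalibrated."""
        return DRIFT_NS_PER_DAY * days

    print(accumulated_drift_ns(60))    # 30.0 ns over one two-month cycle
    print(accumulated_drift_ns(120))   # 60.0 ns if a calibration is skipped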
Why not live dangerously and let the atomic clocks get out of whack by 60 nanoseconds one way or the other before resetting them? Lee says that his colleagues wouldn't be happy. They have to transmit the time and frequency signals from radio station WWV at Fort Collins, Colorado. These are picked up by navigation systems which prevent ships sailing off course and planes missing the runway, and by the latest VCRs which can reset themselves. Besides, the US has to regularly transmit its definition of a second by satellite to Europe for averaging with seconds from other countries, to set Coordinated Universal Time-the global time standard. Lee is not the only one who will work over Christmas. Carol Polanskey's team at the Jet Propulsion Laboratory in Pasadena, California, may have to spend the holidays downloading data from the Galileo space probe which might otherwise be erased. Galileo will fly by Europa, one of the moons of Jupiter, before Christmas. During the closest approach on 19 December, the probe's magnetometer will record any magnetic field it detects from Europa. "This will be a really big discovery if it comes through," says Polanskey. Because of competition with Galileo's other instruments for transmission time, the first chance Polanskey will have to download data from the magnetometer will be 22 December. If it doesn't all come through, which is entirely possible, the next chance will be 25 December. Thanks to the Internet, the data can be downloaded from home, Polanskey says. But if she cannot download all the data on Christmas Day, instructions will have to be sent to Galileo to prevent it from overwriting the data. That will mean a trip to the office for one of the team-Polanskey herself hopes to be with her family in Pennsylvania. Michele Arduengo, who recently finished her doctoral research at Emory University in Atlanta, also hopes to spend Christmas at home this year. "The graduate students are the ones you need to talk to," she says. "We're the ones who show up for the holidays." In the past, Arduengo has had to check her research several times over Christmas. The research subjects that couldn't wait were sexually maturing nematodes, microscopic worms Arduengo was using to study a gene that helps control sperm differentiation. She tried to time the breeding so she didn't have to work on holidays. "But a lot of times the worms didn't celebrate Christmas," she says. The nematodes take about three days to develop from egg to adult, and Arduengo had to catch them after their sex became apparent, but before they reached maturity. Because hatching and maturing rates naturally vary, Arduengo had to go in over Christmas armed with her platinum worm scoop to prise apart any amorous nematodes trying to mate at random and ruin her experiment. Whether artificial life celebrates Christmas or not, Andy Pargellis, a computer scientist at Bell Laboratories in Murray Hill, New Jersey, plans to take his research home with him over the holidays. Pargellis uses a computer to produce sequences of computer code, analogous to the nucleic acids thought to have existed billions of years ago on Earth before the start of life as we know it. He installs a program which mimics evolution by altering bits of code at random, in the same way that mutations occur in nature. He then lets the program run until some codes begin to replicate.
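What such a program might look like is easy to caricature. The sketch below is our own deliberately crude toy of the mutate-and-select loop just described, not Pargellis's actual system: strings stand in for code, and any string containing the invented token "CPY" is treated as a self-replicator and allowed to breed.

    import random

    # A toy of mutation plus selection for self-copying. Every name and
    # number here is invented for illustration.
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def mutate(s, rate=0.05):
        """Randomly change each character with the given probability."""
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    population = ["".join(random.choices(ALPHABET, k=12)) for _ in range(300)]
    for generation in range(500):
        population = [mutate(s) for s in population]
        replicators = [s for s in population if "CPY" in s]   # "alive"
        offspring = [mutate(s) for s in replicators]          # they breed
        population = (offspring + population)[:300]           # finite "soup"

    print(len(replicators), "self-copying strings after 500 generations")

Run it a few times: usually nothing happens for a while, then a replicator appears by chance and its descendants take over - a cartoon of the transition Pargellis watches for.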
The resulting codes are like artificial life forms, and by analysing them Pargellis thinks he can gain insights into how evolution works and how life began on Earth. Pargellis doesn't have to work over Christmas, but is so attached to his evolving "organisms" that he brings them home to breed while he tucks into his festive fare. It doesn't matter if the program runs in his lab or on his home computer. "The great thing is I can have a lot of fun with the family, and still get something done," he says. While Pargellis is having fun at home, Nancy Love will probably be baby-sitting her bioreactors on Christmas Day. Love's research, at Virginia Polytechnic Institute and State University in Blacksburg, is looking at new treatments for industrial waste water. Her bioreactors use bacteria to break down some of the organic chemicals found in industrial effluent. To prevent the bacteria dying from lack of nutrients, Love must run her machines 24 hours a day for months at a time. Usually her students look after the reactors. "But I often get pulled in to take care of things around Christmas," she says. "Anyone who runs continuous flow bioreactors understands. They are like babies-they are very dependent and must be tended to." Bioreactors are not alone in requiring constant attention. Although seismologists from the University of California at Santa Barbara will be able to check their instruments via their home computers, they will spend Christmas Day on alert for signs of an earthquake. The team from the university's Institute for Crustal Studies looks after a dozen automatic monitoring stations near Palm Springs, and if all is calm and quiet there, they can start unwrapping their presents. But if a big earthquake hits at Christmas, the researchers will have to leave the socks and aftershave and rush into the field. "We really don't want to miss a large earthquake. We might only get one chance," says Ralph Archuleta, associate director of the Institute. "If it's magnitude six or bigger, all of us know we have the responsibility to get instruments into the field. We know that the big aftershocks occur almost immediately after the big shock, and it can be over quickly." By contrast, subjects for research can be almost guaranteed on Christmas Day at the Savannah River Ecology Laboratory near Aiken, South Carolina, run by the US Department of Energy. Whit Gibbons is in charge of a project that monitors the health of wetlands in the area, a study that includes monitoring the animals that migrate in and out during the year. Gibbons uses pitfall traps to capture animals such as salamanders, frogs and mice that pass through the area. Every morning a researcher has to count the animals and release them. Some Christmases, Gibbons himself checks the traps. He takes along his kids and makes it a family outing. "Somebody has to get out into the field Christmas and every other day," he says. The team doesn't decide until quite near the time who'll be emptying the traps on Christmas Day, but according to Gibbons a volunteer normally emerges. "It's usually whoever has the most in-laws visiting." Be happy * 20 December 2003 * Penny Lewis WHAT do the Christmas holidays mean to you? Maybe it's happy memories of long, lazy walks in the snow, excited children, crackling fires and the comforting smell of home cooking. Or maybe you're the misery-guts in the corner grumbling about all the rubbish on telly, the naff music, inflated prices and packed shops. 
Why do some people have such a rosy picture of the festivities, while others share Ebenezer Scrooge's view on life, even when their experiences might be near-identical? One word. Mood. We all know that our mood affects how we see life. But did you realise that your mood affects far more than just the experiences of the moment? In fact, your mood during Christmas present will alter what you remember of Christmases past and could even distort your feelings and thoughts about Christmases in the future. "Mood is an internal state which filters external information," explains Klaus Fiedler, a psychologist from the University of Heidelberg in Germany. Importantly, it can even influence the way you learn, and how you think about things long after your state of mind has changed. We are all familiar with the way places, smells or music we used to know well can bring memories flooding back. Mood can do the same. Edmund Rolls from the University of Oxford calls this context-dependent memory. When you form a new memory, he says, information about the context is also stored. It is linked to the main memory in a kind of network. When you try to remember, activity in any part of this network makes it easier to pull the target information off the memory shelf. Your mood is part of the context, says Rolls, so being in a good mood means you'll recall memories stored during previous good moods more easily. According to Fiedler, not only do you remember things you learned in the same state of mind more easily, you also recall more generally positive things when in a good mood, and more gloomy things when you're feeling down. This selective memory has been demonstrated many times: psychologists use music, hypnosis or other techniques to alter someone's mood, then ask them to recall events from the past. People who are feeling happy tend to remember more positive episodes and those who are feeling miserable tend to remember more negative ones. For the Scrooge-like among us, just being in a dejected state of mind means you'll remember unhappy times from your past, and the worst bits even from more joyful occasions. But there's even worse news for the non-festive. New findings suggest that feeling unseasonably wretched over the holidays not only dredges up past Christmas misery, it could also permanently sour future thoughts of the festivities. On the bright side, however, just one really fantastic Christmas might ensure that the sight of tinsel and fairy lights gives you a warm, cosy feeling forever. Michael Rugg and his team from University College London looked at how the emotional context in which we learn something affects brain activity when we remember what was learned. In one study, for example, he asked people to read a series of emotionally laden sentences such as "the farmer was shredded when he fell into the corn grinder". The volunteers were later shown emotionally neutral words such as "corn", and asked to indicate whether or not they remembered seeing them in the sentences. Surprisingly, correct recognition of the neutral words led to activity in core emotional regions of the brain. Other experiments along these lines have pointed to the same thing: if you learn neutral information in an emotionally charged context, remembering it triggers an emotional response. Rugg thinks this effect may be akin to the network idea - recalled words act as clues or contexts prompting people to dredge up their memories of the emotional sentences from storage, whether they want to or not. 
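Rolls's network idea is simple enough to caricature in a few lines of code. The sketch below is our own toy, not a model from any of these labs: each stored memory carries a mood tag, and retrieval is boosted when the current mood matches it.

    # A cartoon of mood-congruent, context-dependent recall. All events,
    # tags and numbers are invented for illustration.
    memories = [
        ("seaside holiday", "happy", 0.6),
        ("lost luggage",    "sad",   0.6),
        ("exam results",    "happy", 0.4),
        ("burnt turkey",    "sad",   0.4),
    ]

    def recall(current_mood, boost=0.3):
        """Rank memories; those stored in a matching mood get a boost."""
        scored = [(strength + (boost if mood == current_mood else 0.0), event)
                  for event, mood, strength in memories]
        return [event for score, event in sorted(scored, reverse=True)]

    print(recall("happy"))   # happy episodes float to the top
    print(recall("sad"))     # gloomy ones when you are down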
The association between the emotional and the non-emotional could influence our everyday memories, suggests Rugg. It could also be part of the problem for people suffering from post-traumatic stress disorder who can't help continually reliving emotionally distressing memories. "Emotionally neutral cues in the environment somehow serve as reminders and cause these distressing memories to be retrieved," he explains. There are other signs, too, that our mood or emotional state affects how we learn. In a recent study, Susanne Erk and Henrik Walter of the University of Ulm in Germany used a task similar to Rugg's, in which people were asked to learn words presented just after they had viewed emotionally charged images. The researchers found that when people learned words in a negative context, activity in a certain brain region that plays a vital role in emotions was a good predictor of how well they would later remember that information. That brain region is called the amygdala - and Rugg found activity in exactly the same structure when people retrieved information learned in an emotional context. The fact that this emotion-related brain activity was so similar during learning and remembering strengthens the idea that recalling neutral information can evoke emotions very similar to those felt when it was learned. Even more interestingly, Walter found that people remember neutral information better if they are in a good mood when they learn it. A sound lesson for anyone revising for exams - getting grumpy won't help. The intriguing conclusion Walter draws is that state of mind really does alter the way information is coded into your memory. Any new information will be permanently stamped with your good or foul temper. The whole picture looks very bleak for people who are chronically unhappy, like Scrooge. Erk and Walter's work suggests they may learn less efficiently. Even worse, what they do learn and recall can add to the general gloom. This is a real problem for people suffering from depression. Not only do they have a selective memory for sad events, says Rebecca Elliott from the University of Manchester, UK; they also dwell on negative thoughts and interpret things in a more than usually gloomy way. Once you become depressed, says Elliott, you can get locked into a cycle of concentrating on the negative, getting worse and worse in a downward spiral. But is this really relevant to the rest of us? What of those who get just a little grinch-like at Christmas - should they be worried? Well, maybe, cautions Elliott. Even mild forms of depression can lead to problems with memory and attention. "There have been studies with seasonal affective disorder, which tends to be milder than major depression," she says, "and people have identified cognitive deficits even in those groups." Is it more than just memory and learning that follow the tune of our mood? The answer seems to be yes. Mood also affects how well we can pay attention, the way we take in information, and even how we think. In fact, Fiedler goes so far as to suggest that our brain processes information in completely different ways when we are in good or bad moods. Walter agrees: "Mood is associated with a certain cognitive style," he says, "and this cognitive style makes evolutionary sense." A negative mood, he suggests, makes you more realistic and more focused on the outside world. You need to deal with information in a direct, rapid and straightforward way.
A positive mood, on the other hand, usually means you are not in danger, that you have time to introspect, to be creative, to play around with information - what Fiedler calls a "loosened" cognitive style. So what should you do to ensure a happy mood? The answer is simple enough: get yourself feeling good to start with, and that should reinforce itself. There are plenty of ways to manipulate how you feel. These range from more serious options like antidepressant drugs to the everyday strategies most of us use without thinking: listening to happy music is a reliable one. Exercising, doing something you enjoy, or just smiling are also tried and tested methods. So there's really no excuse for stewing in a miserable funk. After all, even the original Ebenezer Scrooge managed to cheer up when he finally made the effort. Designer snowflakes * 23 December 2000 * Stephen Battersby IT'S SUMMER in southern California, and among the palms and lemon trees a man is making snow. Inside the physics department at Caltech in Pasadena, Kenneth Libbrecht grows tiny, frozen crystals that would melt in a millisecond if they landed on the ground outside. His goal is to find out why no two snowflakes are ever the same shape, and why they form myriad patterns of plates and needles and ferns and elaborate baroque stars. And he seems to have found the answer, or at least a big part of it. Along the way, he's learned how to make designer snowflakes, developed the art of growing ice by electricity, and discovered what happens to snow when you get it drunk. It started in 1997 as an aesthetic quest. "I just thought, I really would love to make the perfect snowflake," says Libbrecht. Over the past few years he has polished his art, perfecting a snowflake incubation chamber - a steel cylinder about half a metre across with a few tubes attached. Look through a window in the side, and you can see an embryonic snowflake growing on the end of an ice needle. This needle is one of his discoveries, grown at express speed by applying a strong electric field. "Snow crystals are so beautiful because they are both complex and symmetrical - just like sunflowers and seashells," says Libbrecht. By changing the temperature and the amount of water vapour in his chamber, Libbrecht can make any of the fantastic natural shapes that fall on Norway or New Zealand. If he wants a plain, flat plate or a many-fingered dendrite, he just presses a few buttons, waits a few minutes, and there it is. The symmetry of a snowflake, he says, begins at the bottom. In ice, molecules cluster together in hexagonal patterns. That means any surface other than a hexagonal facet will be rough. But water molecules like to wedge themselves into crevices, where they can form plenty of bonds with their neighbours, so these rough surfaces quickly flatten out into smooth facets. The smallest snow crystals, and most of those that grow slowly in the very cold air of Antarctica or the stratosphere, are simple hexagonal plates or columns. But if a crystal grows quickly, it will soon turn from a hexagon into a star. If it sucks up a lot of water from the air, water molecules become scarce near the surface of the ice. Then any bump in the surface will reach out into a relatively rich water supply. With more water available it grows faster than the surfaces below, so it sticks out more, so it grows even faster . . . Before you know it, an arm has formed, stretching out from the original bump.
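The positive feedback just described - a bump reaches richer air, grows faster, and so sticks out further still - can be demonstrated in a few lines of code. The sketch below is our own toy, not Libbrecht's model; the parameters are invented, and all it shows is that tiny random bumps on a growing edge amplify exponentially.

    import random

    # Each site on a crystal edge grows at a rate that increases with how
    # far it already protrudes into the water-rich air.
    heights = [1.0 + random.uniform(-0.01, 0.01) for _ in range(20)]

    for step in range(100):
        mean = sum(heights) / len(heights)
        # Sites above the mean see more water vapour and grow faster.
        heights = [h + 0.05 * (1 + 2.0 * (h - mean)) for h in heights]

    print("spread after growth:", round(max(heights) - min(heights), 2))

Deviations from the average edge height grow by 10 per cent per step here, so an initial roughness of a hundredth of a unit ends up as a spread of hundreds - the seed of an arm.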
A hexagonal crystal has six corners that act just like ready-made bumps, so it will sprout six arms. But why are there so many different shapes? Why are no two snowflakes the same? Part of the answer came back in 1936 from the Japanese scientist Ukichiro Nakaya, who grew snow crystals in his lab at Hokkaido University. Nakaya discovered that if he changed the temperature of the air just a little, he changed the shape of the snow crystals that grew in it. Columns, thin plates, thick plates, sectored plates, hollow needles, dendrites . . . they all form within a few degrees of one another. "Snow crystal growth is real touchy - real sensitive to external conditions," says Libbrecht. Most importantly, the rate at which ice crystals grow zooms up and down by factors of a hundred or more when you tweak the temperature by just a few degrees. And changing the growth rate changes the shape too - remember that fast growth means sprouting arms, slow growth means hexagons. This is the key to snow's kaleidoscopic variations. As a snowflake flutters about in a cloud, it encounters patches of warmer and cooler air. Each arm of the snowflake experiences an identical sequence of changes, so while the flake becomes ever more complicated, the sixfold symmetry is preserved. And each snowflake will have a unique temperature history, so it ends up with a unique structure. This explanation gets us further forward, but in a way it only replaces one mystery with another. Why is snow-crystal growth so exquisitely sensitive? To try to answer that, Libbrecht built a snow cloud in a can. A big copper cylinder lagged with polystyrene, it looks rather like a cheap boiler. At the bottom is a heated pan of water, which evaporates and mixes with cooler air above, producing a mixture that is supersaturated with water. Like the air in a cloud, this mixture is eager to shed its burden of water molecules. But to do that, it needs a trigger - a nucleation site that can kick-start the growth of snow crystals. When Libbrecht drops a flake of dry ice in at the top, it seeds a cascade of snowflakes which fall freely through the air, growing as they go. Finally, they land on a glass plate where a camera records their size and shape. With this equipment, Libbrecht hoped to test an old idea about how snowflakes get their diversity. In 1982, two physicists came up with a model for why the growth rate of a snowflake changes so dramatically with changing temperature. Toshio Kuroda of Hokkaido University, and Rolf Lacmann of the Technical University in Braunschweig, Germany, suggested that tweaking the temperature switches the growth from one mechanism to an entirely different one. For snowflakes in a cloud, the theory goes, the critical change happens at around -15 °C. A little way below this temperature, the molecules have enough energy to compete with the forces binding them to the crystal below, so the surface scrunches up into a slightly jagged, disordered state. Then incoming molecules find a welcoming crevice wherever they land, and growth is fast. Warm the ice a little, though, and the top few nanometres of the crystal melt into a disordered, liquid-like layer. No one knows quite why it happens, but the existence of this "quasiliquid" layer is well established. Without it, ice crystals wouldn't stick together so easily, and you wouldn't be able to make snowballs. Now there are no jagged peaks and troughs, just a smooth ice surface topped with a quasiliquid layer.
Water molecules from the quasiliquid layer may stick briefly to the ice layer beneath, but because they can form relatively few bonds with this flat surface, they have enough energy to pop off again and rejoin the quasiliquid. The crystal doesn't quite stop growing, though. Every now and then, a molecule will linger on the surface long enough to be joined by one or two more. United, the group is a little less likely to leave, and more likely to stay. Once a patch of molecules has reached a certain size, it becomes stable. New arrivals can nestle into the edge of the patch, which therefore spreads across the surface of the crystal. So the crystal grows slowly, in skins. If the flake teeters back and forth between these two temperatures, it will grow fast then slow, as the arm-growing mechanism turns on and off. The resulting complicated, unpredictable series of branchings eventually becomes a beautiful, classical snowflake. To make matters more complicated, these two processes occur at different temperatures on the different crystal faces. At -5 °C, for example, the top and bottom of the crystal grow fast and the side facets slowly, so you get columns. At -15 °C, where the opposite happens, you get plates. At least, that's the theory. But it is almost untested. Scientists disagree about the temperature at which the quasiliquid layer appears, how thick it is, and how it affects the growing ice. "Crystal growth in the presence of surface melting is hard to work out, and very few experiments have been done," Libbrecht says. Now, he has found a strong hint that Kuroda and Lacmann were right. In a paper submitted to the Journal of Crystal Growth, he and his colleague Haitao Yu looked at another of the model's predictions. They describe how growth rates changed as they held the air in their convection chamber at a steady -5.5 °C, and dragged down its moisture content. Without a quasiliquid layer, growth should slow down early on as the moisture supply to the surface is reduced. But this doesn't happen, they found. The crystal continues to grow until the air is very dry indeed - as should happen if the ice surface is sheathed in a water-like layer. So quasiliquid theory seems to be on the right track. But there are still plenty of puzzles in the intricate world of the snowflake. Why, for example, are most plates "sectored", with ridges running out from the centre? And what causes some of the rarer shapes such as triangles and pyramids? Libbrecht still has plenty of work to do, if work is the word. Next he wants to investigate the effect of putting different gases into the air. He reckons that some gases, such as carbon dioxide, dissolve in the quasiliquid layer and radically alter crystal growth. Another powerful contaminant is alcohol. "With just a part in a million of alcohol, growth changes in wild and crazy ways. If you can smell it, that's enough." So keep your mulled wine covered this Christmas, or you might be corrupting a million young snowflakes. Libbrecht is amused by the ways people from different parts of the world react to his work. "I show this stuff to people from tropical countries, and they say 'OK'. Europeans and Americans think it's pretty good. Canadians say 'Oh Wow!'." And what about Libbrecht himself? Snow studies are only a sideline. His day job is developing sensitive detectors for LIGO, the huge US experiment designed to catch gravitational waves from far across the cosmos. So why did he decide to grow snowflakes in Pasadena?
"I have to admit," he says, "the reason it caught my eye was my background." Ken Libbrecht, it turns out, is not a native of southern California. He comes from Fargo. Which kind of snow? Freezer teaser * 20 December 2003 * Valerie Jamieson TIRED of mini umbrellas and sick of olives on sticks? If the cocktails at your Christmas party are in danger of looking a little pass? ths year, Ken Libbrecht has just the thing for you. Why not try adorning your guests' drinks with "ice spikes", gravity-defying icicles that sometimes grow out of ice-cube trays. "They're bizarre things," he says. "People are amazed by them." Libbrecht, a physicist at the California Institute of Technology in Pasadena, has recently become something of an expert in the art of growing ice spikes. Last summer he and his student Kevin Lui spent their days making ice cubes by the thousand. Their goal was to find out why ice spikes only occasionally rise out of freezing water. They seem to have found the answer, or at least a big part of it, and along the way they have learned how to grow the perfect ice spike. The good news is that anyone with a freezer can do it. It all started earlier this year when, out of the blue, someone sent Libbrecht photographs of frozen needles over a centimetre long protruding from a tray of ice cubes. For the past six years, Libbrecht had been growing designer snowflakes in his laboratory in an effort to find out why they form such complex and delicate patterns. Though he had heard about people waking up on cold winter mornings to find the odd ice spike sticking out of their bird baths, he hadn't given the phenomenon much thought. But when he received the photos of tiny ice towers made in a household freezer, he was intrigued, and tried growing them himself at home. At first he had mixed success. Most of the time his ice cubes turned out cubic, as expected. But now and then, a tall spike emerged from his ice tray. After some experimenting, Libbrecht hit upon the secret for making as many as four spiky ice cubes per tray. The trick, he has found, is to use purified water rather than water straight from the tap. But leaving it at that wasn't good enough for Libbrecht. "It bothered me why it worked with distilled water and not tap water," he says. So he and Lui set out to find out exactly what affects the growth of ice spikes. Pop an ice-cube tray filled with water in the freezer and, after about an hour-and-a-half, the surface begins to freeze. This freezing starts at the sides of each compartment of a plastic ice tray because they are covered with microscopic nicks and scratches. Water molecules wedge themselves in these tiny hollows, where they can form plenty of bonds with their neighbours as the temperature falls. And because ice crystals are less dense than water, they float to the surface. The freezing ice then creeps towards the middle until only a small hole remains unfrozen at the centre of the ice cap (see Graphic). At the same time, more ice starts forming around the sides of the cube. And since ice expands as it freezes, the ice below the surface pushes water up through the hole. If the conditions are just right, the meniscus of water forced out of the hole freezes around its rim, forming the base of the spike. As this process continues, the ice spike grows taller until all the water has frozen or, more commonly, the tip of the tube freezes over. 
Using a video camera shut inside a lit freezer compartment, Libbrecht and Lui found that ice spikes grow to their full height surprisingly quickly, within 3 to 10 minutes. Libbrecht has used experiments like these to work out why spikes won't form readily in ordinary tap water. It is all down to impurities. As water freezes around the top of a growing ice tube, its saltiness increases because the dissolved minerals and metals do not fit snugly into the ice crystal lattice. Libbrecht reckons these impurities quickly build up to such high levels at the tip of the spike that the water there can no longer freeze. Any spikes that begin to form in tap water will just stall. To test just how pure the water has to be for ice spikes to form, Lui compared ice cubes made from distilled water with those made from increasingly salty solutions of sodium chloride. With as little as 0.2 milligrams of salt added per litre of water, the chance of an ice cube producing a spike plummeted from 1 in 5 with distilled water to less than 1 in 20. And since tap water typically contains 100 times this concentration of various salts, it is hardly surprising that ice spikes are so rare. So if you want to grow your own ice spikes, you'll need distilled water like the kind sold in supermarkets for pouring into your steam iron or topping up your car battery. What else do you need to watch out for? Though Lui managed to grow ice spikes in dozens of different kinds of freezers, from those found in labs to those found in student dormitories, he found that temperature does make a difference. At -7 °C, half the ice cubes turned spiky. If the temperature plunges, however, so does the likelihood of finding an ice spike. Only 1 in 10 cubes will grow a spike at -15 °C. Libbrecht isn't certain why, but he suspects that in such cold conditions the tip of the embryonic spike freezes shut before it has had a chance to grow. So if your ice cream is really hard, your freezer is too cold to make many ice spikes. Having a modern, frost-free freezer helps too. They hardly ever need defrosting because a fan circulates cold, dry air. The moving air chills the edges of the water droplet perched on top of the growing ice spike faster than in calm conditions. This faster freezing rate at the tip promotes the growth of longer ice spikes. "It also works better if the freezer is empty," says Libbrecht. That's because all those pizzas and bags of frozen peas stop the air from circulating freely. But the good advice doesn't stop there. If you want prizewinning spikes, use a plastic ice cube tray rather than an old-fashioned metal one. When Libbrecht spotted aluminium ice cube trays for sale on eBay, he knew he had to try growing ice spikes from them, but hardly any formed. Metal is such a good conductor of heat that the surface of the water freezes over quickly, completely sealing the hole from which the spike would normally grow. Plastic has just the right amount of insulation for ice spike formation. Soon after they published their findings in October (www.arxiv.org/abs/cond-mat/0310267), Libbrecht and Lui realised they had started a craze. Reports that their tallest ice spike was 5.6 centimetres long were like a challenge for people who had become hooked on ice spikes. Soon Libbrecht received an email from an enthusiast who distilled his water a second time to purify it further. "Using a cereal bowl filled with double-distilled water, he grew an ice spike measuring 10 centimetres long and 1 centimetre across," Libbrecht says.
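The probabilities quoted above are enough to estimate your likely yield before you commit the freezer space. A sketch; the figures are the reported ones, and the 14-cube tray is our assumption.

    # Expected spiky cubes per tray, from the figures quoted above.
    # Tap water should do even worse than the salted case, since it holds
    # roughly 100 times more dissolved salts.
    PROB = {
        "distilled water, -7 C":       0.5,    # half the cubes turned spiky
        "distilled water, -15 C":      0.1,    # 1 in 10
        "distilled (salt experiment)": 0.2,    # the 1-in-5 baseline
        "0.2 mg salt per litre":       0.05,   # less than 1 in 20
    }

    TRAY = 14  # assumed cubes per tray

    for condition, p in PROB.items():
        print(f"{condition}: ~{p * TRAY:.1f} spikes per tray")

On those numbers, a cold-but-not-too-cold freezer and distilled water should give you a handful of spikes per tray; tap water will mostly give you ice cubes.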
It may sound like a lot of trouble to go to for your Christmas party, but it's time well spent. After all, spikes are the perfect ice-breaker. The sinister side of the holly and the ivy * 25 December 1999 * Lynn Dicks DECK the halls with boughs of holly. Drape the mantelpiece with tendrils of ivy. And hang out a sprig of mistletoe for festive harmony. Bringing some of the outdoors inside is all part of the Christmas tradition. In Western society the leaves of holly, ivy and mistletoe are hugely symbolic. But take a closer look at these traditional festive plants. Were the ivy's jagged heart-shaped leaves really designed to curl so delightfully around your banister rail, and the holly's prickles put there just to pretty up your porch? And are mistletoe's fleshy green lobes nature's invitation to make free with whoever you please? For the real story behind this seasonal foliage, you must look to the harsh outdoors. It is all a question of survival-leaf shape can make the difference between life and death. Leaves are probably the world's most attractive factories. Here the complicated business of photosynthesis occurs, with energy from sunlight harnessed to turn carbon dioxide from the air into sugars. Being thin and flat allows leaves to expose the maximum number of cells to air and light. But they must also be held erect if they are to harvest sunlight effectively, and that's where shape comes in. A leaf is like a sheet of fabric held up to the Sun on a skeleton of mechanical struts: the veins. To make this system efficient, the plant needs to minimise the amount of structural support per unit area of leaf tissue. The simplest solution is to have one main strut-the midrib-running down the middle of the leaf, with secondary struts going out to the edge. Each section of midrib supports the tissue on either side of it. But the further away you go from the midrib's attachment point, the longer the "lever", and so the greater the effective weight. Try holding a heavy book at arm's length, and you'll see the principle. By tapering towards the tip, the leaf can counteract this effect, allowing the entire length to be held erect by a single central strut. Tom Givnish from the University of Wisconsin, Madison, has examined the structural mechanics of leaves. "If mechanical efficiency were the only consideration," he says, "plants would all have triangular leaves." Of course, mechanics is not the only consideration. There are other things a plant has to think about, like how to arrange its leaves so they don't block each other's light. "Triangular leaves can't be held efficiently along twigs to harvest sunlight," says Givnish, "because triangles don't pack. But if the base of the leaf is tapered as well, to a sort of kite-shape, then they can be held close together in a circle, or spiral, without overlapping." What Givnish describes is the basic shape of many leaves. Mistletoe, for instance, is a variation on it. So too is holly, if you ignore the spines. So why aren't all leaves this shape? And how do you account for the subtle differences between the shapes of leaves that do fit the model? It all depends on exactly what conditions a plant finds itself in-the climate, the exact location, the life cycle and the risk of being eaten can all heavily influence leaf shape. Take holly, for example. "The holly bears a prickle, as sharp as any thorn," goes the carol. The prickles are certainly a prominent feature of most holly leaves, but have a closer look at those high up at the top of a holly tree.
Often they are not spiny. This is a clue that the holly's prickles have evolved to protect it against plant-eating animals such as deer. Only those leaves within reach of browsing muzzles are spiked to give large herbivores a meal they won't forget. But prickles are expensive to produce. To understand why it is worth it for holly, you need to look at the tree's life cycle. Holly is an evergreen, but it lives among deciduous trees that lose their leaves in winter. So for half the year one of the few sources of food for browsing animals is holly leaves. It is also a fairly short tree and easily accessible, so it needs extra protection to prevent itself from being totally stripped of its leaves. "All hollies that live as evergreens in deciduous forests are spiny," says Peter Grubb from the University of Cambridge. "There are deciduous hollies in North America and Asia that are not spiny at all." And if that doesn't convince you, consider the species of oak that are evergreen. Deciduous oaks tend to have soft, lobed leaves, but most evergreen ones have spiny leaves just like holly. Other plants grow sharp thorns on their stems to keep predators at bay, but prickly leaves are a much more direct deterrent. Evergreens have what it takes to develop this defence strategy because their leaves must be tough and thick enough to survive frosts, and heavily waxed to prevent water loss at times when groundwater is frozen. "These stiff leaves form a solid backing for the spears, so they can really inflict some damage inside the mouth," says Givnish. Ivy, on the other hand, makes itself unpalatable with toxins, but its leaf shape is more of a conundrum. With three or five large lobes, it is strikingly different from the basic kite-shaped leaf. "A great number of climbing plants have leaves something like this, with a heart-shaped base," says Grubb. Think of bindweed or grapevines. The crucial thing is the way the plant grows. These are plants that do not provide their own support, but cling to other plants. "A climber needs its leaves to be facing out sideways, to catch light," says Grubb, since the supporting tree will take most of the light from above. So the leaf is held at a right angle to its stem, instead of in the same plane. The most efficient way to do this is for the midrib to originate near the middle of the leaf, with veins going out to the edges like spokes. Small forest floor plants, such as violets, also have this kind of leaf shape, because their leaf stems grow directly out of the ground. The amazing thing about ivy is that when the plant starts to flower, the leaves revert to the standard shape. "As it pushes the flowers out into space to attract insects, the shoots become self-supporting and the leaves are spirally arranged and shaped the same as many other trees," says Givnish. They are held in the same plane as the leaf stem, and often have full light from above. But there is another dimension to this, which may also explain the different leaf shape on the flowering shoots of ivy. The normal ivy shape is only good in shady conditions, where the temperature tends to be low and water loss is not a problem. Plants cannot avoid wasting water because it evaporates from the leaves through the holes that allow air to enter for photosynthesis. "If you look around in a forest," says Grubb, "you'll notice that the climbing part of ivy never grows right to the top of the crown. It always stops short before it is exposed to full sunlit conditions. 
In fact, no plants in hot, dry places have this broad, heart-shaped leaf form." Givnish believes the explanation lies in the film of stagnant air that clings to any leaf surface. This boundary layer, as it is called, is thicker in a broader leaf. "The boundary layer directly blocks heat loss," Givnish explains, "so in sunny conditions, a broader leaf will get warmer, and more water will evaporate per unit area than from a narrower leaf." Grubb has a different theory. He suspects it is something to do with the difficulty of getting water from the midrib to the leaf's edge, if this is a long way away from the middle. Either way, the pattern is clear. Plants in dry sunny places usually have small, narrow leaves. Under the shady canopy of woodland, plants such as ivy are free to have a shape that is easier to support. The only plant with a truly round leaf is the water lily, which holds its leaf perpendicular to the stem, but also never has a problem with water supply because it is bathed in it all the time. Mistletoe is another plant that doesn't bother with water conservation. There are more than a thousand species around the world and all are hemiparasitic-they grow directly on other plants, tapping into the plumbing of their host to extract water and nutrients, but use photosynthesis to produce their own sugars. "Because they are water parasites, mistletoes have to be water spenders," says Peter Bannister from the University of Otago in New Zealand. "They have to produce a water deficit, so that water flows from the tree into them." This explains why mistletoe doesn't have a thick waterproof coating on the top of its leaves, like holly. "Many of them are also more succulent, or fleshy, than their hosts," adds Bannister, because they keep drawing water even when they do not need it. The shape of mistletoe leaves varies considerably. About half of mistletoe species in Australia and New Zealand have leaves that look like those of their host tree. One, for example, has a eucalyptus-like leaf, while another, living on a tree with needle-like leaves, has very reduced leaves too. A possible explanation is that chemicals such as plant hormones pass from the host to the mistletoe and affect leaf shape. But there is little direct evidence for this, and the weight of opinion is with an alternative idea-mimicry. These mistletoes have to contend with tree-dwelling herbivores such as possums, sloths and monkeys. The animals are implicated in one of the most bizarre effects in plant biology. The mistletoes, it seems, are disguised as the tree to avoid being eaten. At first Bannister was sceptical of this idea. "I thought it was more likely to be to do with the plants experiencing the same physical environment," he says. But his investigations and those of Jim Ehleringer from the University of Utah, Salt Lake City, have shown that the mimicking mistletoes have a special reason to avoid being spotted. They contain higher levels of nitrogen than their hosts. Nitrogen is a very important resource for herbivores, and some animals actively select nitrogen-rich plants. So nitrogen-rich mistletoes would have been under greater pressure to avoid detection. In the northern hemisphere, mistletoe does not mimic the host's leaves. But then look at the trees they choose to parasitise. The Australasian trees are mostly evergreen, whereas in temperate climates the host trees are usually deciduous.
There is little point in cunningly disguising yourself as the leaves of your host if in the winter your tree loses its leaves and you are left standing out a mile. So when you're hanging your holly wreath, or kissing under the mistletoe this Christmas, consider the leaves with new respect. To a plant, a leaf is no mere adornment: it is a vital piece of anatomy. And be thankful for the variety of strategies and situations in the plant world. Imagine how dull your wreaths and arrangements would be if all leaves were triangular. Festive decorations drive the plundering of moss * 25 December 2004 * Gail Vines BOUGHT a classy Christmas wreath this year? Then chances are that tucked away amidst the boughs and the berries you will find the desiccated remains of a moss - harvested, perhaps, on the other side of the planet. "Once you start noticing, you see wild-harvested moss everywhere," says Patricia Muir from Oregon State University in Corvallis. Even that green carpeting under the nativity scene could be "sheet moss", sold by the yard and rolled up like bolts of cloth. The moss's feathery fronds have been glued to a fabric backing, dyed a lurid green and sprayed with fire retardant. And this stuff doesn't just turn up in Christmas decorations; it can be found everywhere from funeral parlours to airports, motor shows and the lobbies of elegant hotels. In the past decade, consumers with an eye for the "natural" have unwittingly fuelled a fast-growing trade in mosses. As moss is not yet grown commercially, this demand can only be met by plundering nature's store. And no one really knows how big the industry is. "That's part of the problem," says Muir. Her latest study shows that it must be worth at least $5.5 million a year in the US, and could be as much as 30 times that amount, given that harvesting carried out under permit is just the tip of the iceberg. Mosses have always been put to good use (see "Not just for Christmas"), but a recent boom in demand threatens woodlands and moorlands worldwide. New Zealand was one of the first countries to cash in. The decorative moss industry took off almost by chance in the 1980s, when a Kiwi promoting his native goods in Tokyo covered his stand with dried sphagnum moss. To his surprise, he soon received a NZ$400,000 (about US$230,000) order - for 12 container loads of moss for the Japanese floral trade, particularly for orchid growers. Overnight, sphagnum became a major export. By 1990 some 700 tonnes of dried sphagnum, worth NZ$10 million, was being shipped abroad every year. In recent years the export trade has been valued at NZ$18 million annually. And some New Zealanders have recently teamed up with Chilean entrepreneurs to begin sphagnum harvesting at Puerto Varas in southern Chile. It takes 15 wet tonnes to produce 1 tonne of dry moss, so harvesting it is a major operation. Helicopters have even been drafted in to lift sphagnum from swamps in the southern and western parts of the South Island. The moss is harvested from private land or from swamps leased by the Department of Conservation and deemed to be of low conservation value. However, the ecological importance of moss is a moot point. A recent report by the Ministry of Agriculture and Forestry concluded that "the level of harvest of sphagnum moss determined to be sustainable is unknown". Home-grown bounty Today dried moss from New Zealand is imported to the UK preformed into liners for hanging baskets.
But the UK home-grown trade is expanding quickly, according to Alison Dyke of the non-governmental organisation Reforesting Scotland. What was once a cottage industry is becoming commercialised. Wild moss is proving to be a lucrative forest product which, along with pheasant shooting, rivals the value of the timber itself. On a good site a "mosser" can fill 400 bags in a day, with bags selling at £1 each. One Welsh firm shifts around 50 truckloads of moss and foliage each year. In the UK, most of the legal collecting tends to be restricted to the parts of conifer plantations earmarked for clear-felling. Much harvesting is on a modest scale and supplements the livelihoods of local farmers and forestry workers, says Helen Sanderson from the Royal Botanic Gardens at Kew, London. "But illicit collecting on moorland and in mixed semi-natural woodlands is worrying," she says. In the US, alarm bells are already ringing. Here, the burgeoning trade in wild moss is concentrated in the deciduous woodlands of the Appalachian mountains in the east and the temperate rainforests of the Pacific north-west. The moss in the rainforests is epiphytic, meaning it grows on trees, and the area boasts luxuriant mats of moss and liverworts dripping from the branches of huge old trees. Twenty years ago this bryological bounty was exploited modestly, mostly by self-employed locals who collected moss as a stopgap between logging or fisheries work. Since the early 1990s, however, the locals have been sidelined by labour contractors who bring in crews of poorly paid immigrant workers to strip whole sections of forest. Up to 40,000 tonnes of moss are now being removed from US forests every year, Muir says. "Markets are driving what is happening on the ground," says Rebecca McLain, a policy analyst at the Institute for Culture and Ecology in Portland, Oregon. A few controlling companies buy up most of the harvesting leases then sublease to labour contractors. "You can't blame any individual, they are all trying to make a living," says McLain. "The problem lies in the way the system is structured." Big, short-term leases encourage over-exploitation, and the supply chain exacerbates the situation. The gang-harvested moss is sold on to garden centres and florists, which tend not to ask where it came from or how it was collected. "Until the floral industry starts to worry about its green credentials and consumers become more discerning, things are not going to change," Dyke says. Collectors do target common mosses, but they can't help gathering up rarer species too, says Susan Studler of West Virginia University in Morgantown. In one typical 100-kilogram load, along with the two or three common species of feather moss that were the collector's quarry, she found 75 different species of moss and lichen as well as several rare ferns. Hundreds of invertebrate species and even young salamanders are also scooped up. "This high 'incidental take' of species is what particularly concerns me," Studler says. In an attempt to control the trade, forest managers have banned collecting in some places, but government authorities don't have the manpower to enforce their own rules. Besides, tighter regulations on public land seem to have intensified harvesting in the remaining forests, worsening the ecological impact, according to JeriLynn Peck of the University of Minnesota, St Paul, a pioneering researcher on the US moss trade.
She believes that not all moss harvesting is unsustainable, but that exploitation at present rates could destroy natural ecosystems that have taken years to evolve. "Experimentally harvested plots suggest that some mosses need at least 10 years and probably as many as 30 years to regrow," she says. Worse, epiphytic mosses may only be able to colonise the rough young twigs of saplings, not the smooth branches of veteran trees. If that is true, these moss mats are as old as the trees themselves - hundreds of years old in many cases. "By definition this is an unsustainable harvest," says Robin Wall Kimmerer from the State University of New York at Syracuse, "and the loss will have consequences we cannot foresee." Mosses can hold 10 times their weight of water. In the forest they act as slow-release sponges, buffering the flow of water and nutrients. They are the forest's seedbeds, form insulating and antiseptic nest liners for birds and mammals, and provide sources of food and shelter for thousands of creatures. Salamanders breed and feed in moss mats, for example, and in turn become food for animals higher up the food chain. Remove the moss - a major link in a web of interactions - and the whole ecosystem could be disrupted. No one knows whether it will prove possible to regulate moss harvesting. One telling case study reveals that even small-scale removal may cause damage. Researchers accompanied an extended family of 10 people living in the mountains of Mexico as they gathered moss destined for Christmas nativity scenes. In the 1996 season the collectors removed 50 tonnes wet weight of moss from the forest floor. They harvested the moss patchily, only taking about 2 per cent of the total surface area. That may be sustainable, but with the moss they inadvertently gathered the seedlings of fir trees (Bryologist, vol 104, p 517). As a result, the researchers conclude, the harvest could threaten the long-term regeneration of the forest. Worse still, it jeopardises a flagship protected species, because these are the very trees that provide the winter resting place for monarch butterflies after their famous annual transcontinental migration. Green-nosed reindeer With enough ecological knowledge and plenty of determination on the part of governments and conservationists, it may be possible to control the excesses of big industrial harvesters and balance forest conservation with local livelihoods. But that will inevitably take time. In the meantime, we the consumers should consider our responsibilities. Mosses may look good in Christmas decorations, but is it worth it? Kimmerer is certain it is not. This is like witnessing "an antique tapestry ripped to shreds and stuffed in a bag", she says. "All this destruction - and for what?" Kimmerer is horrified to find that in upstate New York you can buy green teddy bears and life-sized reindeer made of wire frames stuffed with Oregon moss. "The time to be a bystander has passed," she says. Not just for Christmas The commercial trade in mosses is new, but our resourceful forebears have relied on the little green fronds for millennia. For restful slumbers Sleeping on a pillow stuffed with hypnum moss was said to bring sweet dreams. Linnaeus, the father of modern plant taxonomy, spoke highly of the bedroll of polytrichum moss he enjoyed while travelling with the Sami people of Lapland.
Foot salve The 5200-year-old body of the Ice Man discovered in a melting Tyrolean glacier in 1991 was wearing boots packed with mosses, including species found only in lowland valleys 100 kilometres away. Today, odour-eating liners for walking boots sometimes contain a layer of sphagnum moss. Weapon of war Legend has it that Spanish Christians used moss to hide from the Moors outside the western town of Béjar. After covering themselves with the plant, they crept to the foot of the Moorish fortress and lay disguised as rocks. When the gates were opened, the moss men sprang up and defeated the unsuspecting guards. Natural bubble wrap Resistant to rot and mould, moss is an ideal packing material. The cemetery lawns of New York state have been liberally colonised by one European moss species that probably arrived in the late 1800s packed round imported nursery plants. In the bathroom Able to soak up many times its own weight in water, moss has long been put to good use as nappies (diapers), toilet paper and sanitary protection. Wound dressing Moss extracts are antiseptic and can fight fungal skin infections. In both world wars, when conflict disrupted supplies of cotton from Egypt, sphagnum moss was collected for the Red Cross and sent to the front to pack soldiers' wounds. Meltdown * 19 December 1998 * Gail Vines EACH year millions of people send greeting cards bearing images of what is, let's face it, just a couple of balls of snow. Not that manufacturers are complaining: snowmen are good for business. Increasingly standardised, depictions of these smiley chaps with their carrot noses, bright scarves and bedraggled hats can shift not only cards but a wide assortment of Christmas decorations, not to mention confectionery, books, videos and soft toys. Today, the snowman is up alongside Santa as a secular icon of Christmas, at least throughout northern Europe and North America. But what is he doing there? As a figure of jollity and fun, the snowman undoubtedly contributes to the carnival air of the Christmas festival, argues Tricia Cusack, who lectures in history of art, architecture and design at the University of Birmingham. The snowman's intrinsic good humour is captured by his French appellation: he is "bonhomme de neige". His rotund body conveys an air of bacchanalia, of celebratory overindulgence. But snowmen do a lot more symbolic work besides, Cusack contends. For a start, by simply standing out there alone in the cold, the snowman provides the perfect contrast to the putative seasonal warmth of the family within. Fantasies about perfect families are central to Christmas iconography these days, and here images of snowmen can do double duty. "As one of the minor household gods at Christmas," says Cusack, "the snowman both enhances the element of fantasy and magic for children and reminds adults of childhood." If so much seems benign, there is more that is less so, says Cusack. Snowmen may look innocent, but nowadays, even they cannot escape connotations of class, gender, and even "race". Social commentators have noted that the Christmas festival can seem a rather "white" affair, exacerbated, perhaps, by the very whiteness of the snowman himself. Gender, however, is undoubtedly at the heart of the snowman phenomenon. Snowmen look androgynous but are presumed to be male, says Bernard Mergen. A historian at George Washington University, Mergen is one of a select band of scholars who have made the cultural significance of snow their specialist subject.
"Woody Allen cleared up the ambiguity in his movie Radio Days, when he had two boys decorate their snowman with a carrot penis," says Mergen. "Reports of anatomically correct nude snowmen regularly appear in the press." Occasional images of snow-women are seen as funny, precisely because they overturn gender expectations. So what sort of a man is this? The combination of the snowman's masculinity and his "ritual location in the semi-public space of garden or field" is telling, Cusack contends. It's no accident that the Victorian promotion of Christmas as a family-centred celebration went hand in hand with the ideology that a woman's place is in the home. Traditional celebrations of Christmas underline women's supposed role in the domestic sphere, while the snowman presents an image of masculine control of public space. "The snowman's presence is a reminder of masculine dominance and predominance outside the home," argues Cusack. It becomes "a household god keeping nature in orderit represents masculine ordering and surveillance." Despite this, the snowman remains essentially an outsider, excluded from the family circle. Traditional Christmas scenes show a solitary snowman viewed through a window from the warmth of a roaring hearth. Cusack argues that as well as being a symbol of male dominance, the snowman can also represent a potential object of charity. No wonder he has so few accessories and a Dickensian hat. She suspects that the Victorians enjoyed the snowman as a symbolic outcast, available to be the recipient of a new red scarf. "The snowman, cared for, smiles back, and the family within gain satisfaction both from their cosiness and their charity." But if Cusack sees England's Victorian underclass in today's snowmen, Mergen points to a more subversive role for the snowmen built on the streets of American cities in the 19th century. Kitted out with a tall stovepipe hat or a derbyboth of which bespoke wealth and social pretentionsthe traditional American snowman was, on the whole, "a symbol of authority to be attacked", Mergen concludes. Unwittingly, perhaps, Daniel Carter Beard, one of the founders of the Boy Scouts of America in the 1880s, promoted the building of snowmen as a wholesome activity for young boys, who were supposed to confine snowballs to those harmless targets. Intent on more subversive activity, the youths made snow effigies of male authority figures and then destroyed them. Carrots and sticks The sociology of snowman-building today remains shockingly under-researched, despite an exhaustive catalogue of the materials used in the construction of snowmen by two American scholars, Avon Neal and Ann Parker, in the late 1960s. According to their investigations, eyes can be fashioned out of chunks of coal, shrivelled apples, stones, bolts, electrical fuses, bottle caps, champagne corks, batteries, buttons and nutshells. Additional accoutrements for mouths include twigs, pebbles, rusty horseshoes, false teeth and possibly a pipe. Carrots tend to be favoured for noses, but corncobs, sticks and clothes pegs have also served that purpose, while in Raymond Briggs's celebrated children's story and animated film, The Snowman, a clementine (or similar citrus fruit) serves as nasal equipment. Hints of nakedness are offset by the provision of a row of "buttons". The finishing touches are provided by a scarf and, most famously, a hat, or at a pinch a hat substitute such as an old pot. 
Tradition, tradition

A straw poll of half a dozen embassies in London suggests that snowmen thrive in countries as diverse as Sweden, Germany, Russia and Japan. "It is traditional to use coal for the eyes," says the spokeswoman at the Swiss Embassy, "but these days it can be difficult for children to find." The Russian press officer said that buckets made good substitutes for hats, which are presumably all in use on human heads. Carrots for noses are universally favoured. Indeed, one enterprising British company in West Sussex has marketed "Grow Your Own Snowman" packets, which on closer inspection turn out to contain carrot seeds. "We had planned to offer you the opportunity to grow your own Frozen Friend with our amazing patented whizzo F111 hybrid Sno-seeds, the result of crossing a snowdrop with a mangel-wurzel in sub-zero conditions," the packaging explains. "Unfortunately, all the seeds melted on the way to the toy factory. So instead we have substituted carrot seeds."

Yet, jokey vegetables aside, there's a hint of menace about snowmen. In Frosty the Snowman, a hit song of the 1950s, children run to follow Frosty to the town square, where he evades a policeman and then disappears. His anarchic behaviour befits someone made of snow, says Mergen, which is "paradoxically hard and soft, substantial yet ephemeral". Briggs's snowman also becomes a touch frightening when he subverts his accepted status as an outsider and comes inside the house. As the parents sleep, he makes mischief with their property, even trying on a set of false teeth soaking by the bedside, and then flies away with their young son.

Perhaps our ambivalence towards snow is showing. Could it be that we make snowmen look so friendly to compensate for our intrinsic fear of all that white stuff falling from the sky? Humans may have sought to assert their dominance over snow by shaping it in their own image, argues Mergen. But if the snowman is in some way a suggestion that nature can be enjoyed but also tamed and controlled, his demise as winter gives way to spring reasserts the natural cycles of death and rebirth. Even The Snowman ends with the main protagonist reduced to a hat in a puddle; no wonder the haunting theme music is in a minor key.

Poets have written of the snowman's plight, seeing him as Everyman, doomed to transience. The Canadian poet P. K. Page pictures him eventually melting away into a "landscape without love". As her snowmen melted, they "greyed a little too, growing sinister and disreputable in their sooty fur", she wrote in The Snowman. And in Wallace Stevens's bleak poem of 1921, also called The Snow Man, he evokes "the listener, who listens in the snow, and nothing himself, beholds nothing that is not there and the nothing that is". In Mergen's view, Stevens's snowman is a quintessential 20th-century American. American novelty shops sell plastic snow domes, filled with bits of plastic snow, lumps of coal, a hat and a carrot, entitled "California snowmen".

In an age of global warming, will we come to see the snowman's destiny as a potent symbol of our collective future? Perhaps the human genius for self-destruction is best symbolised by one card on sale in Britain this Christmas: it shows three gleeful snowmen gathered round a roaring brazier, warming themselves by the fire.
From checker at panix.com Thu Dec 22 20:36:16 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 22 Dec 2005 15:36:16 -0500 (EST)
Subject: [Paleopsych] spiked: Why humans are superior to apes
Message-ID:

Why humans are superior to apes
http://www.spiked-online.com/Printable/0000000CA40E.htm
4.2.24
by Helene Guldberg

Humanism, in the sense of a faith in humanity's potential to solve problems through the application of science and reason, is taking quite a battering today. As the UK medical scientist Raymond Tallis warns, the role of mind and of self-conscious agency in human affairs is denied 'by anthropomorphising or "Disneyfying" what animals do and "animalomorphising" what human beings get up to' (1).

One of the most extreme cases of 'animalomorphism' in recent years has come from the philosopher John Gray, professor of European thought at the London School of Economics. In his book Straw Dogs: Thoughts on Humans and Other Animals, Gray argues that humanity's belief in our ability to control our destiny and free ourselves from the constraints of the natural environment is as illusory as the Christian promise of salvation (2). Gray presents humanity as no better than any other living organism - even bacteria. We should therefore not be too concerned about whether humans have a future on this planet, he claims. Rather, it is the balance of the world's ecosystem that we should really worry about: 'Homo rapiens is only one of very many species, and not obviously worth preserving. Later or sooner, it will become extinct. When it is gone the Earth will recover.'

Thankfully, not many will go along with John Gray's image of humans as a plague upon the planet. For our own sanity, if nothing else, we cannot really subscribe to such a misanthropic and nihilistic worldview. If we did, surely we would have no option other than to kill ourselves - for the good of the planet - and try to take as many people with us as possible? However, even if many reject Gray's extreme form of anti-humanism, many more will go along with the notion that animals are ultimately not that different from us. The effect is the same: to denigrate human abilities.

Today, a belief in human exceptionalism is distinctly out of fashion. Almost every day we are presented with new revelations about how animals are more like us than we ever imagined. A selection of news headlines includes: 'How animals kiss and make up'; 'Male birds punish unfaithful females'; 'Dogs experience stress at Christmas'; 'Capuchin monkeys demand equal rights'; 'Scientists prove fish intelligence'; 'Birds going through divorce proceedings'; 'Bees can think say scientists'; 'Chimpanzees are cultured creatures' (3).

The argument is at its most powerful when it comes to the great apes - chimpanzees, gorillas and orangutans. One of the most influential opponents of the 'sanctification of human life', as he describes human exceptionalism, is Peter Singer, author of Animal Liberation and co-founder of the Great Ape Project (4). Singer argues that we need to 'break the species barrier' and extend rights to the great apes, in the first instance, followed by all other animal species. The great apes are not only our closest living relatives, argues Singer, but they are also beings who possess many of the characteristics that we have long considered distinctive to humans.

Is it the case that apes are just like us? Primatology has indeed shown that apes, and even monkeys, communicate in the wild.
Jane Goodall's observations of chimpanzees show that not only do they use tools, but that they also make them - using sticks to fish for termites, stones as anvils or hammers, and leaves as cups or sponges. Anybody watching juvenile chimps playfighting, tickling each other and giggling will be struck by their human-like mannerisms and their apparent expressions of glee. But one has to go beyond first impressions in order to establish to what extent great ape abilities can be compared to those of humans. Is it the case that ape behaviour is the result of a capacity for some rudimentary form of human-like insight? Or can it be explained through Darwinian evolution and associative learning?

Associative learning, or contingent learning, is a concept developed by the school of behaviourism, and above all by B. F. Skinner, one of the twentieth century's most influential psychologists, to describe a type of learning that results from an association between an action and a reinforcer, in the absence of any insight. Skinner became famous for his work with rats, pigeons and chickens using his 'Skinner box'. In one experiment he rewarded chickens with a small amount of food (the reinforcer) when they pecked a blue button (the action). If the chicken pecked a yellow, green or red button, it would get nothing. On this view, animals behave in the way that they do because this kind of behaviour has had certain consequences in the past, not because they have any insight into why they are doing what they do (a minimal simulation of such learning is sketched below).

In Intelligence of Apes and Other Rational Beings (2003), primatologist Duane Rumbaugh and comparative psychologist David Washburn argue that ape behaviour cannot be explained on the basis of contingent learning alone (5). Apes are rational, they claim, and do make decisions using higher order reasoning skills. But the evidence for this is weak, and getting weaker, as more rigorous methodologies are developed for investigating the capabilities of primates. As a result, many of the past claims about apes' capacity for insight into their own actions and those of their fellow apes are now being questioned.

Cultural transmission and social learning

The cultural transmission of behaviour, where actions are passed on through some kind of teaching, learning or observation rather than through genetics, is often used as evidence of apes' higher order reasoning abilities. That evidence is currently being revised. The generation-upon-generation growth in human abilities has historically been seen as our defining characteristic. Human progress has been made possible through our ability to reflect on what we, and our fellow humans, are doing - thereby teaching, and learning from, each other.

The first evidence of cultural transmission among primates was found in the 1950s in Japan, with observations of the spread of potato washing among macaque monkeys (6). One juvenile female pioneered the habit, followed by her mother and closest peers. Within a decade, the whole of the population under middle age was washing potatoes. A review by Andrew Whiten and his colleagues of a number of field studies reveals evidence of at least 39 local variations in behavioural patterns, including tool-use, communication and grooming rituals, among chimpanzees - behaviours that are common in some communities and absent in others (7). So it seems that these animals are capable of learning new skills and of passing them on to their fellows.
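To make that behaviourist baseline concrete, here is a minimal sketch, in Python, of contingent learning of the kind Skinner's box demonstrates. The three-button setup, the update rule and all of the names are illustrative assumptions rather than a model drawn from the literature; the point is only that a simple reward-driven loop can produce the 'correct' behaviour without anything resembling insight.

    # Minimal sketch of associative (contingent) learning, assuming a
    # Skinner-box setup like the one described above: three buttons, of
    # which only the blue one is ever rewarded. All names are illustrative.
    import random

    BUTTONS = ["blue", "yellow", "red"]

    def run_trials(n_trials=1000, learning_rate=0.1):
        # Start with no preference: equal associative strength per button.
        strength = {b: 1.0 for b in BUTTONS}
        for _ in range(n_trials):
            # Choose a button in proportion to its learned strength.
            total = sum(strength.values())
            action = random.choices(
                BUTTONS, weights=[strength[b] / total for b in BUTTONS])[0]
            # The reinforcer: food arrives only after a peck at blue.
            reward = 1.0 if action == "blue" else 0.0
            # Strengthen the action-reward association; nothing in the
            # program records *why* blue pays off.
            strength[action] += learning_rate * reward
        return strength

    print(run_trials())  # the strength for "blue" dominates after enough trials

After enough trials the simulated bird pecks blue almost every time, yet nothing in the loop represents why blue is rewarded; that is precisely the distinction the behaviourists drew between contingent learning and insight.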
The question remains: what does this tell us about their mental capacities? The existence of cultural transmission is often taken as evidence that the animals are capable of some form of social learning (such as imitation) and possibly even teaching. But there is in fact no evidence of apes being able to teach their young. Michael Tomasello, co-director of the Wolfgang Köhler Primate Research Center in Germany, points out that 'nonhuman primates do not point to distal entities in the environment, they do not hold up objects for others to see and share, and they do not actively give or offer objects to other individuals. They also do not actively teach one another' (8).

Yet even if apes cannot actively teach each other, if they are capable of social learning - in terms of imitation (which it has long been assumed that they are) - this would still imply that they are capable of quite complex cognitive processes. Imitation involves being able to appreciate not just what an act looks like when performed by another individual, but also what it is like to do that act oneself. The imitator must be able to put itself in another's shoes, so to speak. However, the comparative psychologist Bennett Galef points out, after scrutinising the data from Japan, that the rate at which the behaviour spread among the macaque monkeys was very slow and steady, not accelerated as one might expect in the case of imitation (9). It took up to a decade for what, in human terms, would be described as a tiny group of individuals to acquire the habit of the 'innovator'. Compare this to the human ability to teach new skills and ways of thinking and to learn from each other's insights, which laid the foundation for the agricultural and industrial revolutions, the development of science and technology, and the transformations of our ways of living that flow from these.

A review of the literature on primate behaviour reveals that there is in fact no consensus among scientists as to whether apes are capable of even the simplest form of social learning - imitation (10). Instead it could be the case that the differences in their behavioural repertoires are the result of what has been termed stimulus enhancement. It has been shown in birds, for instance, that the stimulus enhancement of a feeding site may occur if bird A sees bird B gaining food there. In other words, the bird's attention has been drawn to a stimulus, without any knowledge or appreciation of the significance of the stimulus. Others argue that local variations may be due to observational conditioning, where an animal may learn about the positive or negative consequences of actions, not on the basis of experiencing the outcomes itself, but on the basis of seeing the responses of other animals. This involves a form of associative learning (learning from the association between an action and the reinforcer), rather than any insight.

Michael Tomasello emphasises the special nature of human learning. Unlike animals, he argues, humans understand that in the social domain relations between people involve intentionality, and in the physical domain that relations between objects involve causality (11). We do not tend to respond blindly to what others do or say but, to some degree, analyse their motives. Similarly, we have some understanding of how physical processes work, which means we can manipulate the physical world to our advantage and continually develop and perfect the tools we use to do so.
Social learning and teaching depend on these abilities, and human children embark on this task at the end of their first year. Because other primates do not understand intentionality or causality, they do not engage in cultural learning of this type. The fact that it takes chimps up to four years to acquire the necessary skills to select and adequately use tools to crack nuts implies that they are not capable of true imitation, never mind any form of teaching. Young chimps invest a lot of time and effort in attempts to crack nuts that are, after all, an important part of their diet. The slow rate of their development raises serious questions about their ability to reflect on what they and their fellow apes are doing.

Language

But can apes use language? Groundbreaking research by Robert Seyfarth and Dorothy Cheney in the 1980s on vervet monkeys in the wild showed that their vocalisations went beyond merely expressing emotions such as anger or fear. Their vocalisations could instead be described as 'referential' - in that they refer to objects or events (12). But it could not be established from these studies whether the callers vocalised with the explicit intent of referring to a particular object or event, for instance the proximity of a predator. And Seyfarth and Cheney were careful to point out that there was no evidence that the monkeys had any insight into what they were doing. Their vocalisations could merely be the result of a form of associative learning.

Later experiments have attempted to refine the analyses in order to establish whether there is an intention to communicate: that is, an understanding that someone else may have a different perspective or understanding of a situation, and the use of communication to change the other's understanding. It is too early to draw any firm conclusions on this question from the research carried out to date. There is no evidence that primates have any, even rudimentary, human-like insight into the effect of their communications. But neither is there clear evidence that they do not. What is clear, however, is that primates, as with all non-human animals, only ever communicate about events in the here and now. They do not communicate about possible future events or previously encountered ones. Ape communications cannot therefore be elevated to the status of human language. Human beings debate and discuss ideas, constructing arguments, drawing on past experiences and imagining future possibilities, in order to change the opinions of others. This goes way beyond warning fellow humans about a clear and present danger.

Deception and Theory of Mind

What about the fact that apes have been seen to deceive their fellows? Does this not point towards what some have described as a Machiavellian Intelligence (13)? Primatologists have observed apes in the wild giving alarm calls when no danger is present, with the effect of distracting another animal from food or a mate. But again the question remains whether they are aware of what they are doing. To deceive intentionally, they would have to have some form of a 'theory of mind' - that is, the recognition that one's own perspectives and beliefs are sometimes different from somebody else's. Although the psychologist Richard Byrne argues that the abilities of the great apes are limited compared with even very young humans, he claims that 'some "theory of mind" in great apes but not monkeys now seems clear' (14).
However, as the cognitive neuroscientist Marc Hauser points out, most studies of deception have left the question of intentionality unanswered (15). Studies that do attribute beliefs-about-beliefs to apes tend to rely heavily on fascinating, but largely unsubstantiated, anecdotes. As the professor of archaeology Steven Mithen points out, 'even the most compelling examples can be explained in terms of learned behavioural contingencies [associative learning], without recourse to higher order intentionality' (16). So even if apes are found to deceive, that does not necessarily imply that the apes know that they are deceiving. The apes may just be highly adaptive and adept at picking up useful routines that bring them food, sex or safety, without necessarily having any understanding or insight into what they are doing.

Self-awareness

Although there is no substantive evidence of apes having a theory of mind, they may possess its precursor - a rudimentary self-awareness. This is backed up by the fact that, apart from human beings, apes are the only species able to recognise themselves in the mirror. In the developmental literature, the moment when human infants first recognise themselves in the mirror (between 15 and 21 months of age) is seen as an important milestone in the emergence of the notion of 'self'. How important is it, then, that apes are capable of the same sort of mirror recognition? The development of self-awareness is a complex process, with different elements emerging at different times. In humans, mirror recognition is only the precursor to a continually developing capacity for self-awareness and self-evaluation. Younger children's initial self-awareness is focused on physical characteristics; with maturity comes a greater appreciation of psychological characteristics. When asked 'who am I?', younger children cite outer, visible characteristics - such as gender and hair colour - while older children tend to cite inner attributes - such as feelings and abilities. The ability of apes to recognise themselves in the mirror does not necessarily imply a human-like self-awareness or the existence of mental experiences. They seem able to represent their own bodies visually, but they never move beyond the stage reached by human children in their second year of life.

Children

Research to date presents a rather murky picture of what primates are and are not capable of. Field studies may not have demonstrated conclusively that apes are incapable of understanding intentionality in the social domain or causality in the physical domain, but logically this must be the case: if apes did possess such understanding, it would lead to a much more flexible kind of learning than anything observed. It may be that the great apes do possess some rudimentary form of human-like insight. But the limitations of this rudimentary insight (if it exists at all) become clear when exploring the emergence, and transformative nature, of insight in young children. We are not born with the creative, flexible and imaginative thinking that characterises humans. It emerges in the course of development: humans develop from helpless biological beings into conscious beings with a sense of self and an independence of thought. The study of children can therefore give us great insights into the human mind.
As Peter Hobson, professor of developmental psychopathology and author of The Cradle of Thought: Exploring the Origins of Thinking, states: 'It is always difficult to consider things in the abstract, and this is especially the case when what we are considering is something as elusive as the development of thought. It is one of the great benefits of studying very young children that one can see thinking taking place as it is lived out in a child's observable behaviour' (17).

Thinking is more internalised, and therefore hidden, in older children and adults, but it is more externalised and nearer to the surface in children who are just beginning to talk. Hobson makes a persuasive case for human thought, language, and self-awareness developing 'in the cradle of emotional engagement between the infant and caregiver'. Emotional engagement and communication, he argues, are the foundation on which creative symbolic thought develops. Through reviewing an array of clinical and experimental studies, Hobson captures aspects of human exchanges that happen before thought. He shows that even in early infancy children have a capacity to react to the emotions of others. This points to an innate desire to engage with fellow human beings, he argues. However, with development, that innate desire is transformed into something qualitatively different.

So, for instance, at around nine months of age, infants begin to share their experiences of objects or actions with others. They begin to monitor the emotional responses of adults, as conveyed by facial expression or tone of voice. When faced with novel situations or objects, infants look at their carers' faces and, by picking up emotional signals, they decide on their actions. When they receive positive, encouraging signals, they engage; when the signals are anxious or negative, they retreat. Towards the middle of the second year these mutually sensitive interpersonal engagements are transformed into more conscious exchanges of feelings, views and beliefs.

Hobson is able to show that the ability to symbolise emerges out of the cradle of early emotional engagements. With the insight that people-with-minds have their own subjective experiences and can give things meanings comes the insight that these meanings can be anchored in symbols. This, according to Hobson, is the dawn of thought and the dawn of language: 'At this point, [the child] leaves infancy behind. Empowered by language and other forms of symbolic functioning, she takes off into the realms of culture. The infant has been lifted out of the cradle of thought. Engagement with others has taught this soul to fly.' (p274)

The Russian psychologist Lev Vygotsky showed that a significant moment in the development of the human individual occurs when language and practical intelligence converge (18). It is when thought and speech come together that children's thinking is raised to new heights and they start acquiring truly human characteristics. Language becomes a tool of thought, allowing children increasingly to master their own behaviour. As Vygotsky pointed out, young children will often talk out loud - to themselves, it seems - when carrying out particular tasks. This 'egocentric speech' does not disappear, but gradually becomes internalised into private inner speech - also known as thought.
Vygotsky and Luria concluded that 'the meeting between speech and thinking is a major event in the development of the individual; in fact, it is this connection that raises human thinking to extraordinary heights' (19). Apes never develop the ability to use language to regulate their own actions in the way that even toddlers are able to do. With the development of language, children's understanding of their own and other people's minds is transformed. So by three or four years of age, most children have developed a theory of mind. This involves an understanding of their own and others' mental life, including the understanding that others may have false beliefs and that they themselves may have had false beliefs.

When my nephew Stefan was three years of age, he excitedly told me that 'this is my right hand [lifting his right hand] and this is my left hand [lifting his left hand]. But this morning [which is the phrase he used for anything that had happened in the past] I told daddy that this was my left hand [lifting his right hand] and this was my right hand [lifting his left hand]'. He was amused by the fact that he had been mistaken in his knowledge of what is right and what is left. He clearly had developed an understanding that people, including himself, have beliefs about things and that those beliefs can be wrong as well as right. Once children are able to think about thoughts in this way, their thinking has been lifted to a different height.

The formal education system requires children to go much further in turning language and thought in upon themselves. Children must learn to direct their thought processes in a conscious manner. Above all, they need to become capable of consciously manipulating symbols. Literacy and numeracy serve important functions in aiding communication and the manipulation of numbers. But they also have transformative effects on children's thinking, in particular on the development of abstract thought and reflective processes. In the influential book Children's Minds, the child psychologist Margaret Donaldson shows that 'those very features of the written word which encourage awareness of language may also encourage awareness of one's own thinking and be relevant to the development of intellectual self-control, with incalculable consequences for the kinds of thinking which are characteristic of logic, mathematics and the sciences' (20).

The differences in language, tool-use, self-awareness and insight between apes and humans are vast. A human child, even as young as two years of age, is intellectually head and shoulders above any ape.

Denigrating humans

As the American biological anthropologist Kathleen R Gibson states: 'Other animals possess elements that are common to human behaviours, but none reaches the human level of accomplishment in any domain - vocal, gestural, imitative, technical or social. Nor do other species combine social, technical and linguistic behaviours into a rich, interactive and self-propelling cognitive complex.' (21)

In the six million years since the human and ape lines first diverged, the behaviour and lifestyles of apes have hardly changed. Human behaviour, relationships, lifestyles and culture clearly have. We have been able to build upon the achievements of previous generations. In just the past century we have brought, through constant innovation, vast improvements to our lives: better health, longer life expectancy, higher living standards and more sophisticated means of communication and transport.
Six million years of ape evolution may have resulted in the emergence of 39 local behavioural patterns - in tool-use, communication and grooming rituals. However, this has not moved apes beyond their hand-to-mouth existence, nor has it led to any significant changes in the way they live. Our lives have changed much more in the past decade - in terms of the technology we use, how we communicate with each other, and how we form and sustain personal relationships. Considering the vast differences in the way we live, it is very difficult to sustain the argument that apes are 'just like us'.

What appears to be behind today's fashionable view of ape and human equivalence is a denigration of human capacities and human ingenuity. The richness of human experience is trivialised because human experiences are lowered to, and equated with, those of animals. Dr Roger Fouts of the Chimpanzee and Human Communication Institute expresses this anti-human view well: '[Human] intelligence has not only moved us away from our bodies, but from our families, communities, and even Earth itself. This may be a big mistake for the survival of our species in the long run.' (22)

Investigations into apes' behaviour could shed some useful light on how they resemble us - and give us some insight into our evolutionary past, several million years back. Developing a science true to its subject matter could give us real insights into what shapes ape behaviour. Stephen Budiansky's fascinating book If a Lion Could Talk shows how evolutionary ecology (the study of how natural selection has equipped animals to lead the lives they do) is revealing how animals process information in ways that are uniquely their own, much of which we can only marvel at (23). But as Karl Marx pointed out in the nineteenth century: 'What distinguishes the worst architect from the best of bees is this, that the architect raises his structure in imagination before he erects it in reality. At the end of every labour process, we get a result that already existed in the imagination of the labourer at its commencement.' (24)

Much animal behaviour is fascinating. But, as Budiansky shows, it is also the case that animals do remarkably stupid things in situations very similar to those where they previously seemed to show a degree of intelligence. This is partly because they learn many of their clever feats by pure accident. But it is also because animal learning is highly specialised. Their ability to learn is not the result of general cognitive processes but of 'specialised channels attuned to an animal's basic hard-wired behaviours' (23).

It is sloppy simply to apply human characteristics and motives to animals. It blocks our understanding of what is specific about animal behaviour, and degrades what is unique about our humanity. It is ironic that we, who have something that no other organism has - the ability to evaluate who we are, where we come from and where we are going, and, with that, our place in nature - increasingly seem to use this unique ability to downplay the exceptional nature of our own capacities and achievements.
Read on: [2]spiked-issue: Animals

(1) New Humanist, November 2003
(2) Straw Dogs: Thoughts on Humans and Other Animals, by John Gray, Granta, August 2002
(3) [3]'How animals kiss and make up', BBC News, 13 October 2003; [4]'Male birds punish unfaithful females', Animal Sentience, 31 October 2003; [5]'Dogs experience stress at Christmas', Animal Sentience, 10 December 2003; [6]'Capuchin monkeys demand equal rights', Animal Sentience, 20 September 2003; [7]'Scientists prove fish intelligence', Animal Sentience, 31 August 2003; [8]'Birds going through divorce proceedings', Animal Sentience, 18 August 2003; [9]'Bees can think say scientists', Guardian, 19 April 2001; [10]'Chimpanzees are cultured creatures', Guardian, 24 September 2002
(4) See the [11]Great Ape Project website
(5) Intelligence of Apes and Other Rational Beings, by Duane M Rumbaugh and David A Washburn (buy this book from [12]Amazon (UK) or [13]Amazon (USA))
(6) Frans de Waal, Nature, vol 399, 17 June 1999
(7) Nature, vol 399, 17 June 1999
(8) Michael Tomasello, 'Primate Cognition: Introduction to the issue', Cognitive Science, vol 24 (3), 2000, p358
(9) BG Galef, Human Nature, vol 3, p157-178, 1990
(10) See a detailed review by Andrew Whiten, 'Primate Culture and Social Learning', Cognitive Science, vol 24 (3), 2000
(11) Tomasello and Call, Primate Cognition, Oxford University Press, 1997
(12) [14]Peter Singer: Curriculum Vitae
(13) Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans, (eds) Andrew Whiten and Richard Byrne, Oxford, 1990 (buy this book from [15]Amazon (USA) or [16]Amazon (UK))
(14) [17]'How primates learn novel complex skills: The evolutionary origins of generative planning?', by Richard W Byrne
(15) M Hauser, 'A primate dictionary?', Cognitive Science, vol 24 (3), 2000
(16) The Prehistory of the Mind: A Search for the Origins of Art, Religion and Science, Steven Mithen, Phoenix, 1998 (buy this book from [18]Amazon (UK) or [19]Amazon (USA))
(17) The Cradle of Thought: Exploring the Origins of Thinking, Peter Hobson, Macmillan, 22 February 2002, p76 (buy this book from [20]Amazon (UK) or [21]Amazon (USA))
(18) Thought and Language, Lev Vygotsky, MIT, 1986
(19) Ape, Primitive Man and Child, Lev Vygotsky, 1991, p140
(20) Children's Minds, Margaret Donaldson, HarperCollins, 1978, p95
(21) Tools, Language and Cognition in Human Evolution, Kathleen R Gibson, 1993, p7-8
(22) [22]CHCI Frequently Asked Questions: Chimpanzee Facts
(23) If a Lion Could Talk: Animal Intelligence and the Evolution of Consciousness, by Stephen Budiansky (buy this book from [23]Amazon (UK) or [24]Amazon (USA))
(24) Capital, Karl Marx, vol 1, p198

Reprinted from: [25]http://www.spiked-online.com/Articles/0000000CA40E.htm

References
2. http://www.spiked-online.com/Sections/Science/OnAnimals/Index.htm
3. http://news.bbc.co.uk/1/hi/scotland/3183516.stm
4. http://www.animalsentience.com/news/2003-10-31a.htm
5. http://www.animalsentience.com/news/2003-12-10.htm
6. http://www.animalsentience.com/news/2003-09-20.htm
7. http://www.animalsentience.com/news/2003-08-31.htm
8. http://www.animalsentience.com/news/2003-08-18.htm
9. http://www.guardian.co.uk/uk_news/story/0,3604,474807,00.html
10. http://education.guardian.co.uk/higher/artsandhumanities/story/0,12241,798331,00.html
11. http://www.greatapeproject.org/
12. http://www.amazon.co.uk/exec/obidos/ASIN/0300099835/spiked
13. http://www.amazon.com/exec/obidos/tg/detail/-/0300099835/spiked-20
14. http://www.princeton.edu/%7Euchv/faculty/singercv.html
15. http://www.amazon.com/exec/obidos/tg/detail/-/0198521758/spiked-20
16. http://www.amazon.co.uk/exec/obidos/ASIN/0521559499/spiked
17. http://www.saga-jp.org/coe_abst/byrne.htm
18. http://www.amazon.co.uk/exec/obidos/ASIN/0500281009/spiked
19. http://www.amazon.com/exec/obidos/tg/detail/-/0500281009/spiked-20
20. http://www.amazon.co.uk/exec/obidos/ASIN/0333766334/spiked
21. http://www.amazon.com/exec/obidos/tg/detail/-/0195219546/qid=1077209516/spiked-20
22. http://www.cwu.edu/~cwuchci/chimpanzee_info/faq_info.htm
23. http://www.amazon.com/exec/obidos/tg/detail/-/0684837102/spiked
24. http://www.amazon.com/exec/obidos/tg/detail/-/0684837102/spiked-20
25. http://www.spiked-online.com/Articles/0000000CA40E.htm

From checker at panix.com Thu Dec 22 21:07:39 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 22 Dec 2005 16:07:39 -0500 (EST)
Subject: [Paleopsych] eNotes: Mikhail Bulgakov: Master and Margarita
Message-ID:

Mikhail Bulgakov: Master and Margarita
eNotes

[I have just finished this masterpiece of Soviet literature and thank Nancy for giving it to me to celebrate my abandoning reality when I turned sixty. As you will see, the novel is immensely complex and takes place on three levels of reality. I wish I had had this guide book when I started the novel. Here it is, and I recommend using it when you read the novel for yourself. I think, though, that Soviet music, esp. that of Shostakovich, gives me greater insight into human nature than the literature does. I ask Soviet experts for an evaluation of the literary merits of the work.]

Table of Contents
1. Master and Margarita: Introduction
2. Mikhail Bulgakov Biography
3. One-Page Summary
4. Summary and Analysis
5. Quizzes
6. Themes
7. Style
8. Historical Context
9. Critical Overview
10. Character Analysis
11. Essays and Criticism
12. Suggested Essay Topics
13. Sample Essay Outlines
14. Compare and Contrast
15. Topics for Further Study
16. Media Adaptations
17. What Do I Read Next?
18. Bibliography and Further Reading

1. INTRODUCTION

The Master and Margarita by Mikhail Bulgakov is considered one of the most highly regarded novels to come out of Russia during the Soviet era. The book weaves together satire and realism, art and religion, history and contemporary social values. It features three story lines. The main story, taking place in the Russia of the 1930s, concerns a visit by the devil, referred to as Professor Woland, and four of his assistants during Holy Week; they use black magic to play tricks on those who cross their paths. Another story line features the Master, who has been languishing in an insane asylum, and his love, Margarita, who seeks Woland's help in being reunited with the Master.
A third story, which is presented as a novel written by the Master, depicts the crucifixion of Yeshua Ha-Notsri, or Jesus Christ, by Pontius Pilate. Using the fantastic elements of the story, Bulgakov satirizes the greed and corruption of Stalin's Soviet Union, in which people's actions, as well as their perceptions of reality, were controlled. In contrast, he uses a realistic style in telling the story of Yeshua. The holy life led by Christ in this book is more ordinary than the miraculous one told of in the Scriptures. Because the book derides government bureaucracy and corruption, the manuscript of The Master and Margarita was hidden for over twenty years, until the more lenient Khrushchev government allowed its publication.

2. AUTHOR BIOGRAPHY

In his final weeks, as he lay dying of nephrosclerosis, Mikhail Bulgakov continued to dictate changes to The Master and Margarita to his wife. He had been working on the book for twelve years, through eight versions, and he meant it to be his literary legacy. Bulgakov was born in Kiev on May 3, 1891. His father was a professor at the Kiev Theological Seminary, an influence that appears in the novel through mentions of the history and philosophy of religious matters. Bulgakov graduated with distinction from the University of Kiev, and after attaining his medical degree from St. Vladimir's University, he went into the army, which sent him to a small town in the province of Smolensk. It was 1916, and Russia was involved in the First World War. The autobiographical stories in Bulgakov's collection A Country Doctor's Notebooks are based on his experiences in Smolensk.

Bulgakov returned to Kiev in 1918, but was drafted into the White Army to fight in Russia's civil war against the communist Red Army. On a train trip home from the Northern Caucasus, where the army had sent him, he sat up all night writing his first short story, and when the train stopped he took the story to the local newspaper office, which promptly published it. The following year, 1920, Bulgakov gave up medicine and moved to Moscow to write full time. He had several books published and several plays produced. His greatest success was the play Days of the Turbins, his own stage adaptation of his novel The White Guard. The story features a family that suffers at the hands of the Communists during the revolution, a depiction that would earn Bulgakov the suspicion of the Communists, who by then controlled the government. Despite the Communist reaction, Soviet audiences applauded the play.

From 1925 to 1928, the author was affiliated with the Moscow Arts Theater, where he had an uneasy relationship with the theater's founder and director, Konstantin Stanislavsky, who is known today for developing the theatrical technique referred to as "Method acting." In 1929, the Russian Association of Proletarian Writers became the official government agency overseeing the political content of literary works. Bulgakov found himself unable to publish because his ideologies did not conform to those of the Communists. In frustration, he burned many of his manuscripts in 1930. He then wrote an appeal directly to Joseph Stalin, the secretary general of the Communist Party and leader of the country. Stalin had been a fan of Days of the Turbins, and by his order Bulgakov was reinstated at the Art Theater. For the next ten years, Bulgakov wrote, directed, and sometimes acted, and he worked on The Master and Margarita.
Upon his death in 1940, he instructed his wife to hide the manuscript of The Master and Margarita because he was afraid that it would be confiscated and destroyed by government censors. It was not published for another twenty-seven years, when the government of the Soviet Union had become more tolerant of intellectual departures from the party line. Until the publication of The Master and Margarita in an English translation in 1967, few people outside of the Soviet Union had ever heard of Bulgakov. In subsequent years, his other novels, short stories, plays, essays, and his autobiography have been published, as well as numerous publications about his life and works.

3. ONE-PAGE SUMMARY

Part 1

Bulgakov's The Master and Margarita is split into three different, yet intertwined, versions of reality: events in present-day Moscow, including the adventures of satanic visitors; events concerning the crucifixion of Yeshua Ha-Notsri, or Jesus Christ, in first-century Yershalaim; and the love story of the Master and Margarita.

Wednesday

Mikhail Alexandrovich Berlioz, an important literary figure, and Ivan Nikolayevich Ponyryov, a poet who is also known as Bezdomny, which means "homeless," meet at Patriarch's Ponds to discuss a commissioned poem that Berlioz had asked Ivan to pen. Berlioz would like Ivan to rewrite the poem because he believes it makes Jesus too real. He goes on to explain why he believes Jesus never existed, providing Ivan with a brief history of religion. Berlioz is eventually interrupted by a mysterious man named Professor Woland, who assures them that Jesus did indeed exist. When Berlioz objects, Woland begins the story of Pontius Pilate, but not before he tells Berlioz he will be decapitated before the day is out. The story shifts to Yershalaim, where Pilate is hearing Yeshua's case. Yeshua is accused of inciting the people to burn down the temple, as well as advocating the overthrow of Emperor Tiberius. Pilate is forced to try him, and Yeshua is sentenced to death. Back in Moscow, Berlioz is indeed later decapitated by a streetcar. After Berlioz is killed, Ivan confronts and chases Woland and his gang - a choirmaster, Korovyov, and a huge tomcat, Behemoth - through the streets to no avail. When he tries to relate the happenings of the day, he is taken to the asylum.

Part 2

Thursday

Styopa Likhodeyev, Berlioz's flatmate and director of the Variety Theater, awakes with a hangover to find Woland waiting for him. Woland informs Likhodeyev that he has agreed to let Woland give seven performances of black magic at his theater. Likhodeyev does not remember having made this agreement, yet the contracts contain his signature; it seems that Woland is manipulating the situation, but Likhodeyev is bound to the agreement. Once a dazed Likhodeyev realizes that he must allow Woland to perform, Woland introduces the theater director to his entourage - Behemoth, Korovyov, and the single-fanged Azazello - and announces that they need apartment number 50, which has a reputation for being cursed (its tenants usually end up missing after a while). It is revealed that Woland and his group do not think highly of Styopa; they believe people like him in high places are scoundrels. Styopa soon finds himself transported to Yalta. The satanic gang spread mayhem throughout the building, and the building's manager has foreign currency planted on him and is taken away by the police.
The manager of the Variety Theater, Ivan Savelyevich Varenukha, attempts to find Styopa, who has been sending desperate telegrams from Yalta. At the same time, he, with the help of others, is trying to ascertain the identity of the mysterious Woland. To put a stop to Varenukha's inquiries, Woland sends a new infernal creature, Hella, after him, and she turns him into a vampire. At the Variety Theater, Woland and his entourage give a black magic performance, during which the master of ceremonies is decapitated and bewitched money - which later turns into bottle labels, kittens, and sundry objects - is rained down on the crowd. Meanwhile, back at the asylum, Ivan meets his neighbor - the hero of the tale, the Master. He tells him about the previous day's events, and the Master assures him that Woland is Satan. The Master then tells him about his own life. He is an aspiring novelist and married, but he reveals he is in love with his "secret wife," Margarita, who is also married. His novel - the Pontius Pilate story - was their obsession, but critics lambasted it after it was offered for publication. Maddened, the Master had burned the manuscript and ended up at the asylum. Later, Ivan dreams the next part of Pilate's story: condemned men are walking to their executions; Matvei watches them hang and feels responsible. Then he grows angry and curses God. A storm comes, and the prisoners are humanely put to death by a guard who stabs them under the pretext of giving them water.

Part 3

Friday

Woland and his group are still wreaking havoc in Moscow. Margarita is pining for her love, the Master, and she rereads what is left of his novel. She then goes to a park where she sees Berlioz's funeral and meets Azazello, who sets up a meeting between Margarita and Woland. He also gives Margarita some cream, telling her it will make her feel better. After she smears the cream over her body, she becomes a witch. Azazello contacts her and tells her to fly to the river for the meeting with Woland. She flies naked over the city and, on the way, destroys the critic Latunsky's apartment, for he had been the one who ruined the Master. Her maid Natasha, now a witch, and Nikolai Ivanovich, now a pig, join her after using the cream. They meet Woland and his followers, and Satan's ball takes place with Margarita as the hostess. A parade of evil people, both famous and commonplace, attends, and the ball climaxes with the murder of Baron Maigel. Margarita drinks blood, and opens her eyes to find the ball is over.

Woland grants Margarita a wish for serving as the hostess of the ball. She chooses to be reunited with her lover, the Master, who soon appears before her. The Master is confused at first, but he soon realizes that he is reunited with the woman he loves. Woland also has a copy of the Master's entire manuscript even though the Master had burned it, and he returns it to him. He then returns the Master and Margarita and other characters, including Nikolai Ivanovich and Varenukha, to their former lives, as they wish. Natasha, however, chooses to remain a witch. The Pilate story continues, and Pilate meets with the chief of the secret police, Afranius. He predicts that Judas of Kerioth, the man who betrayed Yeshua, will be murdered, and indeed, he is later lured outside of the city and killed. Afranius reports the murder of Judas to Pilate, as well as the burial of the criminals.
In a conversation with Levi (who was found to have taken Yeshua's body after the execution), Pilate reveals that it was he who had Judas of Kerioth killed.

Part 4

Saturday

An investigation into the strange events incited by Woland and his group begins, while Ivan is possessed by visions of Pilate and the bald hill on which Yeshua and the two other criminals had been executed. A shoot-out occurs in apartment number 50 between the investigators and Behemoth, but, surprisingly, no one is hurt. Instead, the building burns. Behemoth and Korovyov go on to perform more pranks that leave many areas of Moscow burning. Levi comes to Woland with a message from Yeshua: he requests that Woland give the Master "peace." Woland agrees, and Azazello gives poisoned wine to the Master and Margarita. Their bodies die, and the couple flies off with the infernal creatures, who, as they fly, return to their true forms. They soon come upon a man and his dog. Woland states that the man is the hero of the Master's novel: Pontius Pilate. He explains that Pilate has been sitting in the same spot for the past two thousand years with his dog, Banga. The Master is allowed to set Pilate free from his immortal insomnia by composing and speaking the final line of his novel; he yells, "Free! Free! He is waiting for you!" Pilate and Banga are finally able to leave their static existence and be with Yeshua. The Master and Margarita are not given enlightenment, but they are allowed to spend the rest of eternity together in a small cottage.

4. SUMMARY AND ANALYSIS

Summary and Analysis: Chapter 1

New Characters

Ivan: A poet writing under the pseudonym "Homeless."

Berlioz: The chairman of the Massolit literary association and the editor of a literary journal.

Professor (also known as "Consultant" and "Woland"): A foreigner recently arrived in Moscow, later to be revealed as the devil.

Annushka: The woman who spills sunflower oil and causes Berlioz's death.

Citizen with Checkered Jacket (also known as "Koroviev," "the choirmaster," and "Fagott"): Initially a phantasm seen by Berlioz, later to be revealed as Woland's accomplice.

Summary

Berlioz and Ivan appear at Moscow's Patriarch's Ponds as the spring sun sets and sit down at a food stand along the oddly desolate walk running parallel to Malaya Bronnaya Street. After they drink apricot soda, Berlioz feels a spasm in his heart, perceives "a blunt needle lodged in it," and is gripped by a worrisome fear. A tall, transparent citizen dressed in a short checkered jacket appears briefly, striking further terror in him. But Berlioz calms down to talk with Ivan about the poem about Jesus that Ivan has written for the next issue of a journal edited by Berlioz. Berlioz points out that Ivan, who has adopted the literary pseudonym "Homeless," has concentrated on portraying Jesus as a bad person, whereas he should have focused on portraying the existence of Jesus as a fabrication, a myth. As Berlioz starts telling Ivan about other mythological gods with the same characteristics as Jesus, a man who is about 40 years old, of foreign appearance and wearing grey shoes and a matching grey suit, appears on the walk. He is carrying a stick with a black knob shaped like a poodle's head, and his teeth are covered by "platinum crowns on the left side and gold on the right." He joins Ivan and Berlioz's conversation, speaking Russian with a clean foreign accent. As he queries them about religion, both men confirm they are atheists.
The foreign man asks more questions, mentioning the inability of man to predict the future and, as examples of this inability, pointing out to Berlioz that sometimes men get cancer or slip under tram-cars. Berlioz becomes suspicious of the foreigner, who predicts that Berlioz will be decapitated. He also predicts that Berlioz will not make the evening meeting at Massolit, a Soviet literary association, because Annushka has already bought and spilled the sunflower oil. Berlioz, who is the chairman of Massolit, asks the man, who has a card identifying himself as a professor, who he is. The professor says he is a German and is in Moscow to serve as a consultant. He claims to be using his skills as a polyglot who specializes in black magic to sort through some manuscripts of Gerbert of Aurillac, a tenth-century necromancer. This professor insists that Jesus did exist, and he begins telling a story about Jesus and Pontius Pilate.

Analysis

The epigraph from Goethe's play Faust that opens Master and Margarita suggests that Goethe provided at least some of Bulgakov's inspiration, and indeed, some parallels with that play will appear in this novel. Bulgakov's novel was written in the 1930s, a time of great repression and hardship in the Soviet Union under Stalin. The repressive and controlling atmosphere of that Communist state, with its extensive use of secret police, informers, spies, public denunciations, and threats to subdue citizens and coerce them into obeying the government, is evident throughout much of the novel.

The novel itself, opening with the two odd and somewhat surreal elements of the vacant walkway and the stand selling only apricot soda, begins on a skewed note. Berlioz's pang of fear and vision of the citizen in the checkered jacket continue the odd feeling, but also add a sense of foreboding: it seems something is dangerously awry in Moscow. When the foreigner arrives and begins talking with Berlioz and Ivan about the existence or nonexistence of Jesus and God, he is essentially questioning the official atheism that makes up one of the basic beliefs of the Soviet state. When he points out that men often don't know what they'll be doing in a few hours, he is also questioning the central planning that organized and controlled much of Soviet life. These two factors help explain Ivan and Berlioz's bewildered response to the professor. As the professor begins telling his story, he has already thrown the two men off guard and has effectively begun to undermine two of the principal tenets of the Soviet Union.

Summary and Analysis: Chapter 2

New Characters

Pontius Pilate: The fifth Roman procurator of Judea.

Mark Ratslayer: A Roman centurion.

Yeshua: A philosopher who has been arrested by the Romans for potentially causing unrest.

Matthew Levi: One of Yeshua's followers.

Joseph Kaifa: The high priest of the Jews and president of the Sanhedrin.

Hooded Man (also known as "Aphranius"): A man who meets Pilate; he is later revealed as the head of the Roman secret police in Judea.

Summary

The professor continues his story, which begins with Pontius Pilate, the procurator of Judea, sitting in the "colonnade between the two wings of the palace of Herod the Great" early in the morning on the day before Passover. The weary Pilate is hounded by the smell of rose oil and must confirm the death sentence faced by the accused man, Yeshua. Two legionnaires bring in Yeshua, who is about 27 years old and dressed in an old chiton.
Pilate begins interrogating Yeshua but, angered at being called "good man" by him, orders the centurion Mark Ratslayer to teach Yeshua a lesson. Mark Ratslayer whips Yeshua and tells him to call Pilate solely by the name "Hegemon." Yeshua returns to Pilate and tells him his name and that he comes from Gamala, in the north of Judea. Yeshua lacks a permanent home, does not know who his parents are, and has no family. Despite Pilate's accusations, Yeshua denies calling for the temple building to be destroyed. He also says Matthew Levi, a former tax collector, ascribes false statements to Yeshua in the writings on his goatskin parchment. Nonetheless, Matthew Levi is Yeshua's faithful companion. The sick Pontius Pilate briefly longs for poison, then asks Yeshua, "What is truth?" Yeshua responds by saying the truth is that Pilate is sick and thinks of death, but he adds, "[Y]our suffering will soon be over." Yeshua tells Pilate to get out of his palace and go for a walk with him. Pilate orders Yeshua's hands to be unbound, and as their conversation proceeds, Yeshua denies he is a physician and proclaims, "[T]here are no evil people in the world." Pilate concludes that Yeshua is mentally ill and decides that, instead of being executed, he will be confined at Pilate's residence in Stratonian Caesarea on the Mediterranean Sea. But after Pilate reads a document on Yeshua, his skin turns brown, his eyes sink, and he has a vision of Yeshua's head being replaced by the head of the former Roman emperor Tiberius. Pilate quickly thinks "I'm lost!" and "We're lost!" He recovers and, looking sharply at Yeshua, asks him if he has said anything against the great Caesar. In response, Yeshua says that he told Judas of Kiriath, just before being arrested, that "all authority is violence over people," and that when the kingdom of truth and justice comes, authority will disappear. An outraged Pilate confirms Yeshua's death sentence. He orders two centuries of Roman soldiers to transport Yeshua, as well as two other condemned men, to Bald Mountain to be executed. They leave, and Pilate meets with the high priest of the Jews, Joseph Kaifa, and informs Kaifa of Yeshua's confirmed death sentence. However, he gives Kaifa the option of releasing Yeshua or another prisoner, Bar-Rabban, in honor of the Passover feast. Kaifa says Bar-Rabban will be released. Kaifa and Pilate then dispute Kaifa's decision and debate the general relations between Romans and Jews. They manage to reconcile, though, and Pilate goes up to a platform to tell a crowd of Jews that Bar-Rabban is being released. The three condemned men are taken toward Bald Mountain and, at around 10 A.M., Pilate heads toward the gates that lead into the palace garden.

Analysis
The story of Yeshua and Pilate in Yershalaim, although clearly derived from the Gospel accounts of Jesus's life, departs from the Gospels in many ways, especially in its long focus on Pilate. His weariness, agony, and authority contrast with Yeshua's humbleness, plain speech, and gentleness. Pilate's conversation with Yeshua is punctuated by Pilate's vision of Tiberius' sickly head and a sense of being lost and living an agonizing immortality. Pilate's change after this vision is expressed in his vow to punish Yeshua with death and his denial that "the kingdom of truth will come." Pilate has chosen power over truth, and after dismissing Yeshua he enters into negotiations of power with Kaifa.
Pilate proclaims to the public, in a theatrical show of power disguised as mercy, that Bar-Rabban, not Yeshua, will be spared execution. Nonetheless, Pilate is afraid up until the time the condemned men are removed from his sight, and when his sense of anguish recurs, he is overwhelmed by a sense of impotence. Pilate's royal powers have done nothing to allay his weariness and fear of death. He is trapped, and rather than prolong his talk with Yeshua, who might have led him out of his trap, he has condemned Yeshua. Yeshua had pointed out that his own hair did not come from Pilate, and Pilate responded by threatening that he "can cut that hair"; in the same spirit, Pilate rejects the arrival of the kingdom of truth and calls out "Criminal!" to his staff in a weak voice "cracked with commanding."

Summary and Analysis: Chapters 3-4

New Characters
Black Cat (also known as "Behemoth"): A cat who boards the tram-car; he is later revealed as a member of Woland's retinue.

Summary
The professor has concluded his narrative, and evening has come to the Patriarch's Ponds. He declares that he was present for the entire story he has related to Berlioz and Ivan. Berlioz patronizes the professor, whom he takes for a mad German, but the professor predicts that he will be living in Berlioz's apartment shortly. The three men talk about the existence of the devil before Berlioz heads to the tram-car station at Bronnaya to report the professor to the foreigners' bureau. Before reaching the turnstile at the station, he sees the same citizen he had seen before, only now the citizen wears checkered trousers. A tram-car comes along, Berlioz loses his footing on the cobbles by the turnstile, and his head is severed by the tram-car when he falls on the rails. Ivan rushes to the turnstile in response to the accident and hears a woman say that Annushka broke a liter-bottle of sunflower oil on the turnstile. Remembering the professor's prophecy, he rushes back to find him and sees the professor, along with the citizen Berlioz had seen, who is now called "the choirmaster" and wears a pince-nez with a cracked lens. The professor tells an angry Ivan he doesn't speak Russian, and the choirmaster blocks Ivan, then vanishes. Ivan looks into the distance to see the professor, the choirmaster, and a huge whiskered black cat by the exit to Patriarch's Lane. Ivan chases after the trio, but they split up and slip away. Once Ivan realizes he won't be able to catch any of them, he also notices how little time the chase has taken. Concluding that he can find the professor in house 13, apartment 47, which is located on a lane near Arbat Square, he goes there and is let into the apartment by a silent little girl. Ivan goes into the bathroom to catch the professor but instead encounters a naked woman in the bathtub. He retreats to the kitchen, where he sees a dozen small primus stoves, two candles, and two icons. Ivan takes a candle and the icon that is made of paper and goes out into the lane. He concludes the professor is at the Moscow River and, upon reaching the river, begins swimming in it. Not finding the professor there either, he comes out to discover his clothes have been stolen by the man he asked to guard them. His Massolit identification card is also gone, and Ivan dresses in a torn Tolstoy blouse and a pair of striped drawers left by the thief. Ivan decides to run to Griboedov's, where he will surely find the professor.

Analysis
The professor's story entrances Ivan and Berlioz despite their insistence that Jesus never existed.
Although Berlioz continues to patronize the professor, his bewilderment just before the tram-car severs his head shows again that there is something about the professor that cannot be dismissed. The fulfillment of his prophecy confirms that the novel's characters should not take this man lightly. The checkered man, now called a choirmaster, is also in on the nefarious deed. The cat astonishes Ivan, but the fantastic speed of his chase and his unexplained certainty that he will find the professor in apartment 47 or at Griboedov's are just as astonishing. The novel has firmly established its bizarre, surreal atmosphere. Meanwhile, the earlier talk of religion, together with the prophecy's fulfillment, the Pilate story, and Ivan's inexplicable decision to take the paper icon, hints at a deeper message and an allegorical meaning to this novel.

Summary and Analysis: Chapters 5-6

New Characters
Riukhin: A poet who helps bring Ivan into the psychiatric clinic.
Zheldybin: Berlioz's assistant, who receives and spreads the news of Berlioz's death.
Doctor (also known as Dr. Stravinsky): The head doctor of the Moscow psychiatric clinic.

Summary
Griboedov's, a restaurant on the ground floor of The House of Griboedov, is known as the best restaurant in Moscow. The house serves as a club for Massolit, a Moscow literary organization. The club itself is very cozy and plush, but the restaurant, with its reasonable prices and superb menu, is the club's greatest feature. On this night, 12 writers have gathered for the meeting Berlioz would be attending if he were not dead. It is nearing 11:00 P.M., and the writers, who had expected the meeting to start at 10 o'clock, grumble impatiently before going down to the restaurant at midnight. Meanwhile, Berlioz's assistant, Zheldybin, is given the news of Berlioz's death and goes to view the head and body laid out on two separate tables. At midnight, "a handsome dark-eyed man with a dagger-like beard" who looks like a Caribbean pirate enters the restaurant. The news of Berlioz's death spreads through Griboedov's at the same time, and shortly thereafter Ivan runs onto the restaurant's veranda. Ivan, carrying the candle and with the icon pinned to the breast of his blouse, starts looking for the professor and reveals that the professor has killed Berlioz. However, no one believes this story, and the diners, concluding he is insane, seize Ivan. The pirate dismisses the restaurant's doorman for letting Ivan in, and Ivan is carried to a truck, which will take him to a psychiatric clinic located on the banks of a river outside of Moscow. At 1:30 A.M., a doctor arrives in the examining room to meet Ivan. The poet Riukhin, who had helped carry Ivan into the truck, tells the doctor what Ivan has done. Ivan, though, complains that he's been mistreated and is perfectly sane, adding a denunciation of Riukhin as "a typical little kulak." The doctor and Riukhin listen to Ivan narrate the encounter with the professor before Ivan is manhandled by some orderlies. The doctor, suspecting Ivan has schizophrenia, orders him to be put in room 117 and assigned a nurse. The truck takes Riukhin back to Moscow, and the disconsolate poet returns to Griboedov's to drink vodka by himself.

Analysis
The pleasures provided at Griboedov's, the home of Massolit, are a sample of the privileges available to artists who comply with the Soviet authorities.
However, the disgruntlement among those waiting for Berlioz indicates that not all Massolit members are equal, and that the Soviet ideal of a classless society has not been realized. The refusal to take Ivan's story seriously is not surprising, but it recalls the earlier statement that Berlioz was not used to seeing extraordinary phenomena. It seems either Communism or literary success has dulled the Massolit members' sense of the unusual and inexplicable. So the only apparent option is to remove Ivan to the psychiatric clinic on the outskirts of Moscow, where he can be treated in isolation. Ivan's statement that the icon scared the professor and his comrades hints that the professor may be demonic, but even Ivan isn't able to consider that possibility. Riukhin, who briefly wonders whether Ivan is really quite sane and questions the value of his own poems, weakens and rapidly ages, but he dismisses his trembling and his concern for Ivan and instead drinks vodka and forgets his problems. This is in contrast to Ivan, who is shunned by Massolit and must still face his problems alone in the clinic.

Summary and Analysis: Chapters 7-8

New Characters
Styopa Likhodeev: The director of the Variety Theatre; he is also Berlioz's co-tenant.
Rimsky: The financial director of the Variety Theatre.
Azazello: The third member of the Professor's retinue.

Summary
The chapter opens by introducing Styopa Likhodeev, Berlioz's co-tenant in apartment 50 at 302-bis on Sadovaya Street. Styopa is beset by a raging headache, apparently the result of his drinking the prior night. An aside on the history of the apartment reveals that people began disappearing from it two years earlier. Anna Fougeray, a jeweler's widow, had let out three of the apartment's rooms, but all three lodgers vanished, and in response Anna left the apartment permanently. When Berlioz, Styopa, and their respective wives moved in, both wives vanished within a month. Styopa wakes up at 11 A.M. to see "an unknown man, dressed in black and wearing a black beret," sitting in his room. This stranger explains that he had arranged to meet Styopa in the apartment at 10 and has been waiting since then for him to wake up. A bewildered Styopa eats caviar and white bread from a tray and sips the vodka served by the stranger, but fails to recall any arranged meeting with him. The stranger identifies himself as Woland, a professor of black magic, and explains that yesterday he arrived in Moscow, met Styopa, who is the director of the Variety Theatre, and signed a contract to put on seven magic performances at the theatre for 35,000 rubles. Styopa is shown the contract but still cannot remember meeting Woland, the name by which the professor will be known for the rest of the novel. He calls up the theatre's financial director, Rimsky, to confirm the contract. Having done this, Styopa hangs up the phone and sees the same man Berlioz had twice encountered before dying, now wearing his pince-nez, as well as the black cat Ivan has already seen. These two, along with Woland, intend to replace Styopa in the apartment. A fourth figure, Azazello, enters wearing a bowler hat and displaying his fangs and flaming red hair. The cat and Azazello tell Styopa to leave; Styopa gets dizzy and opens his eyes to find himself on a jetty. He asks a man where he is and is told he is in the city of Yalta, which is located in southern Russia. Styopa loses consciousness. At the same time, 11:30 A.M., Ivan wakes up. He calls for an attendant, who gives him a bath, and then he puts on his pajamas.
He is taken to an examination room and examined by three people, then returns to his room to eat breakfast. The doctor, whose name is Stravinsky, enters, and his entrance reminds Ivan of Pontius Pilate. Dr. Stravinsky hears Ivan's story about Berlioz's death and the foreigner who saw Pilate and foretold Berlioz's death. However, after reviewing Ivan's actions of the previous night, he advises Ivan against reporting the foreigner to the police, dismissing the idea as a futile one that will bring Ivan right back to the clinic. Ivan is instead left alone in his room after being given a pencil and paper to write down his story.

Analysis
The strange disappearances from apartment 50 were apparently the work of the secret police, who usually arrived in the night, always arrested people under great secrecy, and did not inform any neighbors of their arrests. In such an atmosphere of secret and unpleasant visits, the professor's presence in apartment 50 perhaps should not surprise Styopa as much as it does. Styopa is, however, quick to realize that the wax seal on Berlioz's study door means Berlioz has been arrested. Woland's ability to manipulate the official machinery of Moscow to arrange his magic show without the knowledge of the Variety executives again displays his unusual powers. And Styopa, like Ivan, finds himself transported at stunning speed. Although thus far Woland has not killed anyone, his powers are clearly immense, and one wonders why he is putting on his magic show and what will happen there. The description of Ivan's dawning transformation into a more hesitant, cautious, and deliberate man seems to show how the experiences that put him in the clinic have served to subdue him. So he meekly accepts Dr. Stravinsky's advice not to go to the police and to start forgetting about Pilate. His spirit is weakening under the influence of authority.

Summary and Analysis: Chapters 9-10

New Characters
Bosoy: Chairman of the tenants' association at 302-bis.
Varenukha: Administrator of the Variety Theatre.

Summary
Nikanor Ivanovich Bosoy, chairman of the tenants' association for Berlioz's former residence, is besieged by requests from people seeking to occupy Berlioz's old apartment. At noon, when he goes up to apartment 50, he sees the choirmaster sitting at Berlioz's desk, dressed in his checkered jacket and wearing the pince-nez. A suspicious Bosoy questions the choirmaster, who says his name is Koroviev. Koroviev says he is the interpreter for Woland, and explains that Woland has been invited by Styopa to live in the apartment for a week while Styopa travels to Yalta. A surprised Bosoy finds a letter from Styopa in his briefcase explaining the arrangement. Koroviev answers Bosoy's request to see Woland by saying Woland is too busy training the cat to see Bosoy. Koroviev adds that Woland's stay will be profitable for the association, and agrees to pay it 5,000 rubles in cash for the weeklong occupancy of the apartment. Koroviev also slips Bosoy a wad of cash and a pass for the magic show. Bosoy, although pleased with this bribe, also feels anxious about the entire situation. Koroviev promptly calls the authorities to turn in Bosoy for "speculating in foreign currency," testifying that Bosoy has 400 American dollars hidden in the vent in the privy of his apartment. Bosoy returns to his apartment, wraps his wad of 400 rubles in newspaper, puts it in the ventilation duct of the privy, and goes into the dining room.
The doorbell promptly rings, and two citizens step in, find the wad, which now contains dollars rather than rubles, and escort Bosoy out of the house. As Chapter 10 opens, it is 2 P.M., and Rimsky and Varenukha, the administrator of the Variety Theatre, are meeting in Rimsky's office trying to sort out the meaning of Woland's magic show. They have also been waiting since 11:30 for Styopa, who had called them at about 11, to arrive, but Styopa has since disappeared from his apartment. A woman comes in to deliver a telegram announcing that a mental case identifying himself as Styopa has been found in Yalta. A disbelieving Varenukha starts calling people to try to find Styopa, but a new telegram mentioning Woland confirms that the man in Yalta is indeed Styopa. The still disbelieving Rimsky and Varenukha wonder how this man knows about Woland, and another telegram arrives, confirming Styopa's identity through Styopa's own handwriting. Rimsky tells Varenukha to take the stack of telegrams to the secret police for them to sort out. Varenukha calls Styopa's apartment to check if Styopa is home and speaks with Koroviev, who identifies himself as Woland's assistant. Varenukha decides that Styopa must be at a new tavern in Pushkino called "Yalta," whereupon another telegram from Styopa asks them to send 500 rubles. Rimsky gives Varenukha the money to send to Styopa, and Varenukha goes to his office. He answers the phone, and the caller warns Varenukha against taking the telegrams anywhere. Varenukha thinks someone is trying to play tricks. As he walks into the garden, he feels the urge to go to the summer toilet to see if the wire over its light bulb has been installed. In the toilet he encounters a fat, cat-like man, who fiercely punches Varenukha's ear; then Azazello gives him a blow on the other ear. The cat-like man points out that Varenukha was warned against taking the telegrams anywhere, and the two men carry Varenukha into apartment 50, then vanish and are replaced by a naked, red-haired girl. The girl kisses Varenukha.

Analysis
The flood of people seeking to possess Berlioz's living space is the result of Soviet control over an insufficient supply of housing. Bosoy, as chair of the tenants' association, has immense power to grant or deny housing requests, and this power often brings him bribes. So, although Bosoy is somewhat uneasy about Koroviev, he happily accepts the payment and the bribe. But Woland and his retinue, with their unclean powers, are able to punish Bosoy's deceit by planting the $400 in his bathroom vent. When he is discovered, Bosoy's first instinct is to condemn his accuser, somewhat like Ivan condemning Riukhin, but the condemnations do neither man any good. The story of Varenukha and Rimsky scrambling to find Styopa displays the helplessness of Soviet authorities when faced with the unexpected element of Woland's unclean powers. Official channels of communication, such as the telephone and telegram, are no help in solving the problem of Styopa's disappearance. In fact, they only serve to make things worse, as in Koroviev's call and the call warning Varenukha not to take the telegrams anywhere. Rimsky, like earlier characters, grows visibly aged very quickly, in another example of distortions of time and space. Similar distortions are present in the cat's transformation into a cat-like fat man, the two vanishing attackers, and the apparition of the devilish woman. The consequence of her kiss is not yet known, but it will likely create more problems.
Summary and Analysis: Chapters 11-12

New Characters
Georges Bengalsky: The master of ceremonies at the Variety.
Sempleyarov: Chairman of the Moscow theatres' Acoustics Commission.
Praskovya Fyodorovna: A nurse.

Summary
Ivan, who is still in his room at the clinic, has failed in his attempt to write a statement about the professor. Nurse Praskovya Fyodorovna sees him crying and goes to the doctor, who gives him an injection while assuring Ivan he will cry no more. Ivan quickly begins to feel better, and the moon starts to rise as evening settles on Moscow. Ivan, thinking to himself, at first dismisses the death of Berlioz as absurd, but then "the former Ivan" points out to "the new Ivan" that the professor predicted Berlioz's demise. The new Ivan ponders the strange professor and wonders who will replace Berlioz as editor of the journal. A voice that resembles the consultant's calls Ivan a fool, which makes Ivan happy, and he sees a man on the balcony, who tells him to keep quiet. The scene shifts to the Variety's stage for Chapter 12, as the Guilli family's cycling acrobatics form the opening act of Woland's magic show. Meanwhile, Rimsky tries to call Varenukha at 10 P.M., only to learn that all the building's phones are out of order. He goes down to the theatre's dressing room to meet Woland and is surprised to see Koroviev, along with the black cat, accompanying Woland. The cat's trick of drinking water from a glass astounds everyone in the dressing room. On stage, Georges Bengalsky, the master of ceremonies, starts to introduce Woland, but the introduction falls flat, and Woland, Koroviev, and the cat take the stage. Woland begins by calling Koroviev "Fagott," the name by which Koroviev will be known throughout the magic show. He chats with Koroviev about Moscow. After an interruption from Bengalsky, the show begins with Koroviev and the cat flipping a deck of cards back and forth, and Koroviev swallowing the cards as they are returned to him by the cat. The deck is then found on a citizen named Parchevsky, after which a heckler claims the deck was planted on Parchevsky. Koroviev tells the heckler that he now has the deck, and the heckler finds ten-ruble bills in his pocket instead. When a fat man in the stalls asks "to play with the same kind of deck," Koroviev shoots his pistol up at the ceiling, and money begins raining down. The audience starts grabbing the bills, but Koroviev stops the rain of money by blowing into the air. Bengalsky steps in to declare that the rain of cash was merely a trick of mass hypnosis and asks Woland to make the notes disappear, but Koroviev and the audience do not like this idea. Someone in the gallery calls for tearing Bengalsky's head off. Koroviev says he likes this idea, and the cat jumps upon Bengalsky and tears his head off with two twists of his paws. The outraged audience asks for the head to be put back on Bengalsky, and the cat puts it back. A crowd rushes to help Bengalsky after he starts moaning, and he is taken away by ambulance. Meanwhile, Woland disappears, and as he does, Koroviev displays ladies' dresses, hats, shoes, and accessories from Paris. After he offers the women in the audience the chance to exchange their own dresses and shoes for the Parisian ones, one brunette takes up the offer. Once she receives a pair of shoes and a dress, women rush the stage to claim new dresses and shoes of their own.
When Sempleyarov, the chairman of the Acoustics Commission of the Moscow theatres, calls for the trickery to end, Koroviev exposes Sempleyarov's affair with "an actress from a traveling theatre." Sempleyarov's wife defends him and, amidst the continuing chaos, Koroviev and the cat, now called Behemoth, vanish from the stage.

Analysis
The chapter title makes Ivan's schizophrenia explicit. He, newly submissive, can no longer make sense of the Pilate story or Berlioz's death and can only be healed by numbing medicine. Although the former Ivan knows that the Berlioz episode is quite disturbing, the new Ivan, unwilling to confront such difficulty, dismisses Berlioz's death as a passing matter. The appearance of the man on the balcony, though, suggests that this visitor may disturb Ivan's newly forgetful nature. Woland's magic show employs the basic strategy of manipulating, exposing, and distorting expectations and wishes. The initial conversation between him and Koroviev, rather than playing to the crowd, centers on the character of the Muscovites. When the fat man calls for money, he and everyone else receive it. Bengalsky's attempt to suavely introduce the show and smooth over the surprises is shouted down by the money-hungry audience, and when an audience member calls for his head, the request is granted. The audience takes the briefest notice of this ghastliness before the women are given the chance to pursue their desire for adornment and luxury. When Sempleyarov tries to stop the ensuing chaos, his secret acts are exposed by Koroviev. The frenzied audience, carried away by all the tumult, fails to notice that Fagott and Behemoth have disappeared. The culprits have escaped, and the audience is left to deal with the repercussions of the night's exposures.

Summary and Analysis: Chapter 13

New Characters
Master: Currently in the psychiatric clinic with Ivan, he has written a novel about Pilate and Yeshua.
Master's lover (also known as Margarita): Meets the master secretly in his basement apartment.

Summary
Ivan's visitor is a dark-haired man, approximately thirty-eight years of age. He explains that he has gained access to the clinic's common balcony by stealing some keys and could escape, but stays at the clinic because he has nowhere to go. Ivan confesses to this visitor that his poetry is bad and promises not to write any more poems. The visitor tells Ivan that Bosoy has arrived in room 119, cursing Pushkin and insisting that "unclean powers" live in apartment 50. Ivan tells the visitor he is in the clinic because of the story about Pilate and Berlioz's death, and the visitor tells Ivan that the professor at Patriarch's Ponds was actually Satan. Ivan, as his former self, tells the visitor they should try to catch Woland, and the visitor informs Ivan he has written a novel about Pilate, which is why he is in the clinic. Identifying himself as "a Master," he tells Ivan he had won 100,000 rubles in a lottery and used the money to rent a basement apartment and write his novel. The master continues telling a story about his past: one day, he met a woman carrying repulsive yellow flowers in her hand on Tverskaya Boulevard and fell in love with her. However, both the master and she were married, so they met secretly every afternoon in his apartment. She urged the master to keep working on his novel, but it was rejected by publishers, and two critics wrote articles attacking the manuscript.
The article by the critic Latunsky was the most savage attack of all, and the master became mentally ill from his struggles. One day in mid-October his lover urged him to travel to the Black Sea. He gave her 10,000 rubles to keep until he departed, and she promised to return to him the next day. That night, he set out to burn his notebooks and manuscript but was interrupted by a visit from his lover. She rescued one chapter of the novel from the fire, vowed to tell her husband about the affair and stay with the master permanently, and promised him she would return in the morning. After the master retreats to the balcony and tells Ivan that room 120 is now occupied by Georges Bengalsky, he continues the story, which has shifted to mid-January. In the intervening three months, the master was held by the police. On a cold night after his release, the master set out on foot for the psychiatric clinic and was picked up by a truck driver, who took him to it. Having finished his story, the master leaves Ivan's room, saying he cannot tell any more of the story of Yeshua and Pilate, which, in any case, would be better told by Woland.

Analysis
The master's appearance provokes Ivan, like Riukhin, to dismiss his poems as worthless, but Ivan, unlike Riukhin, resolves to abandon further poetic effort. Ivan's honesty wins him the master's confidence and advice. As the master points out, Ivan's inability to identify Satan shows how odd and illusory life in Moscow is. The populace, which has throughout the novel made the devil part of everyday conversation, is unable to identify Satan when he actually appears. In contrast to Ivan's meekness and willingness to obey others, the master, as he tells Ivan, firmly set out to write his novel about Pilate by himself and quickly realized he loved the woman with the yellow flowers. Their devotion to each other and to the Pilate novel sets them apart from ordinary Muscovites, but the master is punished by the Communist literary establishment for writing his novel. In burning his manuscript, the master submitted to this official judgment, but his lover proved more courageous in her support for him. The master, though, is at least aware of his fear and of the possibility that things may still change. It seems his appearance has somehow changed Ivan, though it remains to be seen exactly how.

Summary and Analysis: Chapters 14-15

New Characters
Sergei Gerardovich Dunchil: A roughly 50-year-old man accused of hiding currency and a diamond necklace.

Summary
Rimsky, sitting in his office at the Variety Theatre, hears a policeman's whistle as he stares at a stack of cash from the magic show on his desk. He looks out on the street to see two disheveled, nearly naked women leaving the theatre. The clock strikes midnight, and Varenukha enters Rimsky's office. An anxious, fearful Rimsky asks Varenukha about Styopa, and gets the answer that Styopa was found in the tavern in Pushkino. Rimsky is happy with this news, but as Varenukha tells the story of Styopa's outrageous drunkenness at the "Yalta" tavern, Rimsky realizes Varenukha's entire story is a lie. Rimsky, sensing some kind of danger, examines Varenukha and finds he has a large bruise on the right side of his nose, a pale, chalky pallor, and cowardly eyes. Rimsky rings a bell for help, only to find that the bell is broken, while Varenukha notices the attempt. When Varenukha lies about the cause of his bruise and Rimsky sees that he casts no shadow, Varenukha realizes he has been found out.
He locks the door, and Rimsky goes to the window to see a naked woman pressing her face against it, trying to get in. Just as it seems Varenukha and the naked woman, who is dead, are about to kill Rimsky, a cock crows three times at the dawning of the new day. The woman flies away and Varenukha floats out the window. A suddenly aged Rimsky runs downstairs and flees on the express train to Leningrad. Before arriving in the clinic's room 119, Bosoy had been taken to the secret police for questioning about the illegal currency. The authorities, concluding he was insane because he claimed Koroviev was the devil, put Bosoy in the clinic. He arrived in the evening and was given an injection to quiet him down. Now asleep, Bosoy begins to dream about currency. Bosoy finds himself in a small, elegant theatre that lacks seats, so the bearded male audience sits on the floor. A bell rings and a young, handsome artist emerges and calls Bosoy up onto the stage. When asked by the artist to hand over his currency, Bosoy answers by claiming Koroviev "stuck" him with the $400. Bosoy goes back to sit on the floor, and after the theatre fills with darkness, fiery red words emerge on the walls telling the audience, "Turn over your currency!" Sergei Gerardovich Dunchil comes on stage, as does his wife, and after his initial denials of hiding currency or diamonds, his mistress emerges bearing a tray with his $18,000 and a diamond necklace. The curtain drops, the artist emerges again, and he brings out an actor to perform excerpts from Pushkin's The Covetous Knight, a play about man's terrible fascination with gold and cash. A citizen named Kanavkin goes on stage to turn over his $1,000 and twenty ten-ruble gold pieces and, after being examined by the artist, reveals that his aunt is hiding some more money for him. The theatre's lights turn on, and cooks swarm through the audience to ladle out bowls of soup and pass out rye bread while encouraging the men to hand over their currency. Bosoy is awakened by nurse Praskovya Fyodorovna, and his cries wake up Ivan, the master, and Georges Bengalsky. Ivan falls back to sleep and begins a dream of his own.

Analysis
The magic show's consequences are revealed in the disheveled women wandering outside the theatre. The aging Rimsky feels himself getting frightened, and Varenukha's appearance makes things worse. The kiss has turned Varenukha into some sort of demon, and he and the dead woman converge on Rimsky. Tellingly, it is the third crowing of the cock, which recalls Jesus's prediction that Peter would deny him three times before the cock crowed, that causes the woman and Varenukha to flee. The interrogation of Bosoy, presumably done by the secret police, sets the stage for Bosoy's dream. He has recognized the devilishness of Koroviev and in response has turned to religious symbols, but, like Ivan, he is not taken seriously by the clinic staff. The dream clearly reflects Bosoy's recent misfortune, but in its focus on the exposure of currency hoarders it recalls the magic show. Here too, private cravings are exposed publicly on an odd stage. The performance of excerpts from the Pushkin play shows how classic Russian literature is put to state purposes by the Communists, who deploy it to pressure the audience into handing over their currency. Ivan's response to the commotion in room 119 is not to think of currency but to dream of the execution at Bald Mountain; perhaps this dream is inspired by the earlier visit from the master.
Summary and Analysis: Chapters 16-17

New Characters
Vassily Stepanovich: The Variety Theatre's bookkeeper.
Prokhor Petrovich: Chairman of the commission on light entertainments.
Anna Richardovna: The secretary to Prokhor Petrovich.

Summary
Ivan's dream begins with the Roman soldiers taking the three condemned men up to Bald Mountain to be executed. They are followed by about 2,000 citizens, who spread out around the hill. As the evening heat beats down, one person is noticed hiding on the north side of Bald Mountain under a fig tree, where he cannot see the execution. He had been tardy in following the procession of soldiers and, after failing to make it to the execution site itself, went off to the north side of Bald Mountain because he would be alone there, apart from the soldiers and the crowd. The man, whose name is Matthew Levi, had thought of taking a knife and killing his companion, Yeshua, and then himself, as the three condemned men were marched to the execution. For that purpose he ran back to Yershalaim and stole a bread knife, but on his return he realized the procession was too far ahead of him for the plan to be carried out. After Matthew Levi curses God for failing to kill Yeshua quickly, a massive storm cloud swallows the setting sun and rolls westward. The scene shifts to the three condemned men hanging on their posts. Yeshua is faring better than the other two, and he is given a soaked sponge to drink water from. The executioner stabs Yeshua and Dysmas to death, and Gestas, the third man, dies after being given the sponge. Just after the three men are proclaimed dead, lightning and thunder emerge from the cloud, along with a deluge of rain. The soldiers leave, and Matthew Levi goes up to the posts to cut down the three bodies, then carries the body of Yeshua down from the hilltop. Ivan's dream has ended. The scene shifts to the Variety Theatre, where a long line of citizens has gathered seeking tickets to Woland's magic show, and the theatre's phones are incessantly ringing. Rimsky's wife comes into the theatre at 10 A.M. looking for Rimsky or information about his whereabouts, and the police arrive at 10:30. Rimsky's wife is sent home, and the investigators arrive with a dog, which is taken away after it fails to pick up a scent. The investigators ask the bookkeeper, Vassily Stepanovich, why the posters for the show have vanished along with the contract, but he knows nothing about either issue. They visit apartment 50, but Woland is not there. With the Variety's directors gone and no sign of Woland, the magic show is cancelled, and Vassily is told to give a report on yesterday's show and turn in the receipts from it. But when a departing Vassily pulls out a ten-ruble bill to pay his cab driver, he realizes the bill is fake, and the driver says two other fares have paid him with fake ten-ruble bills received from last night's magic show. Vassily pays him with other bills and goes to deliver his report to the commission on light entertainments. There, he encounters tumult: the chairman's secretary, Anna Richardovna, pulls Vassily into the chairman's office, where he finds nothing but an empty man's suit. The secretary blames the chairman's transformation on the devil and tells Vassily that the black cat went into the office and replaced the chairman, Prokhor Petrovich, with the empty suit. Vassily walks over to the commission's affiliate, where he finds the employees involuntarily singing a song.
Vassily learns that Koroviev had come into their offices to teach them the "Glorious Sea" song and has hypnotized them into singing it. A perplexed Vassily goes on to the financial sector to deposit the box-office money, but when he opens his briefcase he finds stacks of various foreign currencies instead of the rubles. Vassily is arrested.

Analysis
Matthew Levi's presence at some distance from the execution site somewhat undercuts his anger at God for failing to give Yeshua a quick death; Matthew Levi has already failed to end Yeshua's suffering, so it hardly seems just for him to criticize God for not bringing about Yeshua's death. However, by framing the execution through Matthew Levi, this chapter emphasizes the simple humanity of those being executed rather than the Roman power that has authorized their deaths. So the three bodies are left for Matthew Levi to release from their posts, and it is he who takes possession of Yeshua's body. The continuing realism of the Yershalaim narrative is in sharp contrast to the strange, unexpected events in Moscow. The Variety Theatre, still coping with the uproar caused by the magic show, is at a loss as to what to do. The only option is to cancel the night's scheduled show and try to recover, but the show's dispersal of fake money snares Vassily Stepanovich. The Moscow authorities continue to be overwhelmed by Woland and his retinue, as evidenced in the staff's involuntary, hypnotic singing. The more dramatic disappearance of Prokhor Petrovich's body is a rather astounding example of how the devilish threesome continues to play with space and time. The city has been turned upside down by these characters, who have only been in town for two days.

Summary and Analysis: Chapter 18

New Characters
Poplavsky: Berlioz's uncle.
Andrei Fokich Sokov: Barman at the Variety Theatre.
Kuzmin: A doctor who treats Sokov's ailment.

Summary
Berlioz's uncle, Poplavsky, arrives in Moscow hoping to gain occupancy of his nephew's former apartment. Finding that Bosoy is not in, he decides to head up to apartment 50, where he encounters Koroviev and Behemoth and learns from Koroviev that Behemoth sent the telegram informing Poplavsky of Berlioz's demise. Behemoth demands to see Poplavsky's passport, tells him he isn't allowed at Berlioz's funeral, and says he should go back to Kiev and lie low. Azazello enters, hits Poplavsky with a chicken, and throws his suitcase down the stairway. Poplavsky makes his way downstairs and encounters an old, melancholy man. He tells the man where apartment 50 is, watches him go upstairs to the apartment, then watches him run back downstairs and out of the building. This man, Andrei Fokich Sokov, is the barman at the Variety. A beautiful, nearly naked woman lets him into the apartment, where he meets Woland and Behemoth. He encounters difficulties with them before asking Woland about the fake bills, which have cost Sokov 109 rubles in change made at the bar. A voice from the adjacent room reveals that Sokov has 249,000 rubles and 200 ten-ruble gold pieces, and a scared Sokov finds the fake bills have turned back into real ten-ruble notes. The voice also predicts Sokov will die of liver cancer next February, and Sokov runs out of the house and heads to a specialist in liver diseases. Believing the prophecy is true, he asks Professor Kuzmin for help. Kuzmin dismisses Sokov as a schizophrenic crook, but when a dancing sparrow flies onto his desk, Kuzmin becomes light-headed and dizzy.
After seeing a nurse with a man's mouth and a fang at his desk, he goes to bed for some much-needed rest.

Analysis
Poplavsky's base reason for coming to Moscow is another example of how Woland draws out characters' inner desires. He, like the theatergoers, is punished for pursuing his desire. The Variety's barman, Sokov, displays more courage than most of those who have encountered Woland. Perhaps this courage comes from his God-fearing nature, but it does not keep him from being frightened by the exposure of his own currency hoarding, or by the prediction that he will die of liver cancer. Sokov, unlike Berlioz, takes the prediction seriously. But the doctor dismisses Sokov's fears as mere phantasms. Kuzmin encounters the same problem of fake magic-show money the cab drivers had dealt with, and he, like Varenukha before him, has a frightening encounter with a dead woman. The meaning of the sparrow's appearance is somewhat obscure. However, sparrows have already appeared in the novel, and it seems plausible that they represent higher powers, whether for good or for bad. Certainly this sparrow only deepens Kuzmin's troubles.

Summary and Analysis: Chapters 19-20

New Characters
Nikolai Ivanovich: Margarita's downstairs neighbor.

Summary
The master's lover, the 30-year-old, childless Margarita, has a comfortable life but does not love her husband. With the master gone and no way of knowing whether he is alive or dead, she sinks into despair. But on Friday, the same day as the bookkeeper's arrest, she wakes up around noon, sensing that her dream last night of the master calling to her means he is either dead and calling for her to join him, or alive, and they will see each other soon. She listens to her housemaid, Natasha, talk about last night's magic show, but she dismisses the stories as a false rumor. Margarita takes the trolley-bus down the Arbat and hears talk of a corpse's head being stolen from a coffin before getting off and taking a seat on a bench under the Kremlin wall. After watching a funeral procession go by, she says she would "pawn [her] soul to the devil" to know if the master is alive. She wonders who is being buried, and Azazello tells her it is Berlioz; the corpse's head, however, has been stolen. He points out Latunsky in the procession in response to her request, then tells her he has come to her with some business, namely, to invite her for a visit to a foreigner that evening. When a disbelieving Margarita dismisses him, he recites some of the master's novel, and an amazed Margarita asks him if the master is alive. Azazello affirms that he is alive and instructs Margarita to take off all her clothes at home at 9:30 that evening, rub herself with the ointment he gives her, and wait for him to call her at 10. Margarita agrees and puts the ointment into her handbag. At 9:29 that evening, Margarita spreads the ointment over her body and, looking in the mirror, sees herself as a twenty-year-old woman with naturally curly black hair. She is pleasantly amazed by this, and, when she feels her body become weightless and free, she becomes very happy. She writes a farewell note to her husband, telling him she is now a witch and is leaving him forever. Natasha sees her transformation and helps her pack for the trip. Meanwhile, Margarita's downstairs neighbor, Nikolai Ivanovich, arrives in his car and sits on a bench in the garden outside the house. Azazello promptly calls and tells her to shout "Invisible!" as she flies over the gate.
Margarita takes the broom that flies into the house, throws off her shift, cries "Invisible! Invisible!" and flies off.

Analysis
Margarita, with her dissatisfaction in the midst of wealth, a superficially happy marriage, and roomy lodgings, is a stark contrast to the earlier Moscow characters, who sought material luxury as though it were the key to happiness. Margarita is instead devoted to her relationship with the master. She also follows the presentiment that arises from her dream: unlike other characters, she trusts her intuitions about the supernatural. She is even able to talk to the absent master from her bench under the Kremlin wall. Margarita is not superstitious, though, as is seen in her refusal to believe Natasha's stories about the magic show. She responds to Azazello's arrival with fresh, uninhibited speech, and even reproaches him. Her courage, even audacity, is matched only by Yeshua's. But she does trust Azazello and agrees to visit Woland, seeing it as a chance to reunite with the master. Woland, who has earlier caused several people to age rapidly, performs the opposite trick for Margarita. Her sense of freedom and anticipation also contrasts with the fear so many characters have felt. She realizes that Woland is not dangerous to her, and embraces her future as a witch who has abandoned her husband. Other characters seem to dread the future, but she thinks it will bring her happiness.

Summary and Analysis: Chapters 21-22

New Characters
Fat Man: A man whom Margarita encounters on the banks of the Yenisey River.
Hella: A witch.

Summary
Margarita's invisible flight is underway. She flies along the Arbat, dodging utility wires, about 20 feet off the ground before coming across the House of Dramatists and Literary Workers. Margarita finds Latunsky's name on the tenants' list, locates his top-floor apartment, and enters through an open window. She turns on the bathtub faucet, smashes the piano with a hammer, and starts smashing Latunsky's windows and other windows on the top floor. Continuing to smash windows methodically, she descends to the fourth floor as the overflowing bathtub water starts to fall through the floor to the apartment below Latunsky's. But she sees a small boy who tells her he is afraid, and she stops smashing windows, puts down her hammer, and quickly flies out of Moscow. Natasha, flying on a pig who is the transformed Nikolai Ivanovich, joins her and explains that the ointment has enabled Natasha to fly and produced the neighbor's transformation. Natasha flies on ahead, and Margarita, sensing that her goal is near, slows her flight to land near the Yenisey River. There, she sees a naked, fat, drunken man, who calls her Queen Margot and falls into the river. Margarita flies to the opposite bank of the river, where musicians are playing a march in her honor, and naiads, naked witches, and a goat-legged figure give her a welcome. The goat-legged figure calls for a car on an improvised telephone made from two twigs, and the car, driven by a crow, arrives to take Margarita back to Moscow. The car drops off Margarita at a deserted cemetery, where she meets Azazello. They fly on the broom to 302-bis. They walk past three men, each wearing a cap and high boots, and go into apartment 50. They walk up the dark apartment's stairs to a landing and see Koroviev there, holding a little lamp. Koroviev, dressed in formal wear, asks Margarita to follow him as Azazello disappears.
She sees that they are in a huge hall, and Koroviev explains its size by saying that it is easy for someone acquainted with the fifth dimension to expand space. Koroviev goes on to tell Margarita that Woland gives a ball every year in the spring on the full moon, and that he needs her to serve as hostess. The hostess must be named Margarita and be a Moscow native, two requirements she meets. Margarita accepts, and she and Koroviev enter a small room, in which Azazello stands. The witch who had surprised the Variety's barman, and whose name is Hella, sits on a rug by an oak bed. Behemoth sits before a chess table holding a knight, and Woland sits on the bed, staring at Margarita and wearing a long nightshirt. He and Behemoth are playing chess. Behemoth is dressed in a bow tie and wears ladies' opera glasses on a strap around his neck, which he deems suitable attire for the evening's ball. Woland identifies Behemoth for Margarita, and some of the chess pieces begin moving in their squares, to her surprise. In a bid for victory, Behemoth gets Woland's king to run off the board, but Woland sees what has happened and gets the cat to give up. Woland shows Margarita his globe, a living microcosm of the real world, with wars, fires, and collapsing houses happening on it. He tells Margarita that Abaddon, whose name is the Hebrew word for destruction, does excellent work, and Abaddon promptly emerges from the wall, scaring Margarita with his dark glasses. Azazello tells Woland that Natasha and Nikolai Ivanovich are at the apartment door trying to get in, and Woland decides to have Natasha stay with Margarita but refuses to let Nikolai Ivanovich into the ballroom.

Analysis
Woland has manipulated space before, but here he grants Margarita control over space in her flight on the broom. She uses her power to gain revenge on Latunsky by smashing his windows and flooding his apartment, but she stops when she notices the little boy's fear. Natasha's transformation into a witch seems to play no important role in the plot, but it does give Margarita the first chance to exercise her new power as witch and queen by letting Natasha remain a witch. Her arrival in the forest seems to be deeply symbolic, especially her encounter with the drunken fat man. Why is she greeted with such ceremony, and what was the point of having her fly all the way to the forest when she is immediately driven back to Moscow by the crow? Perhaps such a ceremony is required of all those who visit Woland, but it at least serves the purpose of displaying Margarita's new status. Margarita's entrance into apartment 50 is marked by the strangeness of Koroviev's introductory talk about how easy it is to expand space. This has already been made clear by the depositing of Styopa in Yalta, among other events, but Koroviev merely makes a little joke of proving that space is expandable. He also tells Margarita she was the only suitable hostess for Woland's ball, but does not tell her why she was suitable. The ensuing scene in the room where Woland and the cat play chess gives Margarita a further chance to prove her mettle. Woland's description of his three comrades as "a small, mixed and guileless company" is hard to take at face value, but here, with Margarita as their guest, they appear very relaxed and unguarded. The sight of the horrors on Woland's globe highlights the agonies of earthly existence, dominated as it is by Abaddon's destructiveness.
Summary and Analysis: Chapter 23

Summary
With midnight looming, the hosts must hurry to prepare for the ball. Margarita is washed in a jeweled pool filled with blood, then with rose oil. Rose-petal slippers are put on her feet, a diamond crown is put on her head, and Koroviev hangs an oval picture of a black poodle around her neck. He instructs her to acknowledge every guest, and an empty ballroom appears, adorned with columns, tulips, and lamps, and populated by some "naked Negroes" standing by the columns and an orchestra of roughly 150 men conducted by Johann Strauss. Another room has walls of roses and a wall of Japanese double camellias, fountains of champagne bubbling in three pools, Negroes to serve the champagne, and a jazz band. Margarita is put on her throne, from which she can see a huge fireplace in the vast front hall. Just after midnight, the first guests--the counterfeiter, alchemist, and traitor Monsieur Jacques and his wife--emerge from a gallows and a coffin that drop down into the fireplace. Margarita receives them, and more figures emerge from coffins in the fireplace. More and more guests arrive, but Margarita takes particular notice of one woman who had used a handkerchief to choke her newborn boy to death and who, for the thirty years since, has woken each morning to find that same handkerchief lying on her night table. When Margarita asks about the fate of the man who raped this woman and fathered the child, Behemoth, who has gone underneath her throne, says not to bother with him, and Margarita, warning him to say nothing more, rakes Behemoth's ear with the fingernails of her left hand. The woman, named Frieda, briefly talks with Margarita, who advises her to get drunk. Koroviev introduces numerous other guests, but Margarita grows tired of them and their stories. Her body feels weary, especially her right knee, which is being kissed by all the guests. After three hours of receiving guests, the last two arrive. Margarita is then massaged in a pool of blood and regains her strength, which she needs to manage the crowd of guests dancing to songs played by a jazz band of monkeys. Margarita and Koroviev leave the pool, and after a few bizarre visions, Margarita returns to the ballroom, where, to her amazement, a clock strikes midnight. Silence falls upon the guests, and Woland, Azazello, Abaddon, and some men who resemble Abaddon walk in. Azazello holds Berlioz's head on a platter, and Woland tells the head about his prediction of Berlioz's death. He declares that, according to the notion that "it will be given to each according to his faith," Berlioz will pass into non-being. But a new guest, Baron Meigel, "an employee of the Spectacles Commission" charged with showing foreigners around Moscow, has arrived. Meigel had offered his services to Woland, and Woland returned the favor by inviting Meigel to the ball. However, he quickly accuses Meigel of being "a stool-pigeon and a spy" and predicts he will die within a month. Azazello fatally shoots Meigel, and Koroviev, after catching the blood spouting from Meigel's body in a cup, gives it to Woland to drink. Once Woland has drunk, his patched shirt and worn slippers are replaced with a black chlamys and a steel sword on his hip, and he tells Margarita to drink. She does, and the ballroom disappears, replaced by the ordinary setting of apartment 50. Margarita walks through the apartment's door.
Analysis
Margarita's bath in rose oil and blood recalls Pilate's hatred of rose oil and the earlier images of blood. The rose-petal slippers add to the sense that in this novel, roses do not represent vitality or happiness. The generally sumptuous, even decadent atmosphere of the ball conflicts with its ravaged guests, who, appropriately, begin emerging from the fireplace at the witching hour of midnight. The stories connected with the guests are all grotesque, but there seems to be no point to them. Unlike in Dante's Inferno, which apparently inspires this assemblage of the damned, Margarita is not instructed by their stories; she merely endures them. Perhaps her role as hostess is merely a trial of her strength. Berlioz's appearance gives Woland the chance to disprove Berlioz's theory, but he surprisingly shows mercy to Berlioz, condemning him to mere nonexistence, not damnation. Baron Meigel, on the other hand, suffers death for merely doing his job. Woland's reassurance that Margarita is not drinking blood displays once more the curious role played by alcohol, both at the ball and in the novel as a whole.

Summary and Analysis: Chapter 24

Summary
Woland's bedroom is just as it was before the ball. Margarita drinks a glass of pure alcohol and feels refreshed by it. She drinks a second glass and starts to eat caviar. Koroviev confirms her suspicion that the three men at 302-bis were secret police and predicts they will come to arrest him. Margarita, excited by Meigel's murder, prompts Koroviev to say that Azazello can hit any covered-up object. The company proceeds to hold target practice with playing cards. At about 6 A.M., Woland says Margarita can request something in return for serving as hostess. Margarita asks that Frieda no longer be given her handkerchief, and when Frieda appears, Margarita declares that this will be done. Woland says that since Margarita granted this wish herself, she can have one more for herself. Margarita asks for her master to come back immediately, and he promptly appears. Margarita, now clothed in a black silk cloak, sees that he looks sick, but when he drinks two glasses she offers him, he gets better. The master and Woland talk about his novel before the master's manuscript is found on top of a stack of manuscripts Behemoth was sitting on. Margarita asks Woland for her and the master to return to their former life in the basement apartment, and Woland grants this wish. But before the two leave, Natasha wins her wish to remain a witch; Nikolai Ivanovich asks only for an official certificate explaining where he spent the night; and Varenukha successfully asks to return to his life before he became a vampire. After Margarita is given a diamond-studded horseshoe, Woland's retinue escorts her and the master into a black car by the entrance to 302-bis. When Margarita realizes she forgot her horseshoe, Azazello runs up to get it and encounters the Annushka who had spilled the sunflower oil on which Berlioz slipped. She had stolen the napkin-covered horseshoe after the party walked downstairs. Azazello orders her to give it back and gives her 200 rubles in exchange for having preserved the horseshoe. He gives the horseshoe to Margarita, and goodbyes are exchanged before the car takes Margarita and the master to their basement. There, Margarita starts reading from the master's Pilate novel.

Analysis
The alcohol given to Margarita sparks her vivacity, and this strength, together with Woland's comment that Meigel's blood has given rise to grapevines, calls to mind the Christian belief in transubstantiation.
The conversation between her, Woland, Koroviev, Azazello, and Behemoth is striking in its quick turns of subject, its lack of small talk, and its immediacy. Margarita's self-sacrificing and trusting spirit is rewarded by Woland's granting her multiple wishes, and she asks first not for the return of the master but for relief for Frieda. This request both brings to mind Goethe's Faust and sparks Woland's comments on mercy. This mercy is not within Woland's power, of course, and Margarita is rewarded for her wish by both having the power to grant it herself and being granted another one. The drink that had helped Margarita helps the master as well. And he too is treated mercifully by the return of his manuscript. Woland's comment that "manuscripts don't burn" seems to speak to the power of art, especially in overcoming tyranny. The couple, in exchange for enduring their trials, are returned to their basement apartment. The appearance of Nikolai Ivanovich and Varenukha shows how those two men are not capable of suffering the trials the couple endured. Nikolai Ivanovich merely asks for the proper bureaucratic procedures to be followed, and Varenukha seems simply to lack the strength to be a vampire. The return of Margarita's diamond-studded horseshoe gives the novel a chance to reiterate Annushka's status as a bad omen and her grubby, materialistic nature. It also again emphasizes the theme of exposure and secrecy.

Summary and Analysis: Chapter 25

Summary

A hurricane has struck Yershalaim, and Pilate is lying on a couch under the columns of his palace. He mutters to himself before seeing the hooded man who had been present at the execution. They greet each other, and as the evening sun starts shining, the hooded man reports that the city's populace is calm and the Roman troops can leave. They converse about the execution before Pilate raises the issue of Judas of Kiriath. The hooded man, who is the head of the Roman secret police in Judea, confirms that Judas will be paid well for handing over Yeshua. Pilate mentions his fear that Judas will be killed tonight by one of Yeshua's friends. He asks the hooded man, whose name is Aphranius, to protect Judas, and though Aphranius vows to do this, Pilate predicts Judas will be killed. He also asks Aphranius for a report on the burial of the executed men before Aphranius leaves.

Analysis

The weather imagery of the chapter, with its initial hurricane and the sun emerging from that storm to shine its twilight rays on Yershalaim, calls to mind the important role weather has played throughout the novel in setting scenes and highlighting moods. Here, the storm seems to reflect Pilate's unsettled mind as well as provide the appropriate backdrop for the shadowy machinations of Aphranius, the hooded man. Aphranius' vow that only the power of the Roman Caesar is guaranteed is ironic, given that Pilate has gained no peace from his own power, and the Roman Empire itself will begin its decline not long after Pilate leaves office. Despite that vow about guarantees, Pilate is willing to prophesy Judas's death. Aphranius appears to read this prophecy as an order to murder Judas.

Summary and Analysis: Chapter 26

New Characters

Niza: A woman pursued by Judas; she betrays him to the hooded man.

Judas: The man who betrayed Yeshua; he is murdered by the hooded man's henchmen.

Summary

An anguished Pilate calls to his dog, Banga, for comfort.
Meanwhile, Aphranius gets three carts loaded with entrenching tools and barrels of water, and the cart drivers, escorted by 15 men on horseback, set off for Bald Mountain. Aphranius leaves as well on horseback and goes to Antonia Fortress, then to Greek Street in the Lower City. He meets Niza, a young woman, at a house there, and they leave separately after their brief meeting. At the same time, Judas of Kiriath is leaving his dreary house. He goes into Kaifa's courtyard briefly, then heads toward the marketplace of the Lower City, where he sees Niza. Niza tells Judas to go to the olive estate in Gethsemane, following her lead, and meet her in the grotto there. He follows Niza and calls for her in the estate's garden, but instead two men jump out at him. The first man fatally stabs Judas in the heart, and Aphranius appears on the estate's road to tell the two men to leave quickly. They take Judas's purse and its thirty tetradrachmas. Aphranius comes back to Yershalaim, puts on his helmet and sword, and reverses his cloak into a military chlamys. On the Passover night, Yershalaim is celebrating, but Pilate in his palace merely goes to sleep around midnight and begins to dream. In the dream, he, accompanied by Banga, walks with Yeshua in the moonlight, talking about "something very complex and important." Pilate realizes that Yeshua must be alive if he is walking beside him, and Yeshua says both that cowardice is the worst vice and that he and Pilate will always be linked. Pilate wakes to realize Yeshua is indeed dead, and he sees Mark Ratslayer, who tells Pilate that Aphranius is waiting to see him. Aphranius informs Pilate that Judas has died in Gethsemane, but Aphranius claims not to know who killed Judas. Aphranius also tells Pilate the executed men have been buried, and the body of Yeshua was found with Matthew Levi in a cave on the northern slope of Bald Skull. Matthew Levi, who helped bury Yeshua, is now at the palace, and he meets Pilate in its garden. Matthew Levi asks that his bread knife be returned to the shop he took it from. He shows Pilate a parchment scroll with some of Yeshua's sayings. Matthew Levi rejects Pilate's offer of a position in Pilate's library in Caesarea and declares that he will kill Judas. Pilate smiles as he tells Matthew Levi that he himself has already killed Judas. Levi leaves after asking for a piece of clean parchment, and dawn breaks with Pilate and Banga asleep once again.

Analysis

Pilate's weakness, weariness, and fear are reiterated at the very start of the chapter. Judas, in contrast, is neither weary nor wary, thanks to his desire for Niza, and therefore entirely fails to realize that she is leading him into a deadly trap. Her betrayal, together with Aphranius' underhanded act of murder, replays the themes of falseness, duplicity, and secret dealings seen constantly throughout the novel. Pilate's moonlight dream clearly shows his regret at Yeshua's death and his sense of his own cowardice, but Pilate appears to be beyond help. The bloody business of Judas' death is at hand, and Pilate accepts Aphranius' evasive assurance that Judas is indeed dead. Pilate has also predicted that Yeshua's body will be taken, but it is impossible not to compare the subdued story of Matthew Levi helping bury Yeshua with the Gospel accounts of Jesus's resurrection. Here, events take place on a much more mundane level, and the bribe offered Matthew Levi is only a job as a librarian.
In another mundane instance, Matthew Levi's murderous desire is quelled by the simple fact that Judas is already dead.

Summary and Analysis: Chapters 27-28

New Characters

Pavel Yosifovich: A guard at the currency store.

Summary

As Margarita finishes reading the chapter, Saturday morning has come to Moscow. The police are busy investigating Woland's appearance. Sempleyarov was called to the investigation headquarters and questioned about the magic show, and apartment 50 was visited more than once to check for hiding places and occupants, without anything being found. There is no evidence of Woland's presence in Moscow. Prokhor Petrovich has returned to his suit but knows nothing about Woland. Rimsky was found hiding in the wardrobe of a Leningrad hotel room and ordered to return to Moscow on Friday evening. Styopa was found to have left Yalta in a plane headed for Moscow, and everyone else but Varenukha has been found. The investigators visited Ivan at the clinic on Friday evening. He answered questions about Koroviev and Berlioz's death, and the man who questioned him decided that Berlioz was hypnotized when he died. At dawn on Saturday, Styopa disembarked and was greeted by investigators, and Varenukha has at last been found in his apartment. Varenukha, after initially lying, talks about being beaten by Koroviev and a fat man resembling a cat, and Rimsky comes in on the Leningrad train but reveals no information. After more questioning of other witnesses, a company of men arrives at 302-bis on Saturday afternoon. Koroviev, Azazello, Woland, and Behemoth are in apartment 50 awaiting their arrest. The company uses skeleton keys to enter the apartment. They find Behemoth holding a primus on the dining room table. He dodges an attempt to catch him with a net but is hit by a shot from a Mauser. Behemoth drinks benzene to heal his wound, takes out his Browning, and opens fire. The ensuing shootout wounds no one, and Behemoth, declaring it "time to go," uses the benzene to set fire to the apartment. The company of men escapes, and Woland and his retinue fly out of the apartment. Koroviev and Behemoth make their way to a currency store on the Smolensky marketplace, a department store selling items in exchange for foreign currency. Behemoth transforms himself into a cat-like fat man holding a primus when he is told no cats are allowed in the store. They enter, and Behemoth eats some mandarins, a chocolate bar, and three herrings. After a salesgirl calls for Pavel Yosifovich, he arrives and calls for the doorman to blow a whistle. A crowd surrounds Behemoth and Koroviev, who makes a weak protest. Behemoth sets his benzene ablaze, and the pair escape to the Griboedov House, where a woman asks them for the writers' identification cards required for entrance, but they have none. Archibald Archibaldovich, the restaurant manager, orders her to let them in, and the two sit down with him at the best table in the restaurant. Archibald leaves the pair at their table to look after the preparation of the fillets of hazel-grouse being served to them and, as some guests talk about the fires set across Moscow, three armed men enter and open fire at Koroviev and Behemoth. The two vanish, and the benzene sets fire to the Griboedov House.

Analysis

Margarita has endured her trials without suffering damage from them, a sign of her strong and resilient character. Meanwhile, the Muscovite authorities, like Aphranius' henchmen searching for Yeshua's body, have trouble finding the men they are looking for.
The parallelism of these searches for criminal culprits cannot be accidental and seems meant to highlight underlying similarities between Yershalaim and Moscow. The Moscow investigation leads nowhere, as the earlier successes in exposing currency hoarders are followed by an inability to track down Woland and his retinue, who have brazenly flouted the law. Here, petty criminals are rigorously prosecuted, while large-scale crime goes unpunished. The miserable condition of most of Woland's victims contrasts with Ivan's dreaminess and plain indifference. The episode of the chase, with Koroviev and Behemoth using their manipulation of time and space to escape, reprises earlier instances of such artifice. This artifice has some similarity with the master's attempt to reach back two thousand years to tell the story of Pilate. In Moscow, though, it is simply used to effect a quick, mysterious escape. Archibald Archibaldovich's suave, tactical treatment of Koroviev and Behemoth recalls the ease with which Margarita handled her trial. He, like her, has the ability to cope with and manage serious and threatening characters.

Summary and Analysis: Chapters 29-32

Summary

Woland and Azazello are sitting on the stone terrace of an old Moscow house, looking over the city as the sun sets. Matthew Levi, who has been sent by Yeshua, appears on the terrace and asks Woland to give the master and Margarita peace, rather than the light. Woland agrees, Matthew Levi leaves, and Koroviev and Behemoth, who still appears as a fat man, arrive. Woland tells them that one last storm is coming to complete things, and the storm arrives, darkening the skies over Moscow. Meanwhile, the master and Margarita awake and talk in their basement. She tells a disbelieving master that they were really at Satan's, that she struck a deal with him, and that she is now a witch. Azazello appears to say Woland has invited the couple to go on an excursion with him. They agree to go. Azazello takes a bottle of the same wine Pilate had drunk and pours it into glasses. The wine is poisoned, and upon drinking it both the master and Margarita fall ill and die. Azazello then pours some drops of the wine into Margarita's mouth to revive her. Margarita helps give the master some wine, and he too revives. The couple leaves after Azazello starts a fire in the basement, and the three jump on their steeds and fly over Moscow. The master and Margarita go to the clinic to visit Ivan and say farewell. Margarita kisses Ivan and tells him, "[E]verything will be as it should be with you." The couple leave, and Praskovya Fyodorovna, the nurse, reveals that the master has just died in room 118. The master, Margarita, and Azazello join Woland, Koroviev, and Behemoth on their horses, on a hill overlooking Moscow. As the master takes one last look at Moscow, Behemoth and Koroviev give their own farewell whistles. Woland cries "It's time!" and the six steeds and their riders depart into the sky, with Margarita looking back to see nothing of Moscow. The horses tire as evening settles on the earth and night emerges. Koroviev changes into a knight, and Woland explains that the knight is here because he once made an unfortunate joke about light and darkness, and this is a night "when accounts are settled." Behemoth is transformed into a thin youth, and Azazello's face turns white and cold, taking on the visage of "the demon of the waterless desert." The master's hair turns white, and Woland's horse becomes "a mass of darkness."
Eventually the riders stop their horses near Pilate, who sits in an armchair on a desolate summit, accompanied by his dog, Banga, who like Pilate looks up at the moon. Pilate has slept in his chair for some two thousand years, but he and Banga are overcome by insomnia during full moons. Pilate always dreams that he wants to walk with Yeshua on a path in the moonlight but cannot reach the path, so he only talks to himself, cursing his immortality and fame. The master shouts to Pilate that he is free and that Yeshua waits for him. The mountains collapse, leaving just the armchair, and a city arises, with the moonlight path shining over it. Pilate and Banga rush down this path. Woland bids farewell to the master and Margarita, the scene disappears, and dawn breaks just after the midnight moon. The couple walk over a small stone bridge along a path, and Margarita points to the master's eternal home, where he will sleep and she will watch over him.

Analysis

Woland and Azazello's retreat from Moscow indicates that the novel is heading toward its denouement. The crucial events are finished; now it remains to determine the characters' fates. So Matthew Levi appears to tell Woland that the master and Margarita deserve peace rather than light as a reward for their courage. The couple may not be angelic, but they are heroic. The killing of the couple by Azazello is the result of their courage in deciding to stay together and bear their woes together. Again alcohol produces a change in characters, this time for the worse, but it also paves the way for their fate to be resolved. The basement's destruction by fire serves as a reflection of the death of the couple. It emphasizes that both their home and their past are concluded, and they are beginning a new life. The farewell to Ivan shows that he too is being given a reward for his trials. The transformations of Koroviev and Behemoth seem to turn them back into the beings they originally were, before being condemned to serve Woland. They, in any case, are a sidelight to the drama of Pilate, who, still a coward, is still alone, weary, and living in a shadowy world. It is not easy to tell why the master is assigned the job of freeing Pilate. Perhaps his artistic paternity (having written the novel about the procurator) has given him the authority to free Pilate. Yeshua and Pilate are now able to renew their ancient conversation, and the master and Margarita gain their eternal, peaceful life in their new home. The master's sleep seems to be a mark of his redemption, as the suffering he has endured is replaced with a long, restful slumber.

Summary and Analysis: Epilogue

Summary

Back in Moscow, the narrator surveys the aftermath of Woland's appearance. Rumors of unclean powers have spread, but Woland himself has simply disappeared. Many black cats have been killed, and some citizens with names similar to Koroviev and Woland were detained. Most citizens dismiss the entire affair as a case of artful mass hypnosis, but the populace remains on edge. Natasha and Margarita have disappeared, and people generally think Woland's retinue took them because of their beauty. Georges Bengalsky has lost his vigor and retired, while Varenukha has become pleasant and kind, and Styopa has grown healthier but keeps away from women. Rimsky has left his post to head up a children's marionette theatre, Sempleyarov is now manager of a mushroom cannery, and Bosoy has stopped going to the theatre.
Ivan, meanwhile, appears at the Patriarch's Ponds at every festal spring full moon, the first after the equinox. He sits on the bench where he sat on the day Berlioz died, talks to himself for an hour or two, then goes to the Arbat. There, he goes to a Gothic mansion and sees Margarita's former neighbor Nikolai Ivanovich sitting on a bench in the house's garden, muttering about his fate. Ivan goes home sick. In his recurring dream on this night, he sees an executioner stab his spear into the heart of Gestas, one of the men executed on Bald Mountain. Then Ivan receives an injection and sees in his dream Pilate and Yeshua, walking on a moonlight path and talking about Yeshua's execution. Ivan is then led by a beautiful woman to the master, and the woman says, "Everything with you will be as it should be," before kissing him on the forehead. The moonlight floods over Ivan, and he sleeps with a blissful face.

Analysis

It remains for the epilogue to describe the consequences in Moscow of Woland's visit. The bungled investigation and general paranoia create many innocent victims, most notably the cats. In becoming anonymous to the authorities, the master has gained oblivion, which is perhaps the rarest gift available in a state that obsessively monitors its citizens. Nearly all those who were in contact with Woland and his retinue are damaged, and the prediction of the barman Sokov's death has come true. But Ivan, still haunted by his meeting with Woland, cannot escape from the story of Yeshua and Pilate. He too, it seems, is rewarded for his bravery by being granted peace, albeit in a temporary form.

5. QUIZZES

Questions and Answers: Chapter 1

Questions
1. What is for sale at the stand with the sign "Beer and Soft Drinks"?
2. Why is Berlioz upset with Ivan Homeless' poem about Jesus Christ?
3. What nations do Berlioz and Ivan think the Professor comes from?
4. Who does the Professor predict will kill Berlioz?
5. How does the Professor describe himself?

Answers
1. The stand has no seltzer or beer for sale, only warm apricot soda.
2. Berlioz says the poem presents Jesus as a living person when it should present him as someone who never existed.
3. Berlioz thinks the Professor comes from Germany, then France, and Ivan thinks he comes from England, then Poland.
4. The Professor predicts a Russian woman who belongs to the Komsomol will kill Berlioz.
5. The Professor describes himself as a polyglot who knows "a great number of languages," "a specialist in black magic," a historian, and perhaps a German.

Questions and Answers: Chapter 2

Questions
1. Why does Pontius Pilate fear he will have "a bad day"?
2. What happens to Yeshua after he calls Pilate a "good man"?
3. What is Yeshua's reaction to the writings of Matthew Levi?
4. How does Yeshua say he entered Yershalaim?
5. Why does Pilate squint his eyes as he mounts the platform?

Answers
1. Pilate fears he will have a bad day because he has been pursued by the smell of rose oil since dawn.
2. Pilate has Mark Ratslayer beat Yeshua with a whip and tell him to call Pilate Hegemon and stand at attention.
3. Yeshua claims he has said none of the things Levi has attributed to him, and he asks Levi to burn his writings.
4. He says he entered Yershalaim "by the Susa gate, but on foot, accompanied only by Matthew Levi," and no one recognized him as he entered.
5. Pilate squints his eyes because he does "not want to see the group of condemned men" being brought to the platform.

Questions and Answers: Chapters 3-4

Questions
1. Who is the last person Berlioz sees before he dies?
2. What is the Professor's reply to Ivan's question, "[W]ho are you"?
3. Who does Ivan think the Choirmaster is?
4. Where does Ivan realize he will find the Professor?
5. When Ivan comes out of the Moscow River, what does he find in place of his clothes?

Answers
1. The last person Berlioz sees is the citizen who had formed "himself out of the thick swelter" earlier, a man with checkered trousers, a small mustache, and tiny eyes.
2. The Professor's reply is "no understand no speak Russian."
3. Ivan thinks the Choirmaster is in partnership with the Professor and is trying to keep Ivan from catching the Professor.
4. Ivan realizes he will find the Professor in house 13, apartment 47.
5. Ivan finds "a pair of striped drawers, the torn Tolstoy blouse, the candle, the icon and a box of matches."

Questions and Answers: Chapters 5-6

Questions
1. How is Griboedov's restaurant described?
2. What two things happen exactly at midnight?
3. Where is the psychiatric clinic?
4. What does Ivan denounce Riukhin as?
5. What measure does Ivan take to catch the Professor?

Answers
1. Griboedov's restaurant is described as being richly decorated, having a select clientele, and offering top-quality food for reasonable prices.
2. The twelve writers go down to Griboedov's restaurant, and Griboedov's jazz band starts playing.
3. The psychiatric clinic is "on the outskirts of Moscow by the bank of the river."
4. Ivan denounces Riukhin as "a little kulak carefully disguising himself as a proletarian."
5. Ivan uses a small candle and the icon to catch the Consultant.

Questions and Answers: Chapters 7-8

Questions
1. What began happening two years ago at apartment 50?
2. Why does Styopa have trouble speaking?
3. How much is Woland to be paid for his performances of black magic?
4. Where does Styopa find himself after leaving apartment 50?
5. What does Dr. Stravinsky believe will happen to Ivan if he goes to the police?

Answers
1. Two years ago, people began disappearing from apartment 50 without a trace.
2. Styopa has trouble speaking because "at each word, someone stuck a needle into his brain, causing infernal pain."
3. Woland is to be paid 10,000 rubles up front, "as an advance on the thirty-five thousand rubles due him for seven performances."
4. He finds himself at the end of a jetty in the city of Yalta.
5. Dr. Stravinsky believes Ivan will be back at the clinic in two hours if Ivan goes to the police.

Questions and Answers: Chapters 9-10

Questions
1. Who is Bosoy?
2. How much does Koroviev pay to rent apartment 50?
3. What does Rimsky think of the black magic show?
4. What does Varenukha hear when he calls the Likhodeev apartment?
5. Why does Varenukha stop on his way to deliver the telegrams?

Answers
1. Bosoy is the "chairman of the tenants' association of no. 302-bis on Sadovaya Street in Moscow," Berlioz's former residence.
2. Koroviev pays five thousand rubles to rent apartment 50.
3. He completely dislikes the black magic show and is "surprised he's been allowed to present it."
4. Varenukha hears "a heavy, gloomy voice singing: '...rocks, my refuge...'" when he calls the Likhodeev apartment.
5. Varenukha stops on his way to deliver the telegrams because he feels an irrepressible desire "to check whether the repairman had put a wire screen over the light-bulb" in the summer toilet.

Questions and Answers: Chapters 11-12

Questions
1. What is Ivan worried about as he starts to write his statement about the Consultant?
2. How is Georges Bengalsky described?
3. What is the first trick Fagott performs?
4. What does Woland say about the audience?
5. What does Fagott say Sempleyarov did the previous night?

Answers
1. Ivan is worried that if he describes Berlioz as being deceased, it might lead the clinic to "take him for a madman."
2. Georges Bengalsky is described as plump, wearing "a rumpled tailcoat and none-too-fresh shirt," and having a clean-shaven face.
3. The first trick Fagott performs is to flip a deck of cards to the cat, then have the cat send the cards back, with Fagott swallowing the cards.
4. Woland says the audience members "love money," like all people, and that "the housing problem has corrupted them."
5. Fagott says Sempleyarov went to visit his mistress on Yelokhovskaya Street.

Questions and Answers: Chapter 13

Questions
1. What things upset Ivan's guest?
2. What languages does Ivan's guest know?
3. What did Ivan's guest know the end of his novel would be?
4. What was the title of Latunsky's article on Ivan's guest?
5. How did Ivan's guest get to the clinic?

Answers
1. Ivan's guest is upset by noise, violence, and, in particular, people's cries.
2. Ivan's guest knows Russian, English, French, German, Latin, and Greek.
3. "The fifth procurator of Judea, the equestrian Pontius Pilate."
4. The title of Latunsky's article was "A Militant Old Believer."
5. Ivan's guest began walking to the clinic, then was picked up by a truck driver who was driving to the clinic.

Questions and Answers: Chapters 14-15

Questions
1. How has Varenukha's appearance changed?
2. What interrupts the dead woman?
3. Who are the members of the audience in Bosoy's dream?
4. What was Dunchil hiding in his mistress' apartment?
5. What does the artist say about the human eye?

Answers
1. Varenukha now has a chalky pallor, and his eyes display furtiveness and cowardliness.
2. The crowing of a cock in the garden interrupts the dead woman.
3. All the members of the audience are bearded men.
4. Dunchil was hiding "Eighteen thousand dollars and a necklace worth forty thousand in gold."
5. The artist says the human eye cannot conceal the truth.

Questions and Answers: Chapters 16-17

Questions
1. Where is Matthew Levi during the execution of the three men?
2. How does Matthew Levi get his bread knife?
3. What does Matthew Levi do with his bread knife?
4. What forces the employees of the affiliate for city spectacles to sing?
5. What does the bookkeeper find when he opens his bundle at the cash deposit window?

Answers
1. Matthew Levi sits on a stone under a sickly fig tree on the north side of the mountain during the execution of the three men.
2. Matthew Levi takes his bread knife from the counter of a bread shop in Yershalaim.
3. Matthew Levi uses his bread knife to cut the ropes binding the three executed men to their posts.
4. The secretary of the affiliate for city spectacles says the employees are forced to sing by "some sort of mass hypnosis."
5. The bookkeeper finds various kinds of foreign money when he opens his bundle at the cash deposit window.

Questions and Answers: Chapter 18

Questions
1. Why does Poplavsky go to Moscow?
2. How does Poplavsky respond when Koroviev says the cat sent Poplavsky the telegram?
3. How is the man who asks Poplavsky about the location of apartment 50 described?
4. What is the girl in apartment 50 wearing?
5. How much money does Woland say the barman has?

Answers
1. Poplavsky goes to Moscow to try to get occupancy of Berlioz's apartment.
2. Poplavsky goggles his eyes in disbelief when he hears that the cat sent him the telegram.
3. The man who asks Poplavsky about the location of apartment 50 is tiny and elderly, "with an extraordinarily melancholy face."
4. The girl in apartment 50 is wearing a small lacy apron, "a white fichu on her head," and golden slippers on her feet.
5. Woland says the barman has 249,000 rubles in five savings banks "and two hundred ten-ruble gold pieces at home under the floor."

Questions and Answers: Chapters 19-20

Questions
1. Who is Margarita married to?
2. How does Margarita interpret her dream of the master?
3. When does Azazello encounter Margarita?
4. What are Azazello's instructions to Margarita?
5. What does Margarita do with her shift?

Answers
1. Margarita is married to an unnamed man described as "a very prominent specialist" who has made a very important discovery.
2. Margarita interprets her dream of the master to mean that he is either dead, and she will die soon to join him, or he is alive, and they "will see each other very soon!"
3. Azazello encounters Margarita just after she says she'd pawn her "soul to the devil just to find out" if the master is alive or not.
4. Azazello tells Margarita to take off her clothes at 9:30 and rub her face and body with the ointment, then wait for him to call her at 10.
5. Margarita throws her shift over her husband's head.

Questions and Answers: Chapters 21-22

Questions
1. Where does Latunsky live?
2. How does Margarita realize she is flying very rapidly?
3. What does the hog carry with him?
4. How were three rooms changed into six?
5. What is Abaddon's appearance?

Answers
1. Latunsky lives in apartment 84 of the "House of Dramatists and Literary Workers."
2. Margarita realizes she is flying very rapidly once she looks down and sees two rows of lights quickly vanish beneath her.
3. The hog carries a briefcase in his front hoofs, a pince-nez on a string, and a hat.
4. Three rooms were changed into six by dividing one room of a three-room apartment in two, trading the apartment for a three-room and a two-room apartment, then trading the three-room apartment for two two-room apartments.
5. Abaddon appears as a gaunt man with dark glasses.

Questions and Answers: Chapter 23

Questions
1. What is Margarita washed in?
2. What does the cat think is the worst job in the world?
3. What did Frieda do with her handkerchief?
4. What happens to Berlioz's head?
5. What does Woland say Baron Meigel is being accused of?

Answers
1. Margarita is first washed in blood, then in rose oil.
2. The cat thinks being a tram conductor is the worst job in the world.
3. Frieda used her handkerchief to choke her newborn boy.
4. The flesh of Berlioz's head turns dark and shriveled, then falls off in pieces, and its eyes disappear.
5. Woland says "wicked tongues" are calling Baron Meigel "a stool-pigeon and a spy."

Questions and Answers: Chapter 24

Questions
1. What target does Azazello shoot at?
2. What does Margarita think she will do if she gets out of Woland's apartment?
3. What does Woland say about mercy?
4. Where is the master's manuscript?
5. Where does Annushka hide the diamond-studded horseshoe and napkin?

Answers
1. Azazello shoots at the upper right-hand pip in the seven of spades.
2. Margarita thinks she will drown herself in the river if she gets out of Woland's apartment.
3. Woland says mercy "sometimes creeps, quite unexpectedly and perfidiously, through the narrowest cracks."
4. The master's manuscript is on the top of a "thick stack of manuscripts" the cat is sitting on.
5. Annushka hides the diamond-studded horseshoe and napkin in her bosom.

Questions and Answers: Chapter 25

Questions
1. When do the sun's rays return to Yershalaim?
2. What does Pilate's guest say "can be guaranteed in this world"?
3. What, according to Yeshua, is one of the first among human vices?
4. Where does Judas of Kiriath work?
5. When does Pilate notice the sun has set?

Answers
1. The sun's rays return to Yershalaim just as Pilate's visitor appears on the balcony.
2. Pilate's guest says "the power of great Caesar" is the only thing that "can be guaranteed in this world."
3. Yeshua says cowardice is one of the first among human vices.
4. Judas of Kiriath "works in the money-changing shop of one of his relatives."
5. Pilate notices the sun has set only after the head of the secret service has left the balcony.

Questions and Answers: Chapter 26

Questions
1. What does Niza say is the reason she decided to go out of town?
2. What does Judas see above the temple?
3. In Pilate's dream, what does he think of the execution of Yeshua?
4. Where did Matthew Levi take Yeshua's body?
5. What position does Pilate offer Matthew Levi?

Answers
1. She says she decided to go out of town because she would have been bored if Judas came to her house.
2. Judas sees "two gigantic five-branched candlesticks" blaze above the temple.
3. In Pilate's dream, he thinks the execution of Yeshua never happened, because Yeshua is walking beside Pilate and because "it would be terrible even to think" of executing Yeshua.
4. Matthew Levi took Yeshua's body with him into "a cave on the northern slope of Bald Skull."
5. Pilate offers Matthew Levi a job sorting and looking after the papyri in Pilate's library in Caesarea.

Questions and Answers: Chapters 27-28

Questions
1. How did Dr. Stravinsky cure the staff of involuntarily singing "Glorious Sea"?
2. What did the investigator conclude about Berlioz's death?
3. What does the cat do after saying it's time for him to go?
4. What does the cat say about Dostoevsky?
5. What happens when the three men shoot at Koroviev and the cat?

Answers
1. Dr. Stravinsky cured the staff of involuntarily singing "Glorious Sea" by giving them subcutaneous injections.
2. The investigator concluded Berlioz threw "himself under the tram-car while hypnotized."
3. The cat hurls his Browning through the window, knocking out both panes, then splashes some benzene, which catches fire.
4. The cat says "Dostoevsky is immortal!"
5. When the three men shoot at Koroviev and the cat, the two disappear and a pillar of fire from the primus shoots onto the tent roof.

Questions and Answers: Chapters 29-32

Questions
1. What does Matthew Levi say of the master?
2. What will the storm do?
3. What does Margarita say she likes?
4. What does Woland tell Koroviev to avoid?
5. What will Margarita do once the master falls asleep in his eternal home?

Answers
1. Matthew Levi says the master "does not deserve the light, he deserves peace."
2. The storm "will complete all that needs completing."
3. Margarita says she likes "quickness and nakedness."
4. Woland tells Koroviev to avoid inflicting any injuries.
5. Margarita says she will always be with him, watching over his sleep.

Questions and Answers: Epilogue

Questions
1. Who has died as a result of the visit of Woland and his companions?
2. What happened to black cats after Woland left?
3. What do the investigators learn about the master?
4. What has happened to Georges Bengalsky?
5. What is Ivan's new job?

Answers
1. Berlioz and Baron Meigel die because of Woland's visit.
2. Approximately one hundred black cats were killed, and about another dozen were badly disfigured.
3. The investigators fail to learn why the master was abducted, and they never learn his last name.
4. Georges Bengalsky has lost much of his gaiety and retired from his job to live on his savings; every spring during the full moon he falls into an anxious state.
5. Ivan is now a professor and a researcher at the Institute of History and Philosophy.

6. THEMES

Absurdity

The actions taken by the devil, Woland, and his associates in Moscow seem to be carried out for no reason. From the beginning, when Woland predicts the unlikely circumstances of Berlioz's beheading, to the end, when Behemoth stages a shoot-out with the entire police force, there seems to be no motivation other than sheer mischief. After a while, though, their trickery reveals a pattern of preying upon the greedy, who think they can reap benefits they have not earned. For example, when a bribe is given to the chairman of the tenants' association, Bosoi, Woland tells Korovyov to "fix it so that he doesn't come here again." Bosoi is then arrested, which punishes him for exploiting his position. Similarly, the audience that attends Woland's black magic show is delighted by a shower of money only to find out the next day that they are holding blank paper, while the women who thought they were receiving fine new clothes later find themselves in the streets in their underwear. These deceptions appear mean-spirited and pointless, but the victims in each case are blinded by their interest in material goods.

Guilt and Innocence

The story of Pontius Pilate serves to raise fundamental questions about guilt. As the Procurator of Judea, the representative of the Roman government there, Pilate is responsible for passing judgment on people the local authorities have arrested and brought before him. In Yeshua's case, he feels guilty about having to sentence Yeshua to death. Pilate's conscience is awakened during his interview with Yeshua; he shows a fascination with Yeshua's idea that all people are good, but because of his position he is not able to believe in it completely, nor is he able to forget about the idea of evil. His feelings of guilt over having sent an innocent man to death are compounded when it is reported that, at his death, Yeshua blamed no one for what happened to him and "that he regarded cowardice as one of the worst human sins." To lighten his guilt, Pilate orders the death of Judas, the man who turned Yeshua over to the authorities. However, Pilate is left eternally discontent; "there is no peace for him by moonlight and his duty is a hard one."

Good and Evil

The traditional understanding of the devil is that he is the embodiment of evil, and that any benefits one might expect from an association with him are illusory. In The Master and Margarita, the devil is portrayed slightly differently. In the story he does take advantage of the people with whom he comes into contact, offering them money and goods that later disappear; however, he does not send any souls to hell. In fact, Bulgakov's depiction of the devil has him catering to a request made by Yeshua: he leaves the world with the souls of the Master and Margarita, and in the afterlife the two are given a cottage in which they are united forever. Far more evil than the devil in this book is the literary establishment, which ruins the Master, indulges in gluttonous behavior, and aligns itself with the controlling Soviet government.
By comparison, the actions of Woland and his associates can be looked at positively, as they may actually lead people to better themselves. However, most of the victims of Satan attribute their experiences to hypnotism, putting the responsibility for their woes on the devil, not on themselves. In the case of Jesus, the novel portrays him as an obscure figure, a pawn in a political struggle. Whereas the Jesus of the Bible is a celebrated prophet, with a dozen disciples and crowds of thousands who come to hear him speak and welcome him, Yeshua has one follower, Levi Matvei, who is so mentally unstable that Yeshua himself is uneasy around him. Rather than a gospel of love, Yeshua's message is the more psychological observation that "there are no evil people on earth."

Artists and Society

Both of the true artists in this book, the Master and Ivan, end up in the mental institution under Dr. Stravinsky's care, while less talented people feast on opulent meals and listen to dance bands at Griboyedov House. The damage caused by false artists goes beyond greed and laziness: when the Master produces his novel, the established writers mock him and his book before the public has a chance to see it. This negative reaction does not harm the Master financially--he is independently wealthy from having won the lottery--but it crushes his artistic sensibilities and drives him to madness. As a result, he burns his work and wanders aimlessly in the cold. He is then admitted to the asylum. Even in his insanity, though, the Master knows himself: he realizes that he has lost his identity and that he probably could not survive outside if he escaped the asylum. He suffered so greatly for having created a work of true art that in the end, when Woland restores his burned manuscript, he is hesitant to take it: "I have no more dreams and my inspiration is dead," he says, adding that he hates the novel because "I have been through too much for it." As for Ivan, the Master, during their initial meeting, tells the writer he should write no more poetry, a request Ivan agrees to honor. Later, as the Master leaves, he calls Ivan "my protégé." By the end of the story, Ivan becomes a historian, which is the position that the Master held before his novel about Pontius Pilate dramatically changed his life.

7. STYLE

Structure

This book uses a complex version of the story-in-story structure, weaving the narrative about Pontius Pilate through the text of the story that takes place during the twentieth century in Moscow. The chapters about Pilate are continuous, following the same four-day sequence of events, and they are coherent, with the same tone of seriousness in the voice throughout the Pilate story. In one sense, their cohesion shows Bulgakov breaking the rules of narrative, because these chapters spring from the minds of different characters. Chapter two is presented as a story told by Woland to Berlioz and Ivan, chapter sixteen is Ivan's dream in the clinic, and chapters twenty-five and twenty-six are allegedly from the Master's novel. Bulgakov tells the events in all of these with one voice because doing so strengthens readers' sense of how much these characters are alike in their thinking.

Menippean Satire

Critics have noted that this book follows the tradition of Menippean satire, named after Menippus, the philosopher and Cynic who lived in Greece in the third century BC.
Cynics were a school of Greek thinkers, founded by Diogenes of Sinope, who felt that civilization was artificial and unnatural, and who therefore mocked behaviors that were considered socially "proper." Diogenes is best remembered for carrying a lantern through Athens in broad daylight looking for an "honest man," but he also is said to have pantomimed sexual acts in the streets, urinated in public, and barked at people (the word "cynic" is believed to come from the Greek word meaning "doglike"). Cynics are remembered for being distrustful of human nature and motives: even today, people use the word "cynical" to describe someone who expects the worst of people. The satires of Menippus, written in a combination of prose and verse, made fun of pretensions and intellectual charades. The elite were also ridiculed in Menippus' works, as they are in The Master and Margarita. The Roman scholar Marcus Terentius Varro, living in the first century BC, took up this style when he wrote his Saturarum Menippearum Libri CL (150 Books of Menippean Satires, c. 81-67 BC). The form has continued through the centuries, distinguished from other satires by the wide range of society it derides and the harshness with which it mocks. From the eighteenth century, Alexander Pope's ruthless Dunciad is considered a Menippean satire, as is Aldous Huxley's Brave New World, from 1932. In the 1960s and 1970s, around the time that The Master and Margarita was published, the form proved useful for the Russian writer Aleksandr Solzhenitsyn to express his outrage with the Soviet system.

Symbolism

The symbolic aspects of this novel serve both to render a clear vision of the action and to link the spirit of the different plot lines together. Of these, the most notable are the sun and the moon, which are mentioned constantly throughout, giving the sense that they are the true observers of the action. The first page of the novel, with Berlioz and Ivan at Patriarch's Ponds, begins "at the sunset hour," and goes on to introduce the devil as the sun recedes. Pontius Pilate's headache is worsened by the blazing sun, as is, the following day, Yeshua's suffering on the cross. In contrast, the Master and Ivan are both tormented by moonlight, which plays with their sanity. Traditionally, sunlight is associated with logic and rationality, while the light of the moon is often related to the subconscious. Another major symbol is the thunderstorm, which appears at the most significant places in the book. The storm that gathers while Yeshua is on the cross and breaks upon his death is notable for its ferocity, as is the storm that washes over Moscow at the end, while Woland and his associates settle their business and leave. Writers often use a thunderstorm to symbolize the release of a character's pent-up emotions. In The Master and Margarita, the storms can be seen as the crying out of whole cultures, ancient and modern, as they become aware of how diseased their social systems are. The book has numerous other events and objects that can be seen as symbolic because they refer one's thoughts to philosophical issues broader than those at hand.
Foreign currency, for instance, can be equated with non-Soviet ideas, whose value the government tries to suppress; the blood-red wine that Pilate spills does not wash away, like the sins on his soul; and the empty suit that carries on business, as well as the Theatrical Commission staff that finds itself unable to stop singing "The Song of the Volga Boatmen," all represent the mindlessness of the bureaucratic system. These are just a few of the elements that add meaning to the story if read as being symbolic as well as actual.

8. HISTORICAL CONTEXT

The Stalin Era

Bulgakov's writing career, particularly the twelve-year period between 1928 and 1940 when he worked on The Master and Margarita, was marked by Russia's transition from the monarchic empire ruled by Nicholas II, who was overthrown in the Russian Revolution in 1917, to the totalitarian Communist government that ruled the country throughout most of the twentieth century. The first post-revolutionary head of the country, Vladimir Lenin, had the practical concern of protecting the country from enemies and establishing the Soviet power base. He guided the country through the civil war of 1918 to 1921 and kept the economy mixed, partially nationalized and partially privatized. In 1922, two years before Lenin's death, Joseph Stalin rose to be the secretary general of the Communist Party, and he used this position to gain control of the Soviet Union when Lenin died. Stalin felt that the country was far behind the world's more industrialized nations--at least a hundred years behind, in fact. He put forward programs, all part of what he called his "Five Year Plan," intended to increase production quickly. One place he pushed for change was agriculture. There were about twenty-five million farms in the Soviet Union in the mid-1920s, but few produced enough food to feed anyone but the families who lived on them. Successful farmers who made a profit were called "kulaks." Stalin proposed state-run agricultural collectives, which would produce enough to feed the whole country. The kulaks resisted. In 1929, he called for the "liquidation" of the kulaks, who, in fighting to keep their farms, destroyed crops, livestock, and farming tools. Nearly one-third of Russia's cattle and half of the horses were destroyed between 1929 and 1933. Successful farmers were taken away to prisons, and soldiers were sent out across the countryside to arrest those who owned private land. In 1928, only 1.7 percent of Soviet peasants lived on collective farms, but that number grew rapidly with the military action: 4.1 percent of the peasants were on collective farms in October of 1929, a number that jumped to 21 percent just four months later and then to 58 percent three months after that. By the end of the 1930s, 99 percent of the Soviet Union's cultivated land belonged to collective farms, while millions of kulaks who had been taken from their farms labored in prison camps. Stalin's Five Year Plan also reorganized Soviet industry. The government organization Gosplan, with half a million employees, had the task of setting productivity goals for all industries and checking with factories to see if they were meeting their goals, all with the intent of raising Russia's annual growth rate by 50 percent. Factory managers and workers who were seen as holding back progress, even for safety or economic reasons, were arrested and sent off to labor camps.
Fearing punishment, many workers stayed at their jobs twelve and fourteen hours a day, while factories with no hope of reaching their assigned production levels took the risk of falsifying paperwork. From 1928 to 1937 Russian steel production rose from 4 to 17.7 million tons; electricity output rose 700 percent; tractor production rose 40,000 percent. The country's national income rose from 24.4 billion rubles to 96.3 billion. The price, of course, was freedom, and readers of Bulgakov can see the dangers of living in a closed, controlling society with limited resources.

The Brezhnev Years

Tension between the Soviet Union and the United States was at its greatest between the mid-1940s and the mid-1960s. At the time, these were the world's two leading "superpower" nations, and they competed against each other for technological superiority in the race to put humans on the moon and for military superiority in the buildup of nuclear arms. In 1964, Nikita Khrushchev, the Soviet leader most identified with the Cold War, was forced from power in a coup d'état and replaced by the duo of Leonid Brezhnev as the Communist Party leader and Aleksei Kosygin as Soviet Premier. The early part of their rule, from 1964 to 1970, was a period of reform and stabilization. Brezhnev had risen up through the ranks of the Communist Party and was not interested in changing the social system, just in making the system function more smoothly within the structure set by the Soviet governing body, the Politburo. The year 1967, when The Master and Margarita was finally published, was a time of youth rebellion in the United States, but the same spirit of rebelliousness pervaded other countries across the world as well. One of the most notable challenges to Communist rule came in Czechoslovakia, where, during the "Prague Spring" of 1968, a reform movement loosened the Communist Party's grip on the country. Because Czechoslovakia was an ally of the Soviet Union, Brezhnev sent Soviet-led troops across the border into Czechoslovakia to crush the reform movement and keep control of the country for the Communists. It was a turning point in Soviet history, showing the world that the Soviet Union would go to great lengths to defend Communism. The fact that Bulgakov's book was finally published after nearly thirty years should not be taken as an indicator that the government was relaxing its policies toward artistic works judged to be critical of the political system. Writers were regularly arrested for spreading "anti-Soviet propaganda" if their work showed any flaws in the system, and convicted writers were sent to work in forced labor camps or to languish in mental asylums for "paranoid schizophrenia." Only a writer who managed to sneak his works out of the country and reach an international audience could avoid harsh punishment from the government, which had its reputation within the international community to protect. This happened with Aleksandr Solzhenitsyn, who won the Nobel Prize for Literature in 1970 and was expelled from the country in 1974.

9. CRITICAL OVERVIEW

Bulgakov was reviewed with respect during his lifetime, although it was not until the world saw The Master and Margarita, published almost thirty years after his death, that he came to be generally recognized as one of the great talents of the twentieth century.
During his lifetime, his literary reputation stood mostly on the quality of the plays that he wrote for the Moscow theater, and, because of the totalitarian nature of Soviet politics, critics were at least as concerned with the plays' political content as with their artistic merit. In the years after his death, Bulgakov's reputation grew slowly. Writing about Bulgakov's novel The White Guard in 1935's Soviet Russian Literature, Gleb Struve was unimpressed, noting, "As a literary work it is not of any great outstanding significance. It is a typical realistic novel written in simple language, without any stylistic or compositional refinements." Struve went on in his review to express a preference for Bulgakov's short stories, which were unrealistic and fanciful. In 1968, when The Master and Margarita was released in the West, Struve was still an active critic of Soviet literature. His review of the book in The Russian Review predicted the attention that it would soon obtain, but Struve did not think that it was worth that attention, mainly because of the story line with Margarita and the Master, which he felt "somehow does not come off." True to his prediction, though, critics welcomed the novel with glowing praise when it was published. Writing in The Nation, Donald Fanger predicted that "Bulgakov's brilliant and moving extravaganza ... may well be one of the major novels of the Russian Twentieth Century." He placed Bulgakov in the company of such literary giants as Samuel Beckett, Vladimir Nabokov, William Burroughs, and Norman Mailer. Many critics have focused their attention on the meaning of The Master and Margarita. D.G.B. Piper examined the book in a 1971 article for the Forum for Modern Language Studies, giving a thorough explanation of the ways that death and murder wind through the story, tying it together and illuminating the differences between "the here-and-now and the ever-after." In 1972 Pierre S. Hart interpreted the book in Modern Fiction Studies as a commentary on the creative process: "Placed in the context of the obvious satire on life in the early Soviet state," he wrote, "it gains added significance as a definition of the artist's situation in that system." While other writers saw the book as centering on the moral dilemma of Pilate or the enduring love of the Master and Margarita, Hart placed all of the book's events in relation to Soviet Russia's treatment of artists. Edythe C. Haber, in The Russian Review, had yet another perspective on it in 1975, comparing the devil of Goethe's Faust with the devil as he is portrayed by Bulgakov. That same year, Vladimir Lakshin, writing for Twentieth-Century Russian Literary Criticism, expressed awe at Bulgakov's ability to render scenes with vivid details, explaining that this skill on the author's part was what made it possible for the book to combine so many contrasting elements. "The fact that the author freely blends the unblendable--history and feuilleton, lyricism and myth, everyday life and fantasy--makes it difficult to define his book's genre," Lakshin wrote, going on to explain that, somehow, it all works together. In the years since the Soviet Union was dismantled, the potency of The Master and Margarita's glimpse into life in a totalitarian state has diminished somewhat, but the book's mythic overtones are as strong as ever, making it a piece of literature that is every bit as important as it was when it was new, if not more so.

10. CHARACTER ANALYSIS

1. Master
2. Ivan Nikolayich Ponyryov
3. Other Characters
Master

The Master is an author who has written a book about Pontius Pilate. "I no longer have a name," he tells Ivan when they meet at the mental hospital, where they both are incarcerated. While there, the Master explains his past to the poet. He once was an historian (the same profession that Ivan settles into at the end of the book), but when he won a large sum in the lottery, he quit his job to work on his book. One day he met Margarita, with whom he fell hopelessly in love. When she took the novel around to publishers, it came back rejected, and then, even though it was unpublished, the reviewers attacked it in the newspapers. In a fit of insanity, imagining that an octopus was trying to drown him with its ink, the Master burned his book. He gave what was left of his savings to Margarita for safekeeping, but he was soon arrested and put in the asylum, and he never saw her again. In the mental hospital, the Master has a stolen set of keys that would allow him to escape, but he has nowhere to go. Margarita's reward for helping with the devil's ball is her reunion with the Master. Woland arranges for them to return to the Master's old apartment, for his bank account to be restored, for him to receive identification papers and, miraculously, for the burned novel to return to its original condition. In the end, at the request of Jesus, Woland takes the Master with him when he leaves the world; Jesus cannot take him because "He has not earned light, he has earned peace." Margarita joins him, of course, and they are never separated again.

Ivan Nikolayich Ponyryov

Ivan Nikolayich Ponyryov is a twenty-three-year-old poet who writes under the pen name Bezdomny, which means "homeless" in Russian. This character is present in the first chapter of the novel and the last, as well as appearing intermittently throughout the story. When the novel begins, Ivan is meeting with Berlioz, a magazine editor, at Patriarch's Ponds. They are discussing the historical existence of Jesus when Woland, who is the devil, interrupts their conversation and tells them the story of the crucifixion as he witnessed it. He goes on to foretell the bizarre circumstances of Berlioz's death. When Berlioz dies in exactly this way a few minutes later, Ivan chases Woland and his accomplices across town, bursting through apartments and diving into the river. When he ends up at the headquarters of the writers' organization in his underwear, Ivan is arrested and sent to the mental ward. At the asylum, the Master is in a neighboring room; he is able to visit Ivan at night because he has stolen a set of keys that open the doors on their floor of the hospital. The Master explains that Ivan actually did encounter the devil, and he goes on to recount his own life story to the poet. Before he is released from the clinic, Ivan decides to stop writing poetry. By the end of the story, years after the events that make up the bulk of the book, Ivan has become an historian, but he continues to be plagued by strange visions every time the moon is full.

Other Characters

Azazello

Azazello is the harshest and most sinister member of Woland's band, the one who will physically attack an opponent rather than simply play tricks. He is a short, broad-shouldered, disfigured man with a bowler hat and red hair. His face is described as being "like a crash," and a fang protrudes from his mouth.
It is Azazello who is sent to recruit Margarita to host the devil's ball, although he is not comfortable with this responsibility: he is awkward around women and thinks that one of the other servants, who has more charm, should have been sent to talk to her. He gives Margarita the cream that she rubs onto her body to become a witch. His true character, revealed in the parting scene, is that of "the demon of the waterless desert."

Behemoth

Behemoth, one of the novel's most memorable figures, is a huge black cat who walks on his hind legs and has many humanlike qualities: he pays for his trolley fare, drinks brandy from glasses, fires guns, and more. At the black magic show at the Variety Theater, it is Behemoth who twists the head off of the master of ceremonies. When the apartment at 302B Sadovaya Street is raided by police, Behemoth takes a gun and stages a shootout with them; it is later determined that, even after the firing of hundreds of bullets, nobody on either side was injured. Behemoth then burns the apartment with kerosene and does the same to Griboyedov House, the headquarters of MASSOLIT. In the end he is revealed not to be a cat at all, but "a slim youth, a page demon, the greatest jester there had ever been."

Mikhail Alexandrovich Berlioz

Mikhail Alexandrovich Berlioz is the editor of one of Moscow's most fashionable literary magazines and a member of the management committee of MASSOLIT, the most prominent literary association in Moscow. The novel opens with Berlioz in the park discussing the historical evidence of Jesus Christ with Ivan. Woland interrupts with his own story about Pontius Pilate, and minutes later he prophesies that Berlioz will not make it to the meeting to which he is going; instead, he will have his head cut off by a woman. Leaving the park, Berlioz slips and falls under a trolley car, driven by a woman, and the wheels cut his head off. Later, during the devil's ball, his head is brought in on a platter, still alive and aware.

Bezdomny

See Ivan Nikolayich Ponyryov.

Nikanor Ivanovich Bosoi

Nikanor Ivanovich Bosoi, whose surname means "barefooted," is the chairman of the tenants' association of 302B Sadovaya Street, the building where Berlioz and Likhodeyev shared an apartment. After signing a one-week lease with Woland, Bosoi accepts a bribe, takes it home, and hides it in an air duct in his apartment. Woland calls the police to report the bribery, and Bosoi is arrested.

Fagot

See Korovyov.

Yeshua Ha-Notsri

In the version of the crucifixion told by Woland and the Master in this book, Jesus has a different name; he is known as Yeshua Ha-Notsri. Yeshua is presented as a simple man, neither braver nor more intelligent than most, but more moral. Like the Jesus of Biblical tradition, he fascinates Pilate with the meek humanity of his ideas, but unlike the Jesus of the Bible he does not display a sense of security about the overall rightness of his death. The most striking aspect of Yeshua's conversation is that he believes in the goodness of all humans, even those who are cruelly persecuting him: "There are no evil people on earth," he tells Pilate.

Nikolai Ivanovich

Nikolai Ivanovich is a neighbor of Margarita's who also rubs on himself the special cream that turned Margarita into a witch. Instead of taking on witchlike qualities, he is turned into a hog.

Jesus

See Yeshua Ha-Notsri.

Homeless

See Ivan Nikolayich Ponyryov.

Korovyov

Korovyov is one of Woland's associates, who identifies himself as Woland's interpreter.
He first appears at Patriarch's Ponds, near the place where Berlioz dies. He is described as a lanky man wearing pince-nez glasses, a jockey cap, and a plaid suit. It is Korovyov who gives a bribe to Bosoi, and then calls the authorities to report him. As Woland and his entourage prepare to leave Moscow, it is revealed that Korovyov is not the buffoon he has presented himself as, but a knight who once made "an ill-timed joke" and has been sentenced to serve Woland in this form because of it.

Stepan Bogdanovich Likhodeyev

The manager of the Variety Theater, Stepan Bogdanovich Likhodeyev, wakes up one morning after a night of drinking and finds that he has signed Woland to a week-long engagement at the theater.

Margarita Nikolayevna

Margarita Nikolayevna is the mistress of the writer known as the Master. In the past, when he was distraught about the novel, she comforted and nursed him. He gave all of his money to her for safekeeping, but then was arrested and taken away to the asylum. At a certain point, Margarita is asked to be the hostess of the devil's ball. Once she has a taste of witchcraft--invisibility and the ability to fly--she is glad to perform this duty. In return for her help, Woland offers to grant Margarita a wish. She wishes to be reunited with her beloved Master.

Levi Matvei

Unlike the Jesus of the New Testament, Yeshua has only one disciple in this story. Levi Matvei follows the philosophical vagabond Yeshua Ha-Notsri around, writing down what he says, usually without much accuracy. "This man follows me everywhere with nothing but his goatskin parchment and writes incessantly," Yeshua explains to Pilate. "But once I caught a glimpse of that parchment and I was horrified. I had not said a word of what was written there. I begged him, 'Please burn this parchment of yours!' But he tore it out of my hands and ran away." Levi is the one who later brings Woland the message that Yeshua would like to give the Master "peace."

Natasha

Natasha is Margarita's maid, who witnesses Margarita's transformation into a witch after she rubs the special cream over her body. Natasha then rubs the cream on herself and turns into a witch as well.

Pontius Pilate

Pontius Pilate is presented as a tormented figure in this novel. He is in Jerusalem during the Passover holiday and is forced to pass a death sentence on a man he thinks is a tramp and a fool, but not dangerous. After the crucifixion, Pilate assigns soldiers to guard the man who betrayed Jesus, fearing that religious followers might try to take revenge on him; in this novel, however, there is no evidence that Yeshua actually has followers. Later, Pilate reveals that he himself had the traitor murdered. Throughout the story there is evidence that Pilate has become fascinated with Jesus from his brief encounter with him, and at the end of the book, Pilate is united with Yeshua.

Grigory Danilovich Rimsky

The treasurer of the theater, Grigory Danilovich Rimsky, is visited by the ghost of Varenukha on the night of Woland's performance, but manages to escape to the train station.

Professor Woland

Woland is frequently referred to in the book as a foreigner. He is mischievous and cunning, but also noble and generous. The contradictions in his personality show in his looks: "his left eye was completely mad, his right eye black, expressionless and dead."
He claims to have been present when Pontius Pilate sentenced Jesus, and he can foretell the future, but people rationalize his supernatural powers as illusions or else, like Ivan and the Master, end up in the psychiatric ward. Woland and his associates wreak havoc in Moscow. They put on a show of black magic at the Variety Theater, at which gorgeous new clothes are given to all of the ladies and money falls from the ceiling: soon after, the women are found walking the streets in their underwear, and the money that looked authentic proves to be meaningless paper. At the devil's ball, Woland drops his disguise as a visiting professor and reveals his true identity as the devil. On the day after the ball he and his associates ride off to the netherworld on thundering black stallions.

11. ESSAYS AND CRITICISM

1. The Nature and Politics of Writing
2. The Master and Margarita
3. Rehabilitated Experimentalist

The Nature and Politics of Writing

Mikhail Bulgakov's The Master and Margarita is a novel about novels--an argument for the ability of literature to transcend both time and oppression, and for the heroic nature of the writer's struggle to create that literature. The story's hero, the Master, is an iconographic representation of such writers. Despite rejection, mockery, and self-censorship, he creates a fictional world so powerful that it has the ability to invade and restructure the reality of those who surround him. Indeed, it has a life beyond authorial control. Despite his attempts to burn it, the story of Pontius Pilate refuses to die. As Woland remarks, "Manuscripts don't burn." This transcendence of message over physical form--the eternal power of narrative over the mundane reality of flammable paper--is in itself an idea that "escapes" from Bulgakov's novel, becoming a commentary on his contemporary Soviet society and the role of authors like Bulgakov within it. Readers first meet the Master in Dr. Stravinsky's mental hospital, as he says when asked about his identity, "I am a master ... I no longer have a name. I have renounced it, as I have renounced life itself." His identity subsumed into his role as Great Author, the Master's symbolic status is signposted from his first appearance. Both the details of his creative process and the story he has created are presented throughout Bulgakov's novel as powerful, almost occult forces that are greater than material reality, just as the infernal visitors are greater than the rationalist society upon which they wreak havoc. The multiple narrative strands of the novel--the Master and Margarita's story of creation, the story-within-a-story of the Master's novel, the dry world of state-controlled literature exemplified by MASSOLIT (a literary club in Moscow), and the ruleless world of the satanic gang--function both individually and in their entirety as a commentary on the nature and power of narrative. As the hero explains to his fellow inmate, it was the creation of his novel that caused his transcendence to the status of Master--the act of writing forcing a kind of personal transformation upon him. He and his lover, Margarita, were completely consumed in one another and in his work-in-progress--the two consummations fed into and from one another. The novel enabled their romance at the same time as their romance enabled the novel--it is Margarita who oversees its creation and bestows the name "Master" upon its author, and it is she who keeps faith in it when the publishing world rejects it.
When the Master burns his manuscript, throwing it in the wood stove, he is attempting to reverse the alchemical process of creation. The unclean text must be transformed into ashes in the "purifying" flames, just as he was transformed into An Author by the purifying act of creating it. In his story we can see a metaphorical version of the struggles of all authors, an extended meditation on the nature of being an author. The completion of the novel is the culmination of everything he was working toward and the expression of his personality, an "alternative self" in which his dreams reside. His rejection of writing thus becomes a rejection of his own mind, an act that is literalized by his self-committal to the asylum: he has literally "lost his mind." When Woland returns the manuscript to him, the Master rejects it, saying, "I hate that novel." Woland's reply encapsulates the crippling effects of such self-censorship. As he asks, "How will you be able to write now? Where are your dreams, your inspiration?" The Master replies, "I have no more dreams and my inspiration is dead ... I'm finished." Of course, by the end of the novel, the Master has re-embraced his story, completing the final line as he flies off to his eternal cottage with Margarita. This pattern of creative struggle, rejection, self-doubt, and transcendence represents a simultaneous exploration and rejection of glorification through pain. It is creation, not rejection, that turns a simple author into a Master. In just the same way, the Master's version of the crucifixion stresses joy over suffering. It is forgiveness that allows Pontius Pilate to ascend to Heaven, not a prescribed period of torment, just as it is Margarita's compassion that frees Frieda the infanticide from the eternal cycle of suffering. Both the literal Purgatory of Catholic theology and the metaphoric purgatory of authorial trial are rejected in favor of Grace and acceptance. This rejection of suffering-as-purity acts as a nuanced critique of literary life in Soviet culture. The writers of "acceptable" literature--the members of MASSOLIT--are forgettable idiots not worthy of serious critique. The authorial voice, represented by the all-powerful satanic gang, dismisses them with a capricious amusement exemplified by the fate of Berlioz, who simply has his head cut off to shut him up. Similarly, the proprietors of the Variety Theater are subjected to various Byzantine tortures befitting their production of terrible art. In this way, The Master and Margarita presents not so much an indictment of Socialist Realism as a disgusted mockery of it. Instead, the more serious and sensitive exploration is reserved for "real" authors, those who are outside state approval and whose work is marginalized and banned. Again, the Master is used to exemplify such authors. Subjected first to dismissal and then to active persecution, he gradually embraces the logic of MASSOLIT and burns his own book. As Woland says, "They have almost broken him," and they have done so by causing him to break himself. When this is taken into account, the rejection of suffering as a creative aesthetic must be read as a powerful call to an artistic community under siege rather than to the forces besieging it. The story of the Master's suffering acts as a parable warning of the dangers inherent in heroizing struggle. To Bulgakov and his contemporaries, heroizing struggle was an attractive option, very difficult to resist.
Soviet writers of the Stalinist period were subjected to extreme levels of censorship and faced a choice between living in fear, writing what they were told to write, or never attempting to get published. In The Master and Margarita, Bulgakov creates an artistic world that acknowledges these conditions and negotiates a different intellectual and philosophical approach to them. The danger of accepting that struggle purifies is presented through the fate of the Master. Struggling does not purify him--rather, it represents an acceptance of the forces ranged against him, a voluntary erasure of self that serves the purposes of the state. When he embraces the power of his narrative, he embraces a form of resistance, one which says that joy, creation, and the telling of stories must be ends in themselves, since--like the Master's novel--they may well be truly finished only after the death of the author. As Bulgakov bitterly said of his own work: "I have heard again and again suspiciously unctuous voices assuring me, 'No matter, after your death everything will be published.'" The most difficult task facing the Master, Bulgakov, and Soviet writers in general is to accept that fact while refusing to consign themselves to purgatory. The power of narrative to create belief, and the concurrent power of belief to restructure reality, is a major thematic aspect of the novel. This works in a multilayered way, with many versions of narrative playing against each other and providing commentaries on one another. In the most obvious, structural instance, the novel-within-a-novel motif allows Bulgakov to comment on the role of literature in the life of the society and author that produce it. A common device in Russian literary history, the book within a book appears in such works as Pushkin's Eugene Onegin and Zamiatin's We. Bulgakov's innovation is the relationship between the two books. Though the story of Pontius Pilate is indeed a story within a story, and though it is indeed the Master's novel, discrete boundaries between the two texts are constantly blurred until it is no longer clear which story is taking place within which. Only once is an excerpt from the Master's novel presented as an excerpt--when Margarita sits down to read the charred fragment. The rest of the time, Pilate's story comes from the minds and mouths of others--from Woland at Patriarch's Ponds and from the dreams of Homeless at the asylum. It becomes just as real as the story that seems to contain it, a parallel reality that reaches into contemporary Moscow and reshapes it according to its needs. Everything in the Moscow reality comes to revolve around the Yershalaim reality that the Master's book set in motion, culminating in a scene in which it becomes apparent that the Master now exists, like Bulgakov's frame narrative, to resolve the painful reality of Pontius Pilate's story. What started as an author's attempt to achieve--to transform his life by the creation of literature--has been entirely reversed. The author exists in the service of literature, and not the other way around. The role of literature within the culture that produces it is similarly configured: it literally has the power to change the past, present, and future. The interaction of the Yershalaim and Moscow realities complicates the relationship of cause and effect through the manipulation of chronology, and in doing so suggests that art transcends time.
Chapter twenty-six of The Master and Margarita marks the end of the Master's story of Pilate, but in chapter thirty-two Pilate himself reappears, this time within the Moscow narrative. Woland tells the Master: "We have read your novel, and we can only say that unfortunately it is not finished. I would like to show you your hero. He has been sitting here for nearly two thousand years ... He is saying that there is no peace for him ... He claims he had more to say to [Ha-Notsri] on that distant fourteenth day of Nisan." The meeting of the Master and his hero Pilate in the "eternal now" of the afterlife completes the link between past and present. The two concurrent story lines finally intersect physically, after they have touched upon each other throughout the novel. The Master frees Pilate from his eternal torment, and is himself granted peace by one of his own creations--his version of Levi Matvei, who arrives as Yeshua's messenger to Woland. Narrative, this would seem to suggest, is so powerful that it is not only indestructible, but also the very means by which reality is constructed. In this way, Pilate is paradoxically "created" millennia before his creator, the Master, was even born. When the Master wrote about Pilate, he effectively changed the past, and his characters gained the ability to walk into his present and change his life and the life of his society. In an extended chronological and narrative game, Bulgakov suggests that it is what we read that makes us believe, and what we believe that makes us who we are. Woland and his followers wreak havoc on Moscow by dropping millions of rubles into the audience of the Variety Theater, rubles that turn into foreign bills, soda bottles, and insects, infesting the economy with a supply of worthless money. As Bulgakov makes clear, money--no less than fiction and religion--is dependent on faith, on the willingness to believe that objects of material culture are greater than the sum of their parts. When that belief is lost, reality becomes a set of meaningless, valueless artifacts of no use to anyone. In the final analysis, The Master and Margarita represents an absolute rejection of "reality" as it is understood by Soviet materialist culture. Instead, the novel says, fiction is reality and reality is fiction. Everything is dependent on stories.

Source: Tabitha McIntosh-Byrd, in an essay for Novels for Students, Gale, 2000. McIntosh-Byrd is a doctoral candidate at the University of Pennsylvania.

The Master and Margarita

The Master and Margarita was essentially completed in 1940, but its origin goes back to 1928, when Bulgakov wrote a satirical tale about the devil visiting Moscow. Like his literary hero, Gogol (and like the Master in his own novel), Bulgakov destroyed this manuscript in 1930 but returned to the idea in 1934, adding his heroine, Margarita, based on the figure of his third wife, Elena Sergeevna Shilovskaia. The novel went through a number of different versions until, aware that he had only a short time to live, he put other works aside in order to complete it, dictating the final changes on his deathbed after he had become blind. It remained unpublished until 1966-67, when it appeared in a censored version in the literary journal Moskva, immediately creating a sensation. It has since been published in its entirety, although the restored passages, while numerous, add comparatively little to the overall impact of the novel. It has been translated into many other languages.
(In English, the Glenny translation is the more complete, while the Ginsburg translation is taken from the original Moskva version.) The novel's form is unusual, with the hero, the Master, appearing only towards the end of the first part, and Margarita not until Part Two. It combines three different if carefully related stories: the arrival of the devil (Woland) and his companions in contemporary Moscow, where they create havoc; Margarita's attempt, with Woland's assistance, to be reunited with her love after his imprisonment and confinement in a psychiatric hospital; and an imaginative account of the passion of Christ (given the Hebrew name of Yeshua Ha-Nozri) from his interrogation by Pontius Pilate to his crucifixion. Differing considerably from the gospels, the latter consists of four chapters which may be regarded as a novel within a novel: written by the Master, related by Woland, and dreamed by a young poet (Ivan Bezdomnyi, or 'Homeless') on the basis of 'true' events. Correspondingly, the action takes place on three different levels, each with a distinct narrative voice: that of ancient Jerusalem, of Moscow of the 1930s (during the same four days in Holy Week), and of the 'fantastic' realm beyond time. The book is usually considered closest in genre to Menippean satire. Despite its complexity, the novel is highly entertaining, very funny in places, and with the mystery appeal of a detective story. In the former Soviet Union, as well as in the countries of Eastern Europe, it was appreciated first of all for its satire on the absurdities of everyday life: involving Communist ideology, the bureaucracy, the police, consumer goods, the housing crisis, various forms of illegal activity and, above all, the literary and artistic community. At the same time it is obviously a very serious work, by the end of which one feels a need for more detailed interpretation: what, in short, is it all about? The problem is compounded by the fact that the book is full of pure fantasy and traditional symbols (features associated with devil-lore, for example), so that the reader is uncertain what is important for elucidating the meaning. Leitmotifs (such as sun and moon, light and darkness, and many others) connect the three levels, implying the ultimate unity of all existence. Soviet critics tended to dwell initially on the relatively innocuous theme of justice: enforced by Woland during his sojourn in Moscow, while Margarita tempers this with mercy in her plea to release a sinner from torment. Human greed, cowardice, and the redemptive power of love are other readily distinguishable themes. More fundamental ones are summed up in three key statements: 'Jesus existed' (the importance of a spiritual understanding of life, as opposed to practical considerations in a materialistic world that denied Christ's very existence); 'Manuscripts don't burn' (a belief in the enduring nature of art); and 'Everything will turn out right. That's what the world is built on' (an extraordinary metaphysical optimism for a writer whose life was characterized by recurring disappointment). There is indeed a strong element of wish-fulfilment in the book, where characters are punished or rewarded according to what they are seen to have deserved. Thus the novel's heroes, the Master and Margarita, are ultimately rescued, through the agency of Woland, in the world beyond time. They are, however, granted 'peace' rather than 'light', from which they are specifically excluded: a puzzle to many critics.
Here, on a deeper philosophical level, there is an undoubted influence of gnosticism with its contrasting polarities of good and evil--which, as I have argued elsewhere, are reconciled in eternity, where 'peace' represents a higher state than the corresponding polarities of light and darkness. Another influence is the Faust story, with Margarita (a far more dynamic figure than either the Master or Goethe's Gretchen) partly taking over Faust's traditional role, in that she is the one to make the pact with Woland, rejoicing in her role as witch. A major scene is 'Satan's Great Ball', a fictional representation of the Walpurgisnacht or Black Mass. Bulgakov, however, reinterprets his sources--Faust, traditional demonology, the Bible, and many others--in his own way, creating an original and entertaining story which is not exhausted by interpretation. His devil is helpful to those who deserve it and is shown as necessary to God's purposes, to which he is not opposed. Bulgakov's Christ figure, a lonely 'philosopher', has only one disciple (Matthew Levi), although eventually Pontius Pilate, 'released' by Margarita from his torments after 2,000 years, is allowed to follow him as well. Woland too has his disciples: Azazello, Koroviev, and a huge, comical tomcat called Behemoth. So has the Master, with Ivan Bezdomnyi. Like Faust, the Master is the creative artist, 'rivalling' God with the devil's help; like Yeshua he is profoundly aware of the spiritual plane, but is afraid, cowed by life's circumstances. Endlessly fascinating, the novel indeed deserves to be considered one of the major works of 20th-century world literature.

Source: A. Colin Wright, "The Master and Margarita," in Reference Guide to World Literature, second edition, edited by Lesley Henderson, St. James Press, 1995.

Rehabilitated Experimentalist

Bulgakov's brilliant and moving extravaganza [The Master and Margarita] may well be one of the major novels of the Russian 20th century ... For the Western reader, the novelty of Bulgakov's genre can only be relative after Joyce and Beckett, Nabokov, Burroughs and Mailer; yet the novelty of his achievement is absolute--comparable perhaps most readily to that of Fellini's recent work in the cinema ... [This] is a city novel, the enormous cast of characters (largely literary and theatrical types) being united by consternation at the invasion of Moscow by the devil--who poses as a professor of black magic named Woland--and his three assistants, one of whom is a giant talking cat, a tireless prankster and expert pistol shot ... On its satirical level, the book treats the traditional Russian theme of vulgarity by laughing at it until the laughter itself becomes fatiguing, ambivalent and grotesque. But there is more: thematically, the novel is put together like a set of Chinese boxes. A third of the way through, in a mental hospital, the hack poet Ivan Bezdomny meets the Master, whose mysterious presence adds a new dimension to the narrative--the dimension in which art, love and religion have their being. Ivan has been taken, protesting, to the hospital; the Master, significantly, has voluntarily committed himself, rejecting the world. He is a middle-aged historian turned novelist who, after winning 100,000 rubles in the state lottery, devotes himself, an egoless Zhivago, to the twin miracles of love and art.
Aided by the beautiful Margarita, whom he has met by chance in the street, he writes a novel about Pontius Pilate--which she declares to be her life--only to become the object of vicious critical attack in the press and, in a fit of depression, burn the precious manuscript ... What, then, becomes of the manuscript? The answer is the key to Bulgakov's work. Echoes of Gogol, Goethe, Dostoevsky, Hoffmann and a dozen others are not hard to find, but they are internal allusions; to account for the form of the book--and its formal significance within Soviet literature--one must mention Pirandello, and the Gide of The Counterfeiters. Bulgakov's characters, in the common Russian phrase, are out of different operas. The story of the disruption of Moscow by Woland and company is opéra bouffe; the story of the Master and Margarita is lyrical opera. But there is a third and epical opera, richly staged and in a style that contrasts sharply with the styles of the other two. The setting is Jerusalem, the main subject Pontius Pilate, the main action the crucifixion of Christ. This narrative is threaded through the whole of the book, in a series of special chapters ... By merging [the question of what happened to the Master's novel] with Woland's account and Ivan's dream, Bulgakov seems to be suggesting that truth subsists, timeless and intact, available to men with sufficient intuition and freedom from conventional perception. The artist's uniqueness in particular lies in his ability to accept miracle--and this ability leads him, paradoxically, to a truth devoid of miracle, a purely human truth. I am simplifying what I take to be implicit, though complex and unclear, in Bulgakov's book, but there is a clue, easily overlooked, that would seem to support this interpretation. When the Master first appears to tell Ivan his story, Margarita is waiting impatiently for the promised final words about the fifth Procurator of Judea, reading out in a loud singsong random sentences that pleased her and saying that the novel was her life. Now, Bulgakov's own novel ends precisely with the phrase about the cruel Procurator of Judea, fifth in that office, the knight Pontius Pilate. Is the novel we read, then, to be identified with the Master's? The answer is clearly (but not simply) yes. The perspectives turn out to be reversible. Bulgakov's novel had appeared to include a piece at least of the Master's; now at the end it appears that the Master's novel has enlarged to include Bulgakov's. The baffling correspondences, in any event, make the case for mystery, and the heart of mystery is transfiguration--quod erat demonstrandum. Margarita's faith in the Master's art is thus justified in ways which she could not have anticipated--and becomes a symbol of Bulgakov's similar faith in his own work. The Master's novel is Margarita's life in one sense as Bulgakov's novel is in another ... [The Master and Margarita] is a plea for spiritual life without dogmatic theology, for individual integrity based on an awareness of the irreducible mystery of human life. It bespeaks sympathy for the inevitably lonely and misunderstood artist; it opposes to Philistinism not good citizenship but renunciation.

Source: Donald Fanger, "Rehabilitated Experimentalist," in The Nation, January 22, 1968, pp. 117-18.

12. SUGGESTED ESSAY TOPICS

Chapter 1

1. Describe the Professor's conversation with Berlioz and Ivan. In what ways does he dispute the notion that God does not exist?
How do his prophecy and his specialty in black magic influence the impact of his claim that Jesus did exist?

2. What oddities does Berlioz encounter by the Patriarch's Ponds, aside from the Professor's appearance? How do these oddities provide a backdrop for his and Ivan's encounter with the Professor?

Chapter 2

1. Describe the ways in which this chapter illustrates Pilate's weariness. How does his weariness contrast with Yeshua's character?

2. How does the story of Pilate and Yeshua told here differ from and elaborate on the Gospel accounts of the relationship of Pilate and Jesus?

Chapters 3-4

1. How might Berlioz's death serve as a seventh proof of God's existence?

2. Discuss Ivan's chase after the professor, the choirmaster, and the cat. How does the chase further the sense of surreality and artifice created by earlier chapters?

Chapters 5-6

1. Compare Ivan's belief that unclean powers have caused Berlioz's demise with the disbelieving reaction of the doctor and the people at Griboedov's to his testimony.

2. Analyze the character of Riukhin. Why does he curse the statue along the boulevard? What is the meaning of his confession that his poems are lies and his recognition that his life is miserable?

Chapters 7-8

1. Analyze the history and current events at apartment 50. What could explain the peculiar events at the apartment?

2. Describe Ivan's mental and emotional condition in chapter eight. Why does no one believe Ivan's stories about Pilate and the death of Berlioz?

Chapters 9-10

1. Discuss Bosoy's crime of speculating in foreign currency. What do his acceptance of the bribe and his arrest indicate about corruption in Communist Moscow?

2. Analyze the attack on Varenukha. In light of earlier events, what does it reveal about Woland and his retinue? Why might Varenukha be attacked?

Chapters 11-12

1. What does the title of chapter 11 mean? How might Ivan be said to have split in two?

2. How does Woland's magic show bring out the dark, material desires of Moscow's citizens and bring them to fruition? What are Woland's reasons for fulfilling these desires?

Chapter 13

1. How does Ivan and the master's shared knowledge of the Pilate story change the credibility and objectivity of that story? Has the story become more convincing as a result?

2. What are the possible reasons for Ivan's refusal to believe he met Satan at the Patriarch's Ponds? Given the events in the novel thus far, is Ivan's disbelief well-founded?

Chapters 14-15

1. Discuss the importance of the scene at Rimsky's desk. What accounts for the change in Varenukha? How does the cock's crowing recall the Gospel story of a cock crowing three times?

2. How does Bosoy's dream, with its emphasis on exposing people who hide illegal currency, fit in with the novel's theme of artifice and secrecy? Similarly, how do the theatrics of his dream compare with the theatrics of Woland's magic show?

Chapters 16-17

1. Compare the surreal feeling of Bosoy's dream and the events of chapter 17 to Ivan's dream, with its realistic depiction of the execution at Bald Mountain. How does this comparison highlight the unreality of Communist Moscow and argue for the truth of earlier claims that God and the Devil exist?

2. Discuss Matthew Levi's relationship to Yeshua and his actions at the execution, including his rage at God. What has inspired Matthew Levi to come to the execution and cut down the three bodies?

Chapter 18
1. Compare the prophecy that Andrei Fokich Sokov will die of liver cancer, and his response to that prophecy, with the prophecy of Berlioz's death and Berlioz's response.

2. Discuss the difficulty Poplavsky has in seeking to occupy Berlioz's apartment. Why do Azazello and the cat treat him so roughly, and how does their treatment of him contrast with Koroviev's?

Chapters 19-20

1. Describe Margarita's character as it is revealed by her response to Azazello. Why is she so willing to confront him?

2. Margarita's dream is one of a series of dreams thus far in the novel. Compare her dream to the earlier ones. Why does she respond so optimistically to her dream?

Chapters 21-22

1. Analyze the globe Woland shows Margarita. What is the significance of the globe and the events shown on it?

2. How does Margarita's flight reprise the novel's theme of the mutability of space and time?

Chapter 23

1. Describe Margarita's performance as the hostess of Satan's ball. What enables her to withstand the pressures involved in serving as hostess?

2. Discuss the purpose of Satan's ball. Why do the condemned souls emerge once each year? Describe how the décor and atmosphere of the ball contrast with its guests.

Chapter 24

1. Woland grants the master and Margarita several wishes. This generosity conflicts with his earlier cruelties. What has inspired his generosity, and what do the couple do with their wishes?

2. Examine the conversation between Margarita, Woland, and his retinue before the master appears. What does it reveal about their personalities?

Chapter 25

1. The appearance of the huge dark cloud at the start of the chapter and the sun's emergence as Aphranius goes to Pilate exemplify the role weather has played throughout the novel in setting scenes and highlighting plot movement. Examine this role and the symbolic presence of weather in the novel thus far.

2. Judas' love of money is one example of the relationship between money, luxury, and morality explored by the novel. Compare Judas' passion for money and Pilate's desire for material comforts with Yeshua's rejection of a drink before he dies.

Chapter 26

1. Niza's betrayal of Judas to Aphranius is part of a web of secrecy and deceit involving Yeshua's execution. How does this web contrast with the character of Yeshua himself?

2. Describe Pilate's moonlight dream. What does it mean? What significance lies in its beginning at midnight?

Chapters 27-28

1. How does the inability of the Moscow police to catch Woland and his retinue contrast with the Soviet state's extensive monitoring and control of its citizens, as portrayed in this novel?

2. Margarita is one of several characters who have access to the master's novel about Pilate. What is the impact of his novel being revealed through several different characters?

Chapters 29-32

1. Discuss the relationship between the master and Margarita. What characteristics do they have in common? Why is Margarita so devoted to the master?

2. Matthew Levi does not dispute Woland's assertion that evil is essential to life on earth. However, he does seek peace for the master and Margarita. What have they done to deserve peace, and why don't they deserve the light?

Epilogue

1. Ivan's fate is inconclusive, in contrast to the decisive fates of most of the novel's characters. What is the significance of his inconclusive fate, and of his actions during the annual festal spring full moon?

2. Discuss the description of the aftermath of Woland's visit to Moscow.
How does the persecution of so many alleged perpetrators contrast with the inability to stop Woland and his retinue from wreaking havoc on Moscow?

13. SAMPLE ESSAY OUTLINES

Topic #1

The Master and Margarita is set in two cities: Moscow, where Woland, the master, and Margarita are at the center of the plot, and Yershalaim, where the drama of Yeshua and Pilate drives the plot forward. Analyze the juxtaposition of these two plot settings and examine its meaning.

I. Thesis Statement: The Master and Margarita establishes the plot setting in biblical Yershalaim as real and the plot setting in communist Moscow as false. In doing this, it argues that the existence of God and the devil, called absurd by Communist leaders and party members, is real. In contrast, by depicting an array of fantastic, surreal events happening in Moscow, the novel argues that communist Moscow is absurd and fundamentally unreal.

II. Realism and credibility of the Yershalaim narrative.
A. Before beginning to tell the Pilate story, Woland declares "Jesus did exist."
B. Plain descriptions of Herod's palace, Pilate, Yeshua, and Kaifa.
C. Yeshua's simple, non-rhetorical language.
D. Crude physicality of the execution scene.
E. Pilate's weary, subdued nature, and hunger for sensual pleasure.
F. Judas's desire for money and Niza leading him to his death.
G. Aphranius's grim, violent character.

III. Absurdity and deceptiveness in the Moscow narrative.
A. Surrealism of the initial chapter and Berlioz's death.
B. Incredible speed of Ivan's chase and Ivan's inability to catch Woland.
C. Styopa's removal from apartment 50 to the jetty at Yalta.
D. The refusal to believe Ivan's testimony about Berlioz's death.
E. Koroviev's bribe and the arrest of Bosoy.
F. Woland's bizarre magic show.
G. Bosoy's dream of the public exposure of currency hoarders.
H. The guests at Satan's ball, and apartment 50's conversion to a ballroom.
I. Woland and his retinue vanish when pursued by the police.
J. Inability of the secret police to catch Woland and his retinue or uncover their actions.
K. Persecution of innocent parties in the epilogue.

IV. Conclusion
A. The suffering and realism of the Yershalaim characters present them as believable.
B. Fantastic events and duplicity in Moscow make the reader question the reality and legitimacy of the Stalin regime.
C. The novel rejects the bizarreness and cruelty of the Moscow narrative as false and accepts the biblically inspired Yershalaim narrative as real.

Topic #2

When Woland and his retinue descend on Moscow, they interact with many different characters, who respond to that contact in various ways. Examine the master and Margarita's response to Woland and its significance for the novel as a whole.

I. Thesis Statement: The master and Margarita are the only two Moscow characters who are willing and able to confront the supernatural realm Woland inhabits. In their confrontation, they exhibit the courage all the other Moscow characters lack. By virtue of their courage and devotion, they are the heroes of the novel, and they are rewarded by being granted peace at the novel's conclusion.

II. The master's response to Woland and the supernatural.
A. Writes the Pilate manuscript.
B. Talks with Ivan about the Pilate story and recognizes that the professor is Satan.
C. Recognizes Woland's true identity immediately.
D. Proclaims himself afraid of nothing.
E. Anticipates his encounter with Pilate and grants Pilate freedom.

III. Margarita's response to Woland and the supernatural.
A. Interprets her dream as meaning she will reunite with the master.
B. Agrees to Azazello's invitation to visit Woland.
C. Welcomes being transformed into a witch.
D. Enjoys her reception by the river after ending her flight.
E. Successfully carries out her duties as hostess of Satan's ball.
F. Grants Frieda her freedom and makes the master appear.
G. Suffers no psychic damage from her visit to Woland.
H. Trusts that everything will turn out well.

IV. The master and Margarita's mutual confrontation with Woland and Pilate.
A. Granted their wish to return to their basement apartment.
B. Given peace by Yeshua as a reward for the Pilate manuscript.
C. Come to terms with their deaths and fly from Moscow with Woland.
D. At the novel's end, embrace their eternal life in their eternal home.

V. Conclusion
A. The master and Margarita, both separately and jointly, prove capable of courageously embracing, confronting, and accepting the supernatural.
B. The master and Margarita die as a result of their confrontation with Woland, but in dying they achieve peace as their unique reward.

14. COMPARE AND CONTRAST

1968: Viewing the Vietnam War on television, Americans became more and more suspicious of their government. Atrocities, such as the massacre of hundreds of Vietnamese men, women, and children in the village of My Lai, made Americans feel as distanced from their government as the citizens of Moscow in The Master and Margarita.

Today: Americans are still suspicious of the government's honesty and competence, so that any military initiative is met with distrust.

1968: The newly appointed secretary of the Communist Party of Czechoslovakia, Alexander Dubcek, refused to attend conferences in Warsaw and Moscow. In order to keep control of the satellite Communist countries, the Soviet Union sent 200,000 troops into Czechoslovakia.

Today: Czechoslovakia no longer exists. After the breakup of the Soviet Union, it divided into two republics: the Czech Republic, with its capital at Prague, and Slovakia, whose capital is Bratislava.

1968: Race riots swept many of the country's major metropolitan areas after Martin Luther King Jr. was shot dead in Memphis. A total of 21,270 arrests were made across the country. Forty-six people died in the riots.

Today: Many social scientists consider the continued divisions between the races to be America's greatest social failure.

15. TOPICS FOR FURTHER STUDY

Explain why you think that Woland's associate Behemoth is presented as a cat, while Pilate's closest companion is his dog. List the characteristics of these animals that make them fit the roles that Bulgakov has given them here.

Study the treatment of writers in the Soviet Union in the 1930s through the 1960s. Report on the standards to which writers were held by the government, and the punishments that were given to those who disobeyed.

Read Faust, by Johann Wolfgang von Goethe, which is openly acknowledged as one of the inspirations for The Master and Margarita. Compare Goethe's version of the devil with Bulgakov's Woland. Which do you think is more dangerous? Which is written to be the more sympathetic figure? Why do you think Bulgakov made the changes to the devil that he made?

Study the specific political role played by the Procurator of Judea. How did this position come into existence? What would have been the extent of his powers and responsibilities?

16. MEDIA ADAPTATIONS

The Master and Margarita was adapted for video in 1988. This version was directed by Aleksandar Petrovic and released by SBS.
The video Incident in Judea, directed by Paul Bryers and released by SBS in 1992, is based on material from Bulgakov's The Master and Margarita. A Polish version, Mistrz i Malgorzata (The Master and Margarita), with English subtitles, was released on four video cassettes by Contal International in 1990. This version was directed and written by Maciej Wojtyszko. The Master and Margarita was adapted for audio cassette by Yuri Lyubimov and released by Theater Works in 1991. An audio compact disc called Master and Margarita: Eight Scenes from the Ballet was released by Russian Discs in 1995.

17. WHAT DO I READ NEXT?

This book's use of fantasy elements to lampoon social behavior is reminiscent of Lewis Carroll's ever-popular Alice books, Alice's Adventures in Wonderland (1865) and Through the Looking Glass (1872). Bulgakov refers to these books, in fact, at the beginning of chapter eight, when Ivan finds a cylinder in the mental ward labeled "Drink," similar to the mysterious bottle labeled "Drink Me" that Alice finds at the start of her adventure in Wonderland.

Many of Bulgakov's ideas, especially his conception of Woland, the devil, are taken directly from German poet Johann Wolfgang von Goethe's two-part poem Faust (published in 1808 and 1832), which he wrote over a span of fifty years.

Salman Rushdie's novel The Satanic Verses created a sensation when it was released in 1988, causing an Iranian religious leader to offer a reward for the "blaspheming" author's death. Rushdie himself acknowledged the similarities between his book and The Master and Margarita, noting that "the echoes are there, and not unconsciously." Like Bulgakov's novel, it is the retelling of an ancient religious story within a contemporary story.

Aleksandr Solzhenitsyn is a Russian novelist of the generation after Bulgakov's, who grew up within the repression of Lenin's reformed government. He won the Nobel Prize for Literature in 1970 and was expelled from Russia in 1974 for denouncing the official government system. Critics consider some of his early fictional works about the Soviet government to be his most powerful, including One Day in the Life of Ivan Denisovich (1962) and The First Circle (1968).

Critics have pointed out that the modern trend of "magical realism" in fiction has much in common with The Master and Margarita. This style has been most evident in Latin America since the 1960s, in the works of such writers as Alejo Carpentier, Carlos Fuentes, and Mario Vargas Llosa. The preeminent novel in this genre is One Hundred Years of Solitude, by Gabriel García Márquez, who won the 1982 Nobel Prize for Literature.

18. BIBLIOGRAPHY AND FURTHER READING

Sources

Mikhail Bulgakov, The Master and Margarita, translated by Diana Burgin and Katherine Tiernan O'Connor, Vintage Books, 1995.

Donald Fanger, "Rehabilitated Experimentalist," in The Nation, January 22, 1968, pp. 117-18.

Edythe C. Haber, "The Mythic Structure of Bulgakov's 'The Master'," in The Russian Review, October 1975, pp. 382-409.

Pierre S. Hart, "The Master and Margarita as Creative Process," in Modern Fiction Studies, Summer 1973, pp. 169-78.

Vladimir Lakshin, "Mikhail Bulgakov's The Master and Margarita," in Twentieth-Century Russian Literary Criticism, Yale University Press, 1975, pp. 247-83.

D.G.B. Piper, "An Approach to Bulgakov's The Master and Margarita," in Forum for Modern Language Studies, Volume VII, No. 2, April 1971, pp. 134-37.

Gleb Struve, Soviet Russian Literature, The University of Oklahoma Press, 1935.
Gleb Strave, "The Re-Emergence of Mikhail Bulgakov," in The Russia Review, July, 1968, pp. 338-43. For Further Study J.A.E. Curtis, Manuscripts Don't Burn: Mikhail Bulgakov, a Life in Letters and Diaries, Overlook Press, 1992. A noted Bulgakov scholar presents the history of Bulgakov's life in the author's own words, filling in gaps where appropriate but for the most part presenting long-lost personal papers. Arnold McMillian, "The Devil of a Similarity: The Satanic Verses and Master i Margarita," in Bulgakov: The Novelist-Playwright, edited by Leslie Milne, Harwood Academic Publishers, 1995, pp. 232-41. Compares The Satanic Verses to The Master and Margarita. Nadine Natov, Mikhail Bulgakov, Twayne Publishers, 1985. Examines the life of Mikhail Bulgakov. Ellendea Proffer, Bulgakov, Ardis Press, 1984. A comprehensive study of Bulgakov, his life, and his works available. Joel C. Relihan, Ancient Mennipean Satire, John Hopkins University Press, 1993. Discusses the history of Mennipean satire. Kalpana Sahni, A Mind in Ferment: Mikhail Bulgakov's Prose, Humanities Press, Inc., 1986. Analyzes Bulgakov's writing, including The Master and Margarita. From shovland at mindspring.com Fri Dec 23 15:07:17 2005 From: shovland at mindspring.com (Steve Hovland) Date: Fri, 23 Dec 2005 07:07:17 -0800 Subject: [Paleopsych] The Biology of Belief Message-ID: Bruce H. Lipton, Ph.D. Recent advances in cellular science are heralding an important evolutionary turning point. For almost fifty years we have held the illusion that our health and fate were preprogrammed in our genes, a concept referred to as genetic determinacy. Though mass consciousness is currently imbued with the belief that the character of one?s life is genetically predetermined, a radically new understanding is unfolding at the leading edge of science. Cellular biologists now recognize that the environment (external universe and internal-physiology), and more importantly, our perception of the environment, directly controls the activity of our genes. The lecture will broadly review the molecular mechanisms by which environmental awareness interfaces genetic regulation and guides organismal evolution. The quantum physics behind these mechanisms provide insight into the communication channels that link the mind-body duality. An awareness of how vibrational signatures and resonance impact molecular communication constitutes a master key that unlocks a mechanism by which our thoughts, attitudes and beliefs create the conditions of our body and the external world. This knowledge can be employed to actively redefine our physical and emotional well-being. http://www.brucelipton.com/newbiology.php From checker at panix.com Fri Dec 23 15:20:18 2005 From: checker at panix.com (Premise Checker) Date: Fri, 23 Dec 2005 10:20:18 -0500 (EST) Subject: [Paleopsych] Joseph Smith Bicentennial Package Message-ID: Joseph Smith Bicentennial Package [Here's a representative sampling of the items on Google News. One Mormon told me that the new Bushman biography is fair and objective.] 
AP: Mormon president writes of Joseph Smith http://www.boston.com/news/local/vermont/articles/2005/12/18/mormon_president_writes_of_joseph_smith?mode=PF By The Associated Press | December 18, 2005 Gordon Hinckley, the president of the Mormon church, writes in a church message this month about Joseph Smith: "That baby boy born 200 years ago this month in humble circumstances in rural Vermont was foreordained to become a great leader in the fulfilling of our Father's plan for His children on Earth. "We do not worship the Prophet. We worship God our Eternal Father and the risen Lord Jesus Christ. But we acknowledge the Prophet; we proclaim him; we respect him; we reverence him as an instrument in the hands of the Almighty in restoring to the Earth the ancient truths of the divine gospel, together with the priesthood through which the authority of God is exercised in the affairs of His Church and for the blessing of His people. "The story of Joseph's life is the story of a miracle. He was born in poverty. He was reared in adversity. He was driven from place to place, falsely accused, and illegally imprisoned. He was murdered at the age of 38. Yet in the brief space of 20 years preceding his death, he accomplished what none other has accomplished in an entire lifetime. He translated and published the Book of Mormon, a volume which has since been retranslated into scores of languages and which is accepted by millions across the Earth as the word of God. The revelations he received and other writings he produced are likewise scripture to these millions. The total in book pages constitutes approximately twice the volume of the entire New Testament of the Bible, and it all came through one man in the space of a few years. "In this same period he established an organization which for 175 years has withstood every adversity and challenge and is as effective today in governing a worldwide membership of some 12 million as it was in governing a membership of 300 in 1830. There are those doubters who have strained to explain this remarkable organization as the product of the times in which he lived. That organization, I submit, was as peculiar, as unique, and as remarkable then as it is today. It was not a product of the times. It came as a revelation from God." -- From the Ensign, December 2005 Salt Lake Tribune: LDS celebration of Joseph Smith will conclude with Gordon B. Hinckley retracing his predecessor's steps http://www.sltrib.com/portlet/article/html/fragments/print_article.jsp?article=3317377 Article Last Updated: 12/16/2005 11:31 PM By Peggy Fletcher Stack On a frigid, snowy December morning a century ago, LDS Church President Joseph F. Smith led an entourage of men in black overcoats and top hats up a hill overlooking Sharon, Vt. Mormon officers had traveled by special train from Salt Lake City to celebrate church founder Joseph Smith's 100th birthday near the log cabin where he was born Dec. 23, 1805. They would dedicate a polished granite shaft cut from a single stone and measuring 38 1/2 feet high - one foot for every year of Smith's life. The monument was hauled on a quarry wagon through winding Vermont roads and finally erected on the hill. It was a moment of triumph and tenderness for the Mormon leaders who gathered there to remember and pay homage to their beloved leader, even as their church was being investigated, ridiculed and attacked by a congressional committee in Washington, D.C. Joseph F. 
Smith faced grueling questions about the church's involvement in polygamy, its temple ceremonies and loyalty to the United States before the Congress would allow a Utah senator to be seated. Since that dedication, The Church of Jesus Christ of Latter-day Saints has flourished. It has more than 12 million members in more than 100 countries, temples across the globe, one of the largest full-time missionary corps, and a presence in national religious debates - to say nothing of a world-class genealogy library. Next week Gordon B. Hinckley, the church's 15th president, will retrace his predecessor's steps to Sharon for the bicentennial of Smith's birth. On Friday, Hinckley will preach from the site via satellite to the LDS Conference Center in downtown Salt Lake City. During the broadcast, his two counselors in the governing First Presidency will add their own tributes, and music will be provided by the Tabernacle Choir and the Orchestra at Temple Square. The program will be broadcast live to LDS stake centers and also can be viewed on KBYU Channel 11. This is the culmination of the church's yearlong celebration of the man Mormons believe "communed with Jehovah." Smith told devotees that when he was 14, God and Jesus visited him in a grove of trees near his home in Palmyra, N.Y. A few years later, he said, an angel led him to gold plates on which were inscribed the history of ancient Hebrews who migrated to the American continent around 600 B.C. With God's help, Smith said, he was able to translate the writings into a text he published as The Book of Mormon. In 1830, Smith organized a church that he said was the restored church of Jesus Christ and spent the next 14 years as its "prophet, seer and revelator." He showered his people with insights he claimed to receive from on high. His unorthodox Christian views drew many followers but just as many detractors. At 38 years old, Smith was killed in 1844 by an angry mob in Carthage, Ill. Throughout 2005, the farm-boy-turned-prophet was scrutinized by scholars meeting at conferences, seminars and symposia, including one at the Library of Congress. Brigham Young University historians scurried to begin publishing a 12-volume collection of Smith's diaries, sermons, speeches and letters, a project endorsed by the National Archives. LDS Church-owned Deseret Book produced a glossy, coffee-table book, Joseph Smith's America: His Life and Times, with stunning historical photos and vivid text by Chad Orton and William Slaughter. Newsweek honored Mormonism's year with a cover story this fall, saying "Joseph Smith founded a booming faith that's confronting its past as it looks to the future." Other publishers pumped out volumes on Smith, including Richard Bushman's Joseph Smith: Rough Stone Rolling. Bushman, a practicing Mormon who is also a professor emeritus of U.S. history at Columbia University, "gives both Smith and his doctrine a sympathetic but perceptive appraisal in this important new study," says Walter Russell Mead in the November/December issue of Foreign Affairs. Meanwhile, thousands of Mormon teens danced their devotion in college stadiums. Artists painted and sculpted his likeness, while various filmmakers retold his story on the big screen. Just this week the church unveiled its new film about Smith's life, which will be shown continuously in the Legacy Theater of the Joseph Smith Memorial Building in Salt Lake City. At the 1905 dedication at Smith's birthplace, Joseph F. Smith prayed that "peace be . . .
Joseph F. Smith prayed that "peace be . . . unto this monument and unto all who come to visit it with feelings of respect in their hearts." It's a fitting invocation for today's Mormons to contemplate as they consider one more time the man who launched their movement.

---

Contact Peggy Fletcher Stack at pstack at sltrib.com or 801-257-8725. Send comments about this article to religioneditor at sltrib.com.

Christian Science Monitor: He founded a church and stirred a young nation
http://www.csmonitor.com/2005/1220/p13s01-bogn.htm

from the December 20, 2005 edition

A rich, detailed portrait of Joseph Smith, father of Mormonism.

By Jane Lampman

How did a young man from a poor farm family - who as a boy received minimal education and had little religious background - come to found a church that today boasts millions of members worldwide? A religious leader for only 14 years until his assassination in 1844, Joseph Smith drew thousands during his lifetime to his vision of a theocratic New Jerusalem in the American heartland. Possessing what one critic called a genius for "religion making," Smith wrote new scriptures and created a complex institution that has long survived his death. The Church of Jesus Christ of Latter-day Saints celebrates its 175th anniversary this year, and on December 23, the 200th anniversary of Smith's birth.

In Joseph Smith: Rough Stone Rolling, historian Richard Bushman, professor emeritus at Columbia University and a practicing Mormon, fashions a fascinating, definitive biography of the rough-hewn Yankee who stirred controversy from the start. Bushman's intimate, 740-page portrait explores all the corners of controversy but does not resolve them, suggesting that - given the nature of the man and his story - such resolution is never likely to occur. An honest yet sympathetic portrayal, the book is rich in its depiction of developing Mormonism.

During an era of revivals and religious ferment, Smith saw himself as a major prophet and revelator - a restorer of the one true church. Despite a story that appeared fantastical to many, Smith's teaching caught the interest of others in search of a faith different from that offered by the churches of the time.

As a youth, Smith engaged with family and friends in magic and treasure-digging. He also prayed to know which church to attend. He said later that he was then told by God and Jesus that the existing churches were in apostasy. In a second vision, Smith said, an angel named Moroni directed him to buried golden plates that were to become the source for his Book of Mormon, which he translated from hieroglyphs through the use of a seer stone and spectacles that he called the Urim and Thummim. (The angel later retrieved the plates.)

The Book of Mormon is understood by Latter-day Saints to be the history of Jews who traveled to the Western hemisphere around 600 BCE, and of Jesus' visit to them after his resurrection. (The assumption that the Indians of the Americas are the descendants of the people in the book has been upset recently by DNA studies - done by Mormons - which show no connection to the ancient Hebrews.)

Smith - called simply "Joseph" by Mormons - published the book in 1830, and later published others ("The Book of Abraham" and "The Book of Moses") purporting to provide true histories that go far beyond the Bible. It was not preaching, but his ongoing "revelations" that shaped the developing religion and its practices.
They were full of biblical phrasings, and many practices derived from Old Testament teachings (such as restoration of Aaron's priesthood). The revelations included establishment of a hierarchical priesthood in which all males participate; secret temple rites; the deeding of property to church bishops, to be distributed as appropriate to the needy and toward purchase of land; and the nature of the afterlife, which includes "plural marriage." Some may feel the author sanitizes Smith's motives for establishing polygamy and marrying dozens of wives.

Bushman tells an engrossing tale of a charismatic leader who was egalitarian and loved working with others, yet who was sensitive to criticism or dissent. Mormons believed the Second Coming to be imminent, and converts followed their leader from New York to Ohio to Missouri, where Joseph said New Jerusalem was to be situated. But in purchasing large amounts of land for their City of Zion, the Mormons clashed - and even went to war - with other residents.

Smith lived in a biblical world where God's laws alone were of concern; he did not acknowledge governments, the nation, or the Constitution, Bushman says, until his flock ran into trouble and needed government protection. He then turned to state governors, and later to the US Congress, for aid. The Mormons' story and self-image shifted from one of revelation to persecution.

Driven out of Missouri, the Saints regrouped in Nauvoo, Ill., where they built a temple and city, drawing church members from as far away as England. Yet Joseph's polygamous practice stirred controversy even among the faithful (including his first wife, Emma), and a few dissidents were excommunicated. After he destroyed a dissenting Nauvoo newspaper, Smith was jailed in a neighboring city, and he and his brother were killed by a mob and militiamen who were guarding them. (His successor, Brigham Young, led members west to Utah.)

Meticulously researched and rich in detail, this biography may be of interest mostly to Mormons. Yet Bushman also offers an intriguing exploration of a remarkable development in American religious history. Claims that the church is the fastest growing in the US have recently been questioned (studies show that about the same number are leaving as joining). But its members are increasingly widespread in the US and more visibly influential in political circles (e.g., Senate minority leader Harry Reid (D) of Nevada, and Gov. Mitt Romney (R) of Massachusetts, who may run for president in 2008).

This is a work that offers non-Mormons a chance to gain knowledge of a church not their own. It also stirs deeper questions about American religious convictions and how they shape lives and culture.

Jane Lampman is on the Monitor's staff.

Joseph Smith: Rough Stone Rolling
By Richard Lyman Bushman
Knopf, 740 pp., $35

Deseret Morning News: Joseph Smith's fame
http://deseretnews.com/dn/print/1,1442,635165306,00.html

Thursday, December 01, 2005

Scholars around the world are studying the impact of Joseph Smith, attempting to account for the growth of The Church of Jesus Christ of Latter-day Saints. Here's what some have said in academic settings this year, the 200th anniversary of Smith's birth:

Jason Lase, director general of Indonesia's Department of Religious Affairs, called Joseph Smith "a modern religious genius" who created "one of the most stable and well-organized religious organizations" during a May speech at Parliament House in Sydney, Australia.
Arun Joshi, a Hindu journalist from India, concluded "the message of Joseph Smith is more relevant . . . today than ever before" in a paper titled "Mormon Ways of Family Life Can Resolve Conflicts in World" and delivered at National Taiwan University in August.

Terryl Givens, a religion scholar at the University of Richmond, speaking Tuesday at BYU: "It can be heady stuff for members of a previously marginalized religion of modest size to find their faith and founder the subject of symposia, celebration and scholarly interest. Some have even predicted a new world religion will emerge out of these accelerating developments."

KUTV: A Different Take On Joseph Smith's Death
http://kutv.com/topstories/local_story_339194713.html

(KUTV) To LDS faithful, Joseph Smith was murdered because he was a prophet of God. A new book on Joseph Smith has a different take on his death. Dan Rascon has more.

According to the authors of a new book, Joseph Smith was a presidential candidate in 1844 and was threatening one of the political parties, so he had to be taken out. He's one of the most written about religious leaders in American history, and now comes one more book about LDS founder Joseph Smith. A new revelation about Joseph's murder is the subject behind this one.

"It was a planned event by a number of politicians," said Dr. Robert Wicks in a phone interview from Oxford, Ohio, where he is the director of the Miami University Art Museum. Dr. Wicks is one of the co-authors of the book called Junius and Joseph: Presidential Politics and the Assassination of the First Mormon Prophet.

According to Wicks' research, after Joseph busted through the window at Carthage jail, he landed on the ground and crawled to a nearby well, where he was assassinated by a four-man firing squad - not just for his beliefs, but because he was a presidential candidate. "It was basically political to get Joseph Smith out of the way so he would not impact the outcome of the 1844 election," said Dr. Wicks.

Wicks says it was presidential candidate Henry Clay's party that was concerned. When asked if he believes Henry Clay was involved, Wicks replied, "We don't know. We know people under Clay were involved. That's stated very clear in a number of sources. I'm not making it up."

"They don't know that. They can't know that," said Dr. Steven Harper. Dr. Harper is a church history professor at BYU who's just written his own book on Joseph Smith called Joseph the Seer. It is true Joseph was a presidential candidate and that his political platform played a role, but Harper doesn't believe in a conspiracy - a planned-out assassination. "Where I think they go too far is reaching too much into their sources," said Dr. Harper.

Wicks says his research started ten years ago when he came across some documents of a man named John Elliott, who he says was one of the assassins and had political ties to Henry Clay.

The Arizona Republic: Mormon role vital to West
http://www.azcentral.com/arizonarepublic/opinions/articles/1223cox-lds.html

December 23, 2005

Wendell Cox: My Turn

Part of the uniqueness of the American West can be attributed to a man who never got past Missouri during his 38-year lifespan. That man, Joseph Smith, was the founder of the Church of Jesus Christ of Latter-day Saints, and today, Latter-day Saints (or Mormons) will celebrate the 200th anniversary of his birth in Vermont. The West as we know it and the LDS Church have always been inextricably linked; the West simply would not be the West without Joseph Smith. Neither would Arizona.
The Latter-day Saints settled 34 Mormon colonies throughout the state, including Mesa, which has become one of the largest suburban cities in the United States. Today, Arizona has the fourth-highest LDS population in the country, with more than 360,000 members and two temples.

In early 1846, the United States held undisputed title to very little of what was to become the American West. Most of the Southwest, from Colorado and New Mexico to California, was part of Mexico. Most of the Pacific Northwest - the Oregon Country, from Montana and Wyoming to Washington and nearly to Alaska - was claimed by both the British and the Americans. It was this unsettled environment that the Latter-day Saints would soon be joining.

Political turmoil was afoot in Mexico. Texas had gained independence from Mexico and was annexed by the United States in 1845. Soon afterward, the Mexican-American War was in full tilt. Meanwhile, the Latter-day Saints were spending a dismal winter in Nauvoo, Ill. An angry mob had murdered Joseph Smith in 1844, and more mobs were burning Mormon homes, seeking to drive them out. Stepping into the vacuum left by the martyrdom of Joseph Smith, new church leader Brigham Young resolved to leave the persecution behind and take the Church west, to the Great Salt Lake Valley, which was still a part of Mexico.

So on Feb. 4, 1846, the first Mormon contingent headed west. Progress was slow for the Latter-day Saints, who were forced to spend the cold winter on the banks of the Missouri River in what is now the Omaha area. In the spring of 1847, the trek resumed, and on July 24, 1847, the Latter-day Saints entered the Great Salt Lake Valley.

In the fall of 1847, the West was still relatively empty. Nearly all of the West's population was in small Mexican towns, the largest of which was Santa Fe. San Francisco still had fewer than 500 residents. Available data suggests that the Great Salt Lake Valley may have been the West's largest settlement as winter began. But it was not to last. Six months to the day after the Latter-day Saints reached the Salt Lake Valley (Jan. 24, 1848), gold was discovered at Sutter's Mill, near Sacramento, by a group that included six recently discharged members of the Mormon Battalion; the strike marked the beginning of California's rapid growth. In two years, San Francisco grew to a population of 25,000 and became the West's premier urban area until the ascendancy of Los Angeles in the 1920s.

These were perhaps the most momentous years for both the American West and the Latter-day Saints. America added more than a third to its land area in the two new acquisitions of Oregon and the Southwest. While millions of people traveled westward on the Mormon, the Oregon, and the California Trails, the Latter-day Saints were settling more than 700 communities in the West, including Mesa, Las Vegas, San Bernardino, Calif., and Pueblo, Colo.

That was just the beginning, however. The West emerged as the nation's fastest-growing region over the next 150 years. Even faster growth was to come following World War II, as the share of the population living in the West rose by more than 50 percent. The church, too, has experienced strong growth. Between 1950 and 2000, the percentage of the nation's population counted as Latter-day Saints nearly tripled. Like the West, its influence has been international. Today, the hardy little church that found its permanent home in the Great Salt Lake Valley claims more than 12 million members, most of whom live outside the United States.
This weekend, as Christians celebrate the birth of Jesus, Latter-day Saints will also take a moment to remember the man who made their church possible and helped build up the West without ever seeing it.

Wendell Cox is principal of Demographia, an international demographics and public policy firm in the St. Louis area. He also serves as a visiting professor at the Conservatoire National des Arts et Metiers, a national university in Paris.

Knight Ridder: Mormons gain diverse members in inner cities
http://www.fortwayne.com/mld/newssentinel/living/13456583.htm?template=contentModules/printstory.jsp

Posted on Wed, Dec. 21, 2005

BY MIRIAM HILL
Knight Ridder Newspapers

PHILADELPHIA - Donte Holland, a 30-year-old carpenter's apprentice, joined the Mormon church in Philadelphia two years ago because it gave him "the fruits of the spirit. Peace. A good feeling inside."

Holland and his wife, Rosalyn, are both black. The Mormon church is as white as its most famous members, Donny and Marie Osmond, and in Philadelphia, Eagles Coach Andy Reid. But for the last decade or so, the Mormons, officially known as the Church of Jesus Christ of Latter-day Saints, or LDS, have been expanding in city neighborhoods with large black and Spanish-speaking populations. The Hollands joined after some missionaries knocked on their door and explained the faith.

Last month, the Mormons opened a five-story meeting house for 900 members on Malcolm X Boulevard in Harlem. Recently, a new Mormon church building at Broad and Wyoming streets in Philadelphia held an open house with food and information about the church and about other topics, including medical care and financial planning. The Harlem and Philadelphia churches follow an earlier expansion in Detroit.

The church does not record members' racial or ethnic backgrounds, but experts estimate that black Mormons number 5,000 to 10,000 in the United States, up from almost none 30 years ago. The church says 130,000 people belong to its Spanish-speaking congregations, up from 92,600 in 1995. The Broad and Wyoming location includes a Spanish-speaking service, and attendance at that has grown from 60 to about 110 since the new building opened earlier this year. Mormons count 12 million members around the world, 5.5 million of them in the United States, so the minority figures are small but growing.

"There is a kind of changing face of the LDS church because of its continuing commitment to work in the inner cities," said Melvyn Hammarberg, an associate professor of anthropology at the University of Pennsylvania who has studied the Mormons.

The growth has occurred only recently in part because, like many American churches, the Mormon church has had racist chapters in its past. Its founder, Joseph Smith, is believed to have ordained a black man, Elijah Abel, in 1836. But his successor, Brigham Young, decreed that black men were not worthy of being priests. While in most churches the priesthood involves a small, select group, in the Mormon church it is a prerequisite for full membership for men.

In 1978, church leaders in Salt Lake City had what Mormons call a "revelation," which church members believe comes directly from God. The revelation proclaimed that "all worthy men ... without regard for race or color" could be ordained for the priesthood. Some members, however, say the church needs to go further, repudiating old beliefs, such as the one that said blacks were cursed and so could not be priests.
"If the church would apologize, it would do wonders for proselytizing among blacks," said Darron Smith, a black Mormon and author of "Black and Mormon." Other churches, he noted, including Southern Baptists, have apologized for racist histories. Smith joined the church as a teenager in 1980 because he liked the answers it offered about family and the afterlife. Church members also were very friendly, he said. But when he has criticized the church's previous attitudes towards blacks, white Mormons often brought up old teachings to justify the one-time ban on black priests. "This is systemic," Smith said. "This is a part of how people have learned to understand these issues," he said. Church officials said they emphasize the importance of diversity and the dangers of discrimination in current teachings. Despite the church's history, several black members said only the Mormon church ever felt like home. Ahmad S. Corbitt, who grew up in Philadelphia and graduated from John Bartram High School, got his Arab name from his parents, who were involved with the Nation of Islam and knew Malcolm X. The family later converted to the Methodist Church. But when Mormons knocked on his family's door in South Jersey in 1980, "my mother felt a peaceful, spiritual feeling immediately." Missionary work - going door to door in cities and neighborhoods across the world - is a key component of the Mormon faith. Many young Mormons spend two years in a mission away from home. Corbitt, who was 17 when the Mormons came calling, shared his mother's feelings and converted, too. He knew about the pre-1978 ban but said it didn't bother him. "It was something that the church had clearly moved beyond," he said. "It was clear to me that the church was moving forward and I was willing to judge it by its fruits." After several years as a lawyer and public relations executive, he became director of the New York Office of Public and International Affairs of the church. Last month, he became Stake President for the Church in South Jersey, a promotion roughly equivalent to becoming a bishop in the Catholic Church. Corbitt, 43, is one of a handful of black stake presidents in the United States. At one of the new Philadelphia church's first services, the crowd of about 100 people appeared to be about 30 percent black. The surrounding neighborhood is about 80 percent black. Services last three hours, with one hour devoted to singing, preaching and confirmations of people recently baptized as members. Men and women separate for the remaining portion. Each group discusses ways to improve their lives and the church. Celeste Smith and Carolyn Frye, two North Philadelphia residents, were checking out the church. Neither is Mormon, but Smith said her teenage son had started coming to the church after some missionaries knocked on their door, so she wanted to check it out. She had still not decided whether to join. "It's working for me right now," she said. Frye liked the diversity of the crowd but said the relatively staid service "just didn't move me. I'm used to more foot-stomping and that kind of thing." Those who do join say the church's emphasis on family attracted them. Church services overflow with children and conversations and classes often aim at improving family life. "I have seen so many lives blessed by the power of the gospel," said Ingrid Shepard, president of the Mormon Relief Society, a women's auxiliary group, at the Broad and Wyoming church. 
She is black but said the church's history is less important to her than her experience in it. "I guess having been a member of the church my entire life I have never felt that it is racist," she said.

Belief Net: Linda Hoffman Kimball on Joseph Smith--Mormon, Latter-Day Saints, Prophet, LDS
http://www.beliefnet.com/story/181/story_18152_1.html

Brother Joseph

Unlike many Mormons, I can't get on the Joseph Smith veneration bandwagon. But I still deeply appreciate this complex man.

As a convert to the Church of Jesus Christ of Latter-Day Saints, I found Joseph Smith to be a stumbling block. Reared an authority-phobic Protestant, I knew problems arise when one person claims to be the definitive speaker for God. Mormons talked about him in such reverential terms, referring to him in one long honorific as "The Prophet Joseph Smith." They sometimes sing songs about him--even in worship services!--praising "the man who communed with Jehovah":

Praise to his memory, he died as a martyr;
Honored and blest be his ever great name!
Long shall his blood, which was shed by assassins,
Plead unto heaven while the earth lauds his fame.

As an Illinoisan, I clenched when I learned that the original words to that verse, written not long after the assassination of Joseph Smith in Carthage, Ill., in 1844, were "Long shall the blood which was shed by assassins, stain Illinois."

While I was assured by church members that Mormons don't worship Joseph Smith, I was (and still am) squeamish at the constant exaltations. I shy from endorsing anything that carries a whiff of displacing God the Father, Jesus Christ, and the Holy Ghost as worthy of our highest praise.

I am grateful. I can't adequately articulate my appreciation for Joseph Smith's role in the "founding miracles" of the Restoration of the Gospel, the priesthood, and the Book of Mormon (see R.L. Bushman). But I can't get on the "exalt the Prophet Joseph Smith" bandwagon. As a Mormon by commitment, covenant, and conversion, this makes me a bit of an odd duck. But then again, so was Joseph Smith. That's what draws me to him, in fact.

Back in the early 1980s odd letters began emerging about folk magic and "white salamanders" having influenced Joseph Smith, and many in the church were horrified at the besmirching of his character. How satisfying to then discover that these letters were fakes placed by Mark Hofmann, forger, bomber, and murderer, intent on discrediting the church's history. But the truth remains that Joseph Smith was accustomed to the folk magic of his rural 19th-century culture. In his early years he had been a "treasure seeker." He was familiar with the concept of looking through stones or using "divining rods" to locate things and learn. Knowing these things about him doesn't diminish him in my eyes; they make him more real and knowable, not whitewashed and "prettified."

In fact, as Richard Bushman, a scholar and LDS member, says in his new biography "Joseph Smith: Rough Stone Rolling" (p. 131), "Neither his education nor his Christian upbringing prepared Joseph to translate a book, but the magic culture may have.... Practice with his scrying stones carried over to translation of the gold plates. In fact, as work on the Book of Mormon proceeded, a seer stone aid[ed] in the work, blending magic with inspired translation."
A man who can be brawny, perplexing, rough-edged, and outrageous

Belief Net: Joseph Smith: Prophet, Revelator, Human; Interview with Richard Lyman Bushman
http://www.beliefnet.com/story/181/story_18153_1.html

Joseph Smith: Rough Stone Rolling
By Richard Lyman Bushman

Joseph Smith: Prophet, Revelator, Human

As Mormons celebrate the bicentennial of their church's founder, a new biography explores his achievements--and shortcomings.

Interview by Michael Kress

When Joseph Smith founded the Church of Jesus Christ of Latter-Day Saints--commonly known as the Mormons--in 1830, there were few signs that this group of six people would grow into an international religious movement that today claims 11 million members and is the fourth-largest denomination in the U.S. Dec. 23 marks 200 years since Smith's birth in Sharon, Vt., and his spiritual heirs have been commemorating his bicentennial throughout this year. Among the events marking the anniversary was the publication of a new scholarly biography, "Joseph Smith: Rough Stone Rolling," by Richard Lyman Bushman, a professor emeritus at Columbia University and a practicing Mormon. Bushman spoke with Beliefnet about his book and about the man who founded the Mormon Church.

Can you explain your book's subhead, "Rough Stone Rolling"?

These are words Joseph Smith used to describe himself, and then Brigham Young repeated them. I was drawn to them because I think they capture the incongruity of his inadequate preparation for any kind of leadership role and the rough style of personality and method that continued to the end of his life. And I think it points out the incongruities of a person with so little background who achieved so much.

How did someone with those incongruities create such a lasting institution?

It's the great puzzle of his life. Those who study prophetic figures in history--American as well as ancient history--point out the immense energy that floods into a person who comes to believe that God is speaking through them and that they are chosen instruments for some divine purpose. That confidence of Joseph Smith gave him all sorts of powers he might otherwise not have commanded. It overcame the intimidation he might have felt because of his lack of education and social standing. He just boldly went forward with these extravagant plans for a church and a city of Zion and a temple, and I think that sprang from his confidence that God was with him.

He also had a knack for speaking to the deep religious issues of his time--one of these being a hunger for the return of biblical powers. This is a Bible-believing people, and it's quite obvious that all the gifts that are promised in the New Testament and the tradition of direct revelation had petered out by their time, and there were a lot of people who wanted these returned. And Joseph Smith gave them what they were looking for: a prophet speaking for God.

What were the negative effects of his inadequate preparation?

For someone so unprepossessing to claim so much made him appear like a fraud. How can anyone say God has spoken to him when he has so few qualifications? And so people ridiculed him immediately, and even more were suspicious of him, thought this was a con operation and he was actually dangerous. So that incongruity set up great suspicions in the people who saw him in operation.

We think of Smith as a man with supreme confidence, but you write about a man with human doubts and insecurities. What were some of these?
This came as a surprise to me because he does seem so bold, almost impregnable, in his confidence, and it is true that people didn't intimidate him. But he needed people around him, I concluded. He was at his best when he was surrounded by people, believers or unbelievers. When he was alone, he became blue, as he said. He fell into melancholy. He had a kind of Abraham Lincoln character about him, and all the sorrows of his past and his mistakes would flood in on him, and he felt like he was very dependent on God to restore him, because he felt so weak and ineffective.

You've said that scholars are beginning to think of Smith in the context of a tradition of American prophecy. What do you mean by this?

Scholars are beginning to recognize that the prophetic voice recurs in America. It begins with Anne Hutchinson, who says quite bluntly that God was revealing his truth to her. This role is accessible in a Bible-believing culture, and the Bible is, of course, as significant as the U.S. Constitution for establishing the primers of American culture. So there are people who picked up that role, and Joseph Smith is preeminent among them. No one exceeds him in claiming prophetic powers. He produces Scripture and revives the biblical role. So that's one way to think of Joseph Smith, as stepping out into a tradition of American prophets.

How has Smith's image changed over the years among academics and the general public?

There are certain traditions that just persist forever. One is that he was a "colorful fraud," and even a "dangerous fraud," which was a stereotype that was locked on him almost immediately. He was classed with Muhammad as a man who thought he spoke for God and therefore wished to impose his will by force on people around him, and he was frequently compared to Muhammad in his own lifetime. That remains. I do think there is a growing willingness to respect Joseph Smith because of the success of the Mormon Church. With so many sensible, likeable people who are Mormons and who believe in him, it's not as easy to dismiss him as it was in the 19th century. So there's a look-and-see attitude: Hard to believe he did the things he claimed to do--seeing an angel and translating--and still, here are the consequences, the Mormon people. So there's a suspension of disbelief among some observers.

Belief Net: The Mormon Moment--Church of Jesus Christ of Latter-day Saints, Missionaries, Romney, DNA Evidence
http://www.beliefnet.com/story/171/story_17146_1.html

The Mormon Moment

Boom times for the once-persecuted Latter-day Saints

By Michael Kress

As they mark Pioneer Day this weekend, members of the Church of Jesus Christ of Latter-day Saints--better known to most Americans as Mormons--have a lot to celebrate. The holiday commemorates Mormons' arrival in the Valley of the Great Salt Lake on July 24, 1847, after an arduous journey from their previous home in Nauvoo, Illinois. The church's massive worldwide growth recently, Mormons' increased prominence in American public life, and this year's bicentennial of the birth of church founder Joseph Smith add up to particularly heady days for a church whose members were once persecuted for their faith.
At Smith's centennial 100 years ago, "The Latter-day Saints were so feared and hated that their missionaries were still being tied to trees and horsewhipped in the American South, and some were being shot," said Kathleen Flake, a professor at Vanderbilt University and author of "The Politics of Religious Identity: The Seating of Senator Reed Smoot, Mormon Apostle."

What a difference a century makes. Today, the leader of Senate Democrats, Harry Reid, is a Mormon. So are Mike Leavitt, Secretary of Health and Human Services, and Mitt Romney, governor of Massachusetts. And Mormons--along with plenty of non-Mormons--are abuzz about the possibility of the ultimate political prize: Romney is widely expected to run for president in 2008.

Joseph Smith's bicentennial is being marked in places like the Library of Congress, which co-sponsored with Brigham Young University a symposium on Smith's life and teachings. Several new academic biographies are being published, and the first volume of the Joseph Smith papers--a complete compilation of his writings--will be issued next year.

And with 5.5 million members in the United States, the LDS church has become the fourth-largest denomination in the country (up from fifth a year ago, having passed the Church of God in Christ), according to the National Council of Churches. Outside the U.S. and Canada, the church has grown more than fivefold to 6.3 million members since 1980--with nearly 10 percent of that in the past five years, according to figures provided by Mormon officials.

"The church has migrated from a provincial faith to a faith that can make itself at home in any space and every culture," said Jan Shipps, professor emeritus at Indiana University-Purdue University in Indianapolis.

Growth is strongest today in Latin America and West Africa--ironic since blacks were not allowed to join the Mormon priesthood (a term used for virtually all male church members) until 1978. "In those places it is a period of rapid cultural and economic change, and when that happens, there's always an openness to new movements," said Armand Mauss, professor emeritus of sociology and religion at Washington State University.

Belief Net: How Mormonism Differs from Traditional Christianity
http://www.beliefnet.com/features/mormonism.html

Mormonism vs. Traditional Christianity

How Mormon beliefs differ from those of traditional Christianity.

Scripture:
Christianity: Most Christians accept only the Bible as authoritative scripture.
Mormonism: Mormons believe the Bible is sacred. They add three other documents - The Pearl of Great Price, The Doctrine and Covenants, and The Book of Mormon - to their canon.

Creeds:
Christianity: For most Christians, church teachings stem from scripture. Leaders of the early church sought to specify the core of Christian belief in order to ensure the soundness of Christian teaching. At meetings in Nicea and Chalcedon in the fourth and fifth centuries, these leaders established the canon of scripture and proclaimed the basic elements of acceptable Christian doctrine.
Mormonism: Mormons do not affirm any of the creeds as stated, though they share some of the theological ideas in the creeds. They believe that after the death of the early apostles, the Christian church fell into apostasy. The church needed to be restored in the latter days, which Mormons believe began in 1820, when Mormon founder Joseph Smith was visited by God the Father and Jesus Christ.
Nature of Godhead:
Christianity: For most Christians, the Godhead is composed of three persons of one substance, power, and eternity - the Father, Son, and Holy Ghost. These Three are One. This triune God is without "body, parts or passions."
Mormonism: The LDS Church also teaches that the Father, Son, and Holy Ghost comprise the Godhead. But Mormons believe that God the Father and Jesus Christ have bodies of flesh and bones as tangible as human beings, while the Holy Ghost "is a personage of Spirit." Mormonism also teaches that God the Father was once a man. He is married to a "heavenly mother" and is the literal father of all mortal spirits.

Christ:
Christianity: Most Christians believe that Jesus Christ was "truly God and truly man, in whom the divine and human natures are perfectly and inseparably united." He is the only begotten Son of the Father, born of the Virgin Mary by the power of the Holy Spirit.
Mormonism: Mormons believe that Jesus is the Son of God in the most literal sense. He is eldest brother of all mortals and firstborn spirit child of God. He was Jehovah of the Old Testament but became Jesus Christ of the New Testament when he was born into mortality. They believe that from Mary, a mortal woman, he inherited the capacity to die, and from God, an exalted being, he inherited the capacity to live forever.

Salvation:
Christianity: According to the historic, apostolic Christian faith, salvation comes only by the grace of Christ, who "suffered, was crucified, died and was buried, to reconcile his Father to us, and to be a sacrifice, not only for original guilt, but also for actual sins of men."
Mormonism: Mormons also believe that salvation comes through Christ's atoning sacrifice. But they don't believe in "original sin" or in human depravity. Still, Latter-day Saints believe that fallen men and women do need redemption. And while works are a necessary condition, they are insufficient for salvation.

Suggested Reading: How Wide the Divide?: A Mormon and an Evangelical in Conversation, by Craig L. Blomberg and Stephen E. Robinson

Text by Peggy Stack

Weekly Standard: Mass. Gov. Mitt Romney could become the first Mormon in the White House.
http://www.beliefnet.com/story/171/story_17147_1.html

In 2008, Will It Be Mormon in America?

Mass. Gov. Mitt Romney could become the first Mormon in the White House.

By Terry Eastland

Excerpted with permission from The Weekly Standard.

You remember, or perhaps you don't, Sen. Orrin Hatch's 2000 presidential campaign. The senator talks about it in soft inflections, recalling this event and that debate. But especially he talks about what motivated him to run. Hatch, a member of the Church of Jesus Christ of Latter-day Saints, cites polling data from 1999 suggesting that 17 percent of Americans wouldn't vote for a Mormon for president under any circumstances. "One reason I ran was to knock down the prejudicial wall that exists" against Mormons, he says. "I wanted to make it easier for the next candidate of my faith."

That next candidate just might be Mitt Romney, the Republican governor of Massachusetts. But would his religion hurt him? Would he run into a prejudicial wall? Maybe, though there are reasons to think otherwise. Apparently some people so dislike Mormonism, or find it so odd, that they wouldn't vote for a Mormon. You can speculate about why that is. Maybe it's the hierarchical character of the church. Or maybe it's the church's secrecy about things like finances or temple rituals.
Then there's polygamy, introduced by Joseph Smith (who had 49 wives) and practiced until, a century ago, the church finally realized that the federal government would not tolerate it.

Church and State

And there's church and state: Some people fear that, deep down, Mormons want to gain control of the government and turn the United States into their kingdom of God.

Some of those objections might fade if voters got to know a Mormon of compelling political credentials, and came to feel comfortable with him. Other objections might have to be answered directly. In regard to polygamy, for example, it would be unfair to hang that history around the neck of Romney, the husband of one and only one wife since their marriage 36 years ago.

As for church and state, Mormons don't seem especially threatening to the prevailing order. The church doesn't endorse candidates. It stays out of partisan matters, refusing even to let individual churches or their membership lists be used for partisan purposes. It does encourage citizens to vote: Before elections the church urges members to consider the issues and candidates, "and then vote for the people that best represent their ideas of good government," according to a spokesman. Like most churches, it participates in law cases raising religious liberty issues, often partnering with religious bodies of diverse beliefs. Here, in a friend-of-the-court capacity, the church seeks to protect its ability to proselytize and to hire church officials and employees.

The church does occasionally speak out on what it calls "matters of principle." In the 1970s and early 1980s, it helped defeat the Equal Rights Amendment. More recently it has affirmed the traditional definition of marriage and contributed to referendum drives banning same-sex unions. The church seems to distinguish ballot measures from elections for office, seeing only the latter as partisan. In any case, the church's efforts in these respects have a common theme--protection of the traditional family.

Policy and Faith

Romney hasn't felt compelled to regard the church's guidance to its members as sufficient in matters of public policy. He emphasizes his independence in assessing issues. He points out that he doesn't drink, consistent with what his church advises, yet he signed a bill permitting liquor sales on Sunday because "there is nothing wrong with drinking alcohol if you do it properly and responsibly."

AP: Hinckley Discusses Legacy Of Joseph Smith
http://kutv.com/topstories/local_story_356185104.html

SHARON, Vt. - Mormon church President Gordon B. Hinckley paused and looked up at the granite obelisk, erected on a Vermont hillside where church founder Joseph Smith is believed to have been born. "Quite a monument," Hinckley said. "Beautiful."

On Friday, Mormons will celebrate the 200th anniversary of Smith's birth. Records from Smith family diaries place the birth here on Dec. 23, 1805. A hearthstone and a moss-covered front step are all that remain of the original 24-foot-by-22-foot home where the Smith family ran a small farm. The Church of Jesus Christ of Latter-day Saints built and dedicated the 38 1/2-foot monument to Smith - one foot of granite for each year of his life - in 1905.

Hinckley, the 15th president of the church, was joined by one of his sons, six of his grandchildren and two great-grandchildren when he paid an early visit to the monument Thursday.
"This is the ground. This is the place. This is where it happened, this was the starting place," Hinckley told reporters at the monument's visitors center.

Smith founded the church April 6, 1830, in Palmyra, N.Y. He claimed God had appeared to him in a vision 10 years earlier, instructing him to restore the ancient church to the Earth. He later said an angel, Moroni, also appeared to him and led him to a set of buried gold plates, which Smith translated into the Book of Mormon, the faith's foundational text. Today the church numbers a reported 12 million members in 160 countries.

Hinckley will speak to Mormons from Sharon via satellite Friday, part of a celebration that will originate from the church's Salt Lake City conference center and be broadcast into church meeting houses around the world. "There's something very significant about the fact that we're reaching back and forth, almost across this whole continent, in this bicentennial celebration," Hinckley said.

Hinckley said he didn't know what Smith would make of such communications. "I can't read his mind," Hinckley said, drawing laughter. "I think he would be very much amazed."

From checker at panix.com Fri Dec 23 20:46:20 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 23 Dec 2005 15:46:20 -0500 (EST)
Subject: [Paleopsych] A Non-Transhumanist Future
Message-ID:

Frank Forman here: We should ask, at every opportunity when facing a biocon, what their vision of the future is. I think they would be embarrassed to say that the future will be very much like the present (or for the reactionary conservatives, like some ideal point in the past). No more "Endless Frontiers" as imagined by Vannevar Bush in 1945. I'll send along Bush's (no relation to either George Bush, or we'd have heard about it) paper in a moment. It is full of the Central Planning fallacies that were perhaps at high tide in 1945, to be sure, but it's inspirational all the same.

From checker at panix.com Fri Dec 23 20:52:25 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 23 Dec 2005 15:52:25 -0500 (EST)
Subject: [Paleopsych] Vannevar Bush: Science the Endless Frontier
Message-ID:

Vannevar Bush: Science the Endless Frontier
http://www.nsf.gov/about/history/vbush1945.htm

A Report to the President by Vannevar Bush, Director of the Office of Scientific Research and Development, July 1945
(United States Government Printing Office, Washington: 1945)

TABLE OF CONTENTS

* Letter of Transmittal
* President Roosevelt's Letter
* Summary of the Report
* 1. Introduction:
  + Scientific Progress is Essential
  + Science is a Proper Concern of Government
  + Government Relations to Science - Past and Future
  + Freedom of Inquiry Must be Preserved
* 2. The War Against Disease:
  + In War
  + In Peace
  + Unsolved Problems
  + Broad and Basic Studies Needed
  + Coordinated Attack on Special Problems
  + Action is Necessary
* 3. Science and the Public Welfare:
  + Relation to National Security
  + Science and Jobs
  + The Importance of Basic Research
  + Centers of Basic Research
  + Research Within the Government
  + Industrial Research
  + International Exchange of Scientific Information
  + The Special Need for Federal Support
  + The Cost of a Program
* 4. Renewal of our Scientific Talent:
  + Nature of the Problem
  + A Note of Warning
  + The Wartime Deficit
  + Improve the Quality
  + Remove the Barriers
  + The Generation in Uniform Must Not be Lost
  + A Program
* 5. A Problem of Scientific Reconversion:
  + Effects of Mobilization of Science for War
  + Security Restrictions Should be Lifted Promptly
  + Need for Coordination
  + A Board to Control Release
  + Publication Should be Encouraged
* 6. The Means to the End:
  + New Responsibilities for Government
  + The Mechanism
  + Five Fundamentals
  + Military Research
  + National Research Foundation
    o I. Purposes
    o II. Members
    o III. Organizations
    o IV. Functions
    o V. Patent Policy
    o VI. Special Authority
    o VII. Budget
  + Action by Congress
* Appendices
  + 1. Committees Consulted
  + 2. Report of the Medical Advisory Committee, Dr. W. W. Palmer, Chairman
  + 3. Report of the Committee on Science and the Public Welfare, Dr. Isaiah Bowman, Chairman
  + 4. Report of the Committee on Discovery and Development of Scientific Talent, Mr. Henry Allen Moe, Chairman
  + 5. Report of the Committee on Publication of Scientific Information, Dr. Irvin Stewart, Chairman

___

LETTER OF TRANSMITTAL

OFFICE OF SCIENTIFIC RESEARCH AND DEVELOPMENT
1530 P Street, NW.
Washington 25, D.C.

JULY 25, 1945

DEAR MR. PRESIDENT:

In a letter dated November 17, 1944, President Roosevelt requested my recommendations on the following points:

(1) What can be done, consistent with military security, and with the prior approval of the military authorities, to make known to the world as soon as possible the contributions which have been made during our war effort to scientific knowledge?

(2) With particular reference to the war of science against disease, what can be done now to organize a program for continuing in the future the work which has been done in medicine and related sciences?

(3) What can the Government do now and in the future to aid research activities by public and private organizations?

(4) Can an effective program be proposed for discovering and developing scientific talent in American youth so that the continuing future of scientific research in this country may be assured on a level comparable to what has been done during the war?

It is clear from President Roosevelt's letter that in speaking of science he had in mind the natural sciences, including biology and medicine, and I have so interpreted his questions. Progress in other fields, such as the social sciences and the humanities, is likewise important; but the program for science presented in my report warrants immediate attention.

In seeking answers to President Roosevelt's questions I have had the assistance of distinguished committees specially qualified to advise in respect to these subjects. The committees have given these matters the serious attention they deserve; indeed, they have regarded this as an opportunity to participate in shaping the policy of the country with reference to scientific research. They have had many meetings and have submitted formal reports. I have been in close touch with the work of the committees and with their members throughout. I have examined all of the data they assembled and the suggestions they submitted on the points raised in President Roosevelt's letter.

Although the report which I submit herewith is my own, the facts, conclusions, and recommendations are based on the findings of the committees which have studied these questions. Since my report is necessarily brief, I am including as appendices the full reports of the committees.

A single mechanism for implementing the recommendations of the several committees is essential.
In proposing such a mechanism I have departed somewhat from the specific recommendations of the committees, but I have since been assured that the plan I am proposing is fully acceptable to the committee members.

The pioneer spirit is still vigorous within this nation. Science offers a largely unexplored hinterland for the pioneer who has the tools for his task. The rewards of such exploration both for the Nation and the individual are great. Scientific progress is one essential key to our security as a nation, to our better health, to more jobs, to a higher standard of living, and to our cultural progress.

Respectfully yours,

(s) V. Bush, Director

THE PRESIDENT OF THE UNITED STATES,
The White House,
Washington, D. C.

___

PRESIDENT ROOSEVELT'S LETTER

THE WHITE HOUSE
Washington, D. C.

November 17, 1944

DEAR DR. BUSH:

The Office of Scientific Research and Development, of which you are the Director, represents a unique experiment of team-work and cooperation in coordinating scientific research and in applying existing scientific knowledge to the solution of the technical problems paramount in war. Its work has been conducted in the utmost secrecy and carried on without public recognition of any kind; but its tangible results can be found in the communiques coming in from the battlefronts all over the world. Some day the full story of its achievements can be told.

There is, however, no reason why the lessons to be found in this experiment cannot be profitably employed in times of peace. The information, the techniques, and the research experience developed by the Office of Scientific Research and Development and by the thousands of scientists in the universities and in private industry, should be used in the days of peace ahead for the improvement of the national health, the creation of new enterprises bringing new jobs, and the betterment of the national standard of living.

It is with that objective in mind that I would like to have your recommendations on the following four major points:

First: What can be done, consistent with military security, and with the prior approval of the military authorities, to make known to the world as soon as possible the contributions which have been made during our war effort to scientific knowledge? The diffusion of such knowledge should help us stimulate new enterprises, provide jobs for our returning servicemen and other workers, and make possible great strides for the improvement of the national well-being.

Second: With particular reference to the war of science against disease, what can be done now to organize a program for continuing in the future the work which has been done in medicine and related sciences? The fact that the annual deaths in this country from one or two diseases alone are far in excess of the total number of lives lost by us in battle during this war should make us conscious of the duty we owe future generations.

Third: What can the Government do now and in the future to aid research activities by public and private organizations? The proper roles of public and of private research, and their interrelation, should be carefully considered.

Fourth: Can an effective program be proposed for discovering and developing scientific talent in American youth so that the continuing future of scientific research in this country may be assured on a level comparable to what has been done during the war?
New frontiers of the mind are before us, and if they are pioneered with the same vision, boldness, and drive with which we have waged this war we can create a fuller and more fruitful employment and a fuller and more fruitful life.

I hope that, after such consultation as you may deem advisable with your associates and others, you can let me have your considered judgment on these matters as soon as convenient - reporting on each when you are ready, rather than waiting for completion of your studies in all.

Very sincerely yours,

(s) FRANKLIN D. ROOSEVELT

Dr. VANNEVAR BUSH,
Office of Scientific Research and Development,
Washington, D. C.

___

SCIENCE - THE ENDLESS FRONTIER

"New frontiers of the mind are before us, and if they are pioneered with the same vision, boldness, and drive with which we have waged this war we can create a fuller and more fruitful employment and a fuller and more fruitful life."
-- FRANKLIN D. ROOSEVELT, November 17, 1944.

___

SUMMARY OF THE REPORT

SCIENTIFIC PROGRESS IS ESSENTIAL

Progress in the war against disease depends upon a flow of new scientific knowledge. New products, new industries, and more jobs require continuous additions to knowledge of the laws of nature, and the application of that knowledge to practical purposes. Similarly, our defense against aggression demands new knowledge so that we can develop new and improved weapons. This essential, new knowledge can be obtained only through basic scientific research.

Science can be effective in the national welfare only as a member of a team, whether the conditions be peace or war. But without scientific progress no amount of achievement in other directions can insure our health, prosperity, and security as a nation in the modern world.

For the War Against Disease

We have taken great strides in the war against disease. The death rate for all diseases in the Army, including overseas forces, has been reduced from 14.1 per thousand in the last war to 0.6 per thousand in this war. In the last 40 years life expectancy has increased from 49 to 65 years, largely as a consequence of the reduction in the death rates of infants and children. But we are far from the goal. The annual deaths from one or two diseases far exceed the total number of American lives lost in battle during this war. A large fraction of these deaths in our civilian population cut short the useful lives of our citizens. Approximately 7,000,000 persons in the United States are mentally ill and their care costs the public over $175,000,000 a year. Clearly much illness remains for which adequate means of prevention and cure are not yet known.

The responsibility for basic research in medicine and the underlying sciences, so essential to progress in the war against disease, falls primarily upon the medical schools and universities. Yet we find that the traditional sources of support for medical research in the medical schools and universities, largely endowment income, foundation grants, and private donations, are diminishing and there is no immediate prospect of a change in this trend. Meanwhile, the cost of medical research has been rising. If we are to maintain the progress in medicine which has marked the last 25 years, the Government should extend financial support to basic medical research in the medical schools and in universities.

For Our National Security

The bitter and dangerous battle against the U-boat was a battle of scientific techniques - and our margin of success was dangerously small.
The new eyes which radar has supplied can sometimes be blinded by new scientific developments. V-2 was countered only by capture of the launching sites.

We cannot again rely on our allies to hold off the enemy while we struggle to catch up. There must be more - and more adequate - military research in peacetime. It is essential that the civilian scientists continue in peacetime some portion of those contributions to national security which they have made so effectively during the war. This can best be done through a civilian-controlled organization with close liaison with the Army and Navy, but with funds direct from Congress, and the clear power to initiate military research which will supplement and strengthen that carried on directly under the control of the Army and Navy.

And for the Public Welfare

One of our hopes is that after the war there will be full employment. To reach that goal the full creative and productive energies of the American people must be released. To create more jobs we must make new and better and cheaper products. We want plenty of new, vigorous enterprises. But new products and processes are not born full-grown. They are founded on new principles and new conceptions which in turn result from basic scientific research. Basic scientific research is scientific capital. Moreover, we cannot any longer depend upon Europe as a major source of this scientific capital. Clearly, more and better scientific research is one essential to the achievement of our goal of full employment.

How do we increase this scientific capital? First, we must have plenty of men and women trained in science, for upon them depends both the creation of new knowledge and its application to practical purposes. Second, we must strengthen the centers of basic research which are principally the colleges, universities, and research institutes. These institutions provide the environment which is most conducive to the creation of new scientific knowledge and least under pressure for immediate, tangible results. With some notable exceptions, most research in industry and Government involves application of existing scientific knowledge to practical problems. It is only the colleges, universities, and a few research institutes that devote most of their research efforts to expanding the frontiers of knowledge.

Expenditures for scientific research by industry and Government increased from $140,000,000 in 1930 to $309,000,000 in 1940. Those for the colleges and universities increased from $20,000,000 to $31,000,000, while those for the research institutes declined from $5,200,000 to $4,500,000 during the same period. If the colleges, universities, and research institutes are to meet the rapidly increasing demands of industry and Government for new scientific knowledge, their basic research should be strengthened by use of public funds.

For science to serve as a powerful factor in our national welfare, applied research both in Government and in industry must be vigorous. To improve the quality of scientific research within the Government, steps should be taken to modify the procedures for recruiting, classifying, and compensating scientific personnel in order to reduce the present handicap of governmental scientific bureaus in competing with industry and the universities for top-grade scientific talent.
To provide coordination of the common scientific activities of these governmental agencies as to policies and budgets, a permanent Science Advisory Board should be created to advise the executive and legislative branches of Government on these matters. The most important ways in which the Government can promote industrial research are to increase the flow of new scientific knowledge through support of basic research, and to aid in the development of scientific talent. In addition, the Government should provide suitable incentives to industry to conduct research, (a) by clarification of present uncertainties in the Internal Revenue Code in regard to the deductibility of research and development expenditures as current charges against net income, and (b) by strengthening the patent system so as to eliminate uncertainties which now bear heavily on small industries and so as to prevent abuses which reflect discredit upon a basically sound system. In addition, ways should be found to cause the benefits of basic research to reach industries which do not now utilize new scientific knowledge. WE MUST RENEW OUR SCIENTIFIC TALENT The responsibility for the creation of new scientific knowledge - and for most of its application - rests on that small body of men and women who understand the fundamental laws of nature and are skilled in the techniques of scientific research. We shall have rapid or slow advance on any scientific frontier depending on the number of highly qualified and trained scientists exploring it. The deficit of science and technology students who, but for the war, would have received bachelor's degrees is about 150,000. It is estimated that the deficit of those obtaining advanced degrees in these fields will amount in 1955 to about 17,000 - for it takes at least 6 years from college entry to achieve a doctor's degree or its equivalent in science or engineering. The real ceiling on our productivity of new scientific knowledge and its application in the war against disease, and the development of new products and new industries, is the number of trained scientists available. The training of a scientist is a long and expensive process. Studies clearly show that there are talented individuals in every part of the population, but with few exceptions, those without the means of buying higher education go without it. If ability, and not the circumstance of family fortune, determines who shall receive higher education in science, then we shall be assured of constantly improving quality at every level of scientific activity. The Government should provide a reasonable number of undergraduate scholarships and graduate fellowships in order to develop scientific talent in American youth. The plans should be designed to attract into science only that proportion of youthful talent appropriate to the needs of science in relation to the other needs of the nation for high abilities. Including Those in Uniform The most immediate prospect of making up the deficit in scientific personnel is to develop the scientific talent in the generation now in uniform. Even if we should start now to train the current crop of high-school graduates none would complete graduate studies before 1951. 
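The 1951 date is a direct consequence of the report's own six-year figure. As a quick check (the arithmetic below is not in the original; it simply applies the stated minimum of six years from college entry to the doctorate, with 1945 as the earliest postwar entering year):

\[ 1945 + 6 = 1951, \qquad 1946 + 6 = 1952, \ \ldots \]

Even the first postwar entering class could therefore yield no new doctorates before 1951, and each later class finishes a year later still - hence the emphasis on the generation already in uniform.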
The Armed Services should comb their records for men who, prior to or during the war, have given evidence of talent for science, and make prompt arrangements, consistent with current discharge plans, for ordering those who remain in uniform, as soon as militarily possible, to duty at institutions here and overseas where they can continue their scientific education. Moreover, the Services should see that those who study overseas have the benefit of the latest scientific information resulting from research during the war. THE LID MUST BE LIFTED While most of the war research has involved the application of existing scientific knowledge to the problems of war, rather than basic research, there has been accumulated a vast amount of information relating to the application of science to particular problems. Much of this can be used by industry. It is also needed for teaching in the colleges and universities here and in the Armed Forces Institutes overseas. Some of this information must remain secret, but most of it should be made public as soon as there is ground for belief that the enemy will not be able to turn it against us in this war. To select that portion which should be made public, to coordinate its release, and definitely to encourage its publication, a Board composed of Army, Navy, and civilian scientific members should be promptly established. A PROGRAM FOR ACTION The Government should accept new responsibilities for promoting the flow of new scientific knowledge and the development of scientific talent in our youth. These responsibilities are the proper concern of the Government, for they vitally affect our health, our jobs, and our national security. It is in keeping also with basic United States policy that the Government should foster the opening of new frontiers and this is the modern way to do it. For many years the Government has wisely supported research in the agricultural colleges and the benefits have been great. The time has come when such support should be extended to other fields. The effective discharge of these new responsibilities will require the full attention of some over-all agency devoted to that purpose. There is not now in the permanent Governmental structure receiving its funds from Congress an agency adapted to supplementing the support of basic research in the colleges, universities, and research institutes, both in medicine and the natural sciences, adapted to supporting research on new weapons for both Services, or adapted to administering a program of science scholarships and fellowships. Therefore I recommend that a new agency for these purposes be established. Such an agency should be composed of persons of broad interest and experience, having an understanding of the peculiarities of scientific research and scientific education. It should have stability of funds so that long-range programs may be undertaken. It should recognize that freedom of inquiry must be preserved and should leave internal control of policy, personnel, and the method and scope of research to the institutions in which it is carried on. It should be fully responsible to the President and through him to the Congress for its program. Early action on these recommendations is imperative if this nation is to meet the challenge of science in the crucial years ahead. On the wisdom with which we bring science to bear in the war against disease, in the creation of new industries, and in the strengthening of our Armed Forces depends in large measure our future as a nation. 
Chapter 1 INTRODUCTION Scientific Progress is Essential We all know how much the new drug, penicillin, has meant to our grievously wounded men on the grim battlefronts of this war - the countless lives it has saved - the incalculable suffering which its use has prevented. Science and the great practical genius of this nation made this achievement possible. Some of us know the vital role which radar has played in bringing the United Nations to victory over Nazi Germany and in driving the Japanese steadily back from their island bastions. Again it was painstaking scientific research over many years that made radar possible. What we often forget are the millions of pay envelopes on a peacetime Saturday night which are filled because new products and new industries have provided jobs for countless Americans. Science made that possible, too. In 1939 millions of people were employed in industries which did not even exist at the close of the last war - radio, air conditioning, rayon and other synthetic fibers, and plastics are examples of the products of these industries. But these things do not mark the end of progress - they are but the beginning if we make full use of our scientific resources. New manufacturing industries can be started and many older industries greatly strengthened and expanded if we continue to study nature's laws and apply new knowledge to practical purposes. Great advances in agriculture are also based upon scientific research. Plants which are more resistant to disease and are adapted to short growing seasons, the prevention and cure of livestock diseases, the control of our insect enemies, better fertilizers, and improved agricultural practices, all stem from painstaking scientific research. Advances in science when put to practical use mean more jobs, higher wages, shorter hours, more abundant crops, more leisure for recreation, for study, for learning how to live without the deadening drudgery which has been the burden of the common man for ages past. Advances in science will also bring higher standards of living, will lead to the prevention or cure of diseases, will promote conservation of our limited national resources, and will assure means of defense against aggression. But to achieve these objectives - to secure a high level of employment, to maintain a position of world leadership - the flow of new scientific knowledge must be both continuous and substantial. Our population increased from 75 million to 130 million between 1900 and 1940. In some countries comparable increases have been accompanied by famine. In this country the increase has been accompanied by more abundant food supply, better living, more leisure, longer life, and better health. This is, largely, the product of three factors - the free play of initiative of a vigorous people under democracy, the heritage of great national wealth, and the advance of science and its application. Science, by itself, provides no panacea for individual, social, and economic ills. It can be effective in the national welfare only as a member of a team, whether the conditions be peace or war. But without scientific progress no amount of achievement in other directions can insure our health, prosperity, and security as a nation in the modern world. Science Is a Proper Concern of Government It has been basic United States policy that Government should foster the opening of new frontiers. It opened the seas to clipper ships and furnished land for pioneers.
Although these frontiers have more or less disappeared, the frontier of science remains. It is in keeping with the American tradition - one which has made the United States great - that new frontiers shall be made accessible for development by all American citizens. Moreover, since health, well-being, and security are proper concerns of Government, scientific progress is, and must be, of vital interest to Government. Without scientific progress the national health would deteriorate; without scientific progress we could not hope for improvement in our standard of living or for an increased number of jobs for our citizens; and without scientific progress we could not have maintained our liberties against tyranny. Government Relations to Science - Past and Future From early days the Government has taken an active interest in scientific matters. During the nineteenth century the Coast and Geodetic Survey, the Naval Observatory, the Department of Agriculture, and the Geological Survey were established. Through the Land Grant College acts the Government has supported research in state institutions for more than 80 years on a gradually increasing scale. Since 1900 a large number of scientific agencies have been established within the Federal Government, until in 1939 they numbered more than 40. Much of the scientific research done by Government agencies is intermediate in character between the two types of work commonly referred to as basic and applied research. Almost all Government scientific work has ultimate practical objectives but, in many fields of broad national concern, it commonly involves long-term investigation of a fundamental nature. Generally speaking, the scientific agencies of Government are not so concerned with immediate practical objectives as are the laboratories of industry nor, on the other hand, are they as free to explore any natural phenomena without regard to possible economic applications as are the educational and private research institutions. Government scientific agencies have splendid records of achievement, but they are limited in function. We have no national policy for science. The Government has only begun to utilize science in the nation's welfare. There is no body within the Government charged with formulating or executing a national science policy. There are no standing committees of the Congress devoted to this important subject. Science has been in the wings. It should be brought to the center of the stage - for in it lies much of our hope for the future. There are areas of science in which the public interest is acute but which are likely to be cultivated inadequately if left without more support than will come from private sources. These areas - such as research on military problems, agriculture, housing, public health, certain medical research, and research involving expensive capital facilities beyond the capacity of private institutions - should be advanced by active Government support. To date, with the exception of the intensive war research conducted by the Office of Scientific Research and Development, such support has been meager and intermittent. For reasons presented in this report we are entering a period when science needs and deserves increased support from public funds. Freedom of Inquiry Must Be Preserved The publicly and privately supported colleges, universities, and research institutes are the centers of basic research. They are the wellsprings of knowledge and understanding.
As long as they are vigorous and healthy and their scientists are free to pursue the truth wherever it may lead, there will be a flow of new scientific knowledge to those who can apply it to practical problems in Government, in industry, or elsewhere. Many of the lessons learned in the war-time application of science under Government can be profitably applied in peace. The Government is peculiarly fitted to perform certain functions, such as the coordination and support of broad programs on problems of great national importance. But we must proceed with caution in carrying over the methods which work in wartime to the very different conditions of peace. We must remove the rigid controls which we have had to impose, and recover freedom of inquiry and that healthy competitive scientific spirit so necessary for expansion of the frontiers of scientific knowledge. Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown. Freedom of inquiry must be preserved under any plan for Government support of science in accordance with the Five Fundamentals listed later in this report. The study of the momentous questions presented in President Roosevelt's letter has been made by able committees working diligently. This report presents conclusions and recommendations based upon the studies of these committees which appear in full as the appendices. Only in the creation of one over-all mechanism rather than several does this report depart from the specific recommendations of the committees. The members of the committees have reviewed the recommendations in regard to the single mechanism and have found this plan thoroughly acceptable. Chapter 2 THE WAR AGAINST DISEASE In War The death rate for all diseases in the Army, including the overseas forces, has been reduced from 14.1 per thousand in the last war to 0.6 per thousand in this war. Such ravaging diseases as yellow fever, dysentery, typhus, tetanus, pneumonia, and meningitis have been all but conquered by penicillin and the sulfa drugs, the insecticide DDT, better vaccines, and improved hygienic measures. Malaria has been controlled. There has been dramatic progress in surgery. The striking advances in medicine during the war have been possible only because we had a large backlog of scientific data accumulated through basic research in many scientific fields in the years before the war. In Peace In the last 40 years life expectancy in the United States has increased from 49 to 65 years largely as a consequence of the reduction in the death rates of infants and children; in the last 20 years the death rate from the diseases of childhood has been reduced 87 percent. Diabetes has been brought under control by insulin, pernicious anemia by liver extracts; and the once widespread deficiency diseases have been much reduced, even in the lowest income groups, by accessory food factors and improvement of diet. Notable advances have been made in the early diagnosis of cancer, and in the surgical and radiation treatment of the disease. These results have been achieved through a great amount of basic research in medicine and the preclinical sciences, and by the dissemination of this new scientific knowledge through the physicians and medical services and public health agencies of the country. In this cooperative endeavor the pharmaceutical industry has played an important role, especially during the war.
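To make the scale of these gains explicit (the percentages below are not stated in the text; they follow directly from the figures just given):

\[ \frac{14.1 - 0.6}{14.1} \approx 0.96 \quad \text{- about a 96 percent reduction in the Army death rate from disease,} \]

\[ \frac{65 - 49}{49} \approx 0.33 \quad \text{- roughly a one-third gain in life expectancy over 40 years.} \]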
All of the medical and public health groups share credit for these achievements; they form interdependent members of a team. Progress in combating disease depends upon an expanding body of new scientific knowledge. Unsolved Problems As President Roosevelt observed, the annual deaths from one or two diseases are far in excess of the total number of American lives lost in battle during this war. A large fraction of these deaths in our civilian population cut short the useful lives of our citizens. This is our present position despite the fact that in the last three decades notable progress has been made in civilian medicine. The reduction in death rate from diseases of childhood has shifted the emphasis to the middle and old age groups, particularly to the malignant diseases and the degenerative processes prominent in later life. Cardiovascular disease, including chronic disease of the kidneys, arteriosclerosis, and cerebral hemorrhage, now accounts for 45 percent of the deaths in the United States. Second are the infectious diseases, and third is cancer. Added to these are many maladies (for example, the common cold, arthritis, asthma and hay fever, peptic ulcer) which, though infrequently fatal, cause incalculable disability. Another aspect of the changing emphasis is the increase of mental diseases. Approximately 7 million persons in the United States are mentally ill; more than one-third of the hospital beds are occupied by such persons, at a cost of $175 million a year. Each year 125,000 new mental cases are hospitalized. Notwithstanding great progress in prolonging the span of life and relief of suffering, much illness remains for which adequate means of prevention and cure are not yet known. While additional physicians, hospitals, and health programs are needed, their full usefulness cannot be attained unless we enlarge our knowledge of the human organism and the nature of disease. Any extension of medical facilities must be accompanied by an expanded program of medical training and research. Broad and Basic Studies Needed Discoveries pertinent to medical progress have often come from remote and unexpected sources, and it is certain that this will be true in the future. It is wholly probable that progress in the treatment of cardiovascular disease, renal disease, cancer, and similar refractory diseases will be made as the result of fundamental discoveries in subjects unrelated to those diseases, and perhaps entirely unexpected by the investigator. Further progress requires that the entire front of medicine and the underlying sciences of chemistry, physics, anatomy, biochemistry, physiology, pharmacology, bacteriology, pathology, parasitology, etc., be broadly developed. Progress in the war against disease results from discoveries in remote and unexpected fields of medicine and the underlying sciences. Coordinated Attack on Special Problems Penicillin reached our troops in time to save countless lives because the Government coordinated and supported the program of research and development on the drug. The development moved from the early laboratory stage to large scale production and use in a fraction of the time it would have taken without such leadership. The search for better anti-malarials, which proceeded at a moderate tempo for many years, has been accelerated enormously by Government support during the war. Other examples can be cited in which medical progress has been similarly advanced.
In achieving these results, the Government has provided over-all coordination and support; it has not dictated how the work should be done within any cooperating institution. Discovery of new therapeutic agents and methods usually results from basic studies in medicine and the underlying sciences. The development of such materials and methods to the point at which they become available to medical practitioners requires teamwork involving the medical schools, the science departments of universities, Government and the pharmaceutical industry. Government initiative, support, and coordination can be very effective in this development phase. Government initiative and support for the development of newly discovered therapeutic materials and methods can reduce the time required to bring the benefits to the public. Action is Necessary The primary place for medical research is in the medical schools and universities. In some cases coordinated direct attack on special problems may be made by teams of investigators, supplementing similar attacks carried on by the Army, Navy, Public Health Service, and other organizations. Apart from teaching, however, the primary obligation of the medical schools and universities is to continue the traditional function of such institutions, namely, to provide the individual worker with an opportunity for free, untrammeled study of nature, in the directions and by the methods suggested by his interests, curiosity, and imagination. The history of medical science teaches clearly the supreme importance of affording the prepared mind complete freedom for the exercise of initiative. It is the special province of the medical schools and universities to foster medical research in this way - a duty which cannot be shifted to government agencies, industrial organizations, or to any other institutions. Where clinical investigations of the human body are required, the medical schools are in a unique position, because of their close relationship to teaching hospitals, to integrate such investigations with the work of the departments of preclinical science, and to impart new knowledge to physicians in training. At the same time, the teaching hospitals are especially well qualified to carry on medical research because of their close connection with the medical schools, on which they depend for staff and supervision. Between World War I and World War II the United States overtook all other nations in medical research and assumed a position of world leadership. To a considerable extent this progress reflected the liberal financial support from university endowment income, gifts from individuals, and foundation grants in the 20's. The growth of research departments in medical schools has been very uneven, however, and in consequence most of the important work has been done in a few large schools. This should be corrected by building up the weaker institutions, especially in regions which now have no strong medical research activities. The traditional sources of support for medical research, largely endowment income, foundation grants, and private donations, are diminishing, and there is no immediate prospect of a change in this trend. Meanwhile, research costs have steadily risen. More elaborate and expensive equipment is required, supplies are more costly, and the wages of assistants are higher. Industry is only to a limited extent a source of funds for basic medical research.
It is clear that if we are to maintain the progress in medicine which has marked the last 25 years, the Government should extend financial support to basic medical research in the medical schools and in the universities, through grants both for research and for fellowships. The amount which can be effectively spent in the first year should not exceed 5 million dollars. After a program is under way perhaps 20 million dollars a year can be spent effectively. Chapter 3 SCIENCE AND THE PUBLIC WELFARE Relation to National Security In this war it has become clear beyond all doubt that scientific research is absolutely essential to national security. The bitter and dangerous battle against the U-boat was a battle of scientific techniques - and our margin of success was dangerously small. The new eyes which radar supplied to our fighting forces quickly evoked the development of scientific countermeasures which could often blind them. This again represents the ever continuing battle of techniques. The V-1 attack on London was finally defeated by three devices developed during this war and used superbly in the field. V-2 was countered only by the capture of the launching sites. The Secretaries of War and Navy recently stated in a joint letter to the National Academy of Sciences: This war emphasizes three facts of supreme importance to national security: (1) Powerful new tactics of defense and offense are developed around new weapons created by scientific and engineering research; (2) the competitive time element in developing those weapons and tactics may be decisive; (3) war is increasingly total war, in which the armed services must be supplemented by active participation of every element of civilian population. To insure continued preparedness along farsighted technical lines, the research scientists of the country must be called upon to continue in peacetime some substantial portion of those types of contribution to national security which they have made so effectively during the stress of the present war * * *. There must be more - and more adequate - military research during peacetime. We cannot again rely on our allies to hold off the enemy while we struggle to catch up. Further, it is clear that only the Government can undertake military research; for it must be carried on in secret, much of it has no commercial value, and it is expensive. The obligation of Government to support research on military problems is inescapable. Modern war requires the use of the most advanced scientific techniques. Many of the leaders in the development of radar are scientists who before the war had been exploring the nucleus of the atom. While there must be increased emphasis on science in the future training of officers for both the Army and Navy, such men cannot be expected to be specialists in scientific research. Therefore a professional partnership between the officers in the Services and civilian scientists is needed. The Army and Navy should continue to carry on research and development on the improvement of current weapons. For many years the National Advisory Committee for Aeronautics has supplemented the work of the Army and Navy by conducting basic research on the problems of flight. There should now be permanent civilian activity to supplement the research work of the Services in other scientific fields so as to carry on in time of peace some part of the activities of the emergency war-time Office of Scientific Research and Development. 
Military preparedness requires a permanent independent, civilian-controlled organization, having close liaison with the Army and Navy, but with funds directly from Congress and with the clear power to initiate military research which will supplement and strengthen that carried on directly under the control of the Army and Navy. Science and Jobs One of our hopes is that after the war there will be full employment, and that the production of goods and services will serve to raise our standard of living. We do not know yet how we shall reach that goal, but it is certain that it can be achieved only by releasing the full creative and productive energies of the American people. Surely we will not get there by standing still, merely by making the same things we made before and selling them at the same or higher prices. We will not get ahead in international trade unless we offer new and more attractive and cheaper products. Where will these new products come from? How will we find ways to make better products at lower cost? The answer is clear. There must be a stream of new scientific knowledge to turn the wheels of private and public enterprise. There must be plenty of men and women trained in science and technology for upon them depend both the creation of new knowledge and its application to practical purposes. More and better scientific research is essential to the achievement of our goal of full employment. The Importance of Basic Research Basic research is performed without thought of practical ends. It results in general knowledge and an understanding of nature and its laws. This general knowledge provides the means of answering a large number of important practical problems, though it may not give a complete specific answer to any one of them. The function of applied research is to provide such complete answers. The scientist doing basic research may not be at all interested in the practical applications of his work, yet the further progress of industrial development would eventually stagnate if basic scientific research were long neglected. One of the peculiarities of basic science is the variety of paths which lead to productive advance. Many of the most important discoveries have come as a result of experiments undertaken with very different purposes in mind. Statistically it is certain that important and highly useful discoveries will result from some fraction of the undertakings in basic science; but the results of any one particular investigation cannot be predicted with accuracy. Basic research leads to new knowledge. It provides scientific capital. It creates the fund from which the practical applications of knowledge must be drawn. New products and new processes do not appear full-grown. They are founded on new principles and new conceptions, which in turn are painstakingly developed by research in the purest realms of science. Today, it is truer than ever that basic research is the pacemaker of technological progress. In the nineteenth century, Yankee mechanical ingenuity, building largely upon the basic discoveries of European scientists, could greatly advance the technical arts. Now the situation is different.
A nation which depends upon others for its new basic scientific knowledge will be slow in its industrial progress and weak in its competitive position in world trade, regardless of its mechanical skill. Centers of Basic Research Publicly and privately supported colleges and universities and the endowed research institutes must furnish both the new scientific knowledge and the trained research workers. These institutions are uniquely qualified by tradition and by their special characteristics to carry on basic research. They are charged with the responsibility of conserving the knowledge accumulated by the past, imparting that knowledge to students, and contributing new knowledge of all kinds. It is chiefly in these institutions that scientists may work in an atmosphere which is relatively free from the adverse pressure of convention, prejudice, or commercial necessity. At their best they provide the scientific worker with a strong sense of solidarity and security, as well as a substantial degree of personal intellectual freedom. All of these factors are of great importance in the development of new knowledge, since much of new knowledge is certain to arouse opposition because of its tendency to challenge current beliefs or practice. Industry is generally inhibited by preconceived goals, by its own clearly defined standards, and by the constant pressure of commercial necessity. Satisfactory progress in basic science seldom occurs under conditions prevailing in the normal industrial laboratory. There are some notable exceptions, it is true, but even in such cases it is rarely possible to match the universities in respect to the freedom which is so important to scientific discovery. To serve effectively as the centers of basic research these institutions must be strong and healthy. They must attract our best scientists as teachers and investigators. They must offer research opportunities and sufficient compensation to enable them to compete with industry and government for the cream of scientific talent. During the past 25 years there has been a great increase in industrial research involving the application of scientific knowledge to a multitude of practical purposes - thus providing new products, new industries, new investment opportunities, and millions of jobs. During the same period research within Government - again largely applied research - has also been greatly expanded. In the decade from 1930 to 1940 expenditures for industrial research increased from $116,000,000 to $240,000,000 and those for scientific research in Government rose from $24,000,000 to $69,000,000. During the same period expenditures for scientific research in the colleges and universities increased from $20,000,000 to $31,000,000, while those in the endowed research institutes declined from $5,200,000 to $4,500,000. These are the best estimates available. The figures have been taken from a variety of sources and arbitrary definitions have necessarily been applied, but it is believed that they may be accepted as indicating the following trends: * (a) Expenditures for scientific research by industry and Government - almost entirely applied research - have more than doubled between 1930 and 1940. Whereas in 1930 they were six times as large as the research expenditures of the colleges, universities, and research institutes, by 1940 they were nearly ten times as large. 
* (b) While expenditures for scientific research in the colleges and universities increased by one-half during this period, those for the endowed research institutes have slowly declined. If the colleges, universities, and research institutes are to meet the rapidly increasing demands of industry and Government for new scientific knowledge, their basic research should be strengthened by use of public funds. Research Within the Government Although there are some notable exceptions, most research conducted within governmental laboratories is of an applied nature. This has always been true and is likely to remain so. Hence Government, like industry, is dependent on the colleges, universities, and research institutes to expand the basic scientific frontiers and to furnish trained scientific investigators. Research within the Government represents an important part of our total research activity and needs to be strengthened and expanded after the war. Such expansion should be directed to fields of inquiry and service which are of public importance and are not adequately carried on by private organizations. The most important single factor in scientific and technical work is the quality of the personnel employed. The procedures currently followed within the Government for recruiting, classifying and compensating such personnel place the Government under a severe handicap in competing with industry and the universities for first-class scientific talent. Steps should be taken to reduce that handicap. In the Government the arrangement whereby the numerous scientific agencies form parts of larger departments has both advantages and disadvantages. But the present pattern is firmly established and there is much to be said for it. There is, however, a very real need for some measure of coordination of the common scientific activities of these agencies, both as to policies and budgets, and at present no such means exist. A permanent Science Advisory Board should be created to consult with these scientific bureaus and to advise the executive and legislative branches of Government as to the policies and budgets of Government agencies engaged in scientific research. This board should be composed of disinterested scientists who have no connection with the affairs of any Government agency. Industrial Research The simplest and most effective way in which the Government can strengthen industrial research is to support basic research and to develop scientific talent. The benefits of basic research do not reach all industries equally or at the same speed. Some small enterprises never receive any of the benefits. It has been suggested that the benefits might be better utilized if "research clinics" for such enterprises were to be established. Businessmen would thus be able to make more use of research than they now do. This proposal is certainly worthy of further study. One of the most important factors affecting the amount of industrial research is the income-tax law. Government action in respect to this subject will affect the rate of technical progress in industry. Uncertainties as to the attitude of the Bureau of Internal Revenue regarding the deduction of research and development expenses are a deterrent to research expenditure. These uncertainties arise from lack of clarity of the tax law as to the proper treatment of such costs.
The Internal Revenue Code should be amended to remove present uncertainties in regard to the deductibility of research and development expenditures as current charges against net income. Research is also affected by the patent laws. They stimulate new invention and they make it possible for new industries to be built around new devices or new processes. These industries generate new jobs and new products, all of which contribute to the welfare and the strength of the country. Yet, uncertainties in the operation of the patent laws have impaired the ability of small industries to translate new ideas into processes and products of value to the nation. These uncertainties are, in part, attributable to the difficulties and expense incident to the operation of the patent system as it presently exists. These uncertainties are also attributable to the existence of certain abuses, which have appeared in the use of patents. The abuses should be corrected. They have led to extravagantly critical attacks which tend to discredit a basically sound system. It is important that the patent system continue to serve the country in the manner intended by the Constitution, for it has been a vital element in the industrial vigor which has distinguished this nation. The National Patent Planning Commission has reported on this subject. In addition, a detailed study, with recommendations concerning the extent to which modifications should be made in our patent laws is currently being made under the leadership of the Secretary of Commerce. It is recommended, therefore, that specific action with regard to the patent laws be withheld pending the submission of the report devoted exclusively to that subject. International Exchange of Scientific Information International exchange of scientific information is of growing importance. Increasing specialization of science will make it more important than ever that scientists in this country keep continually ahead of developments abroad. In addition a flow of scientific information constitutes one facet of general international accord which should be cultivated. The Government can accomplish significant results in several ways: by aiding in the arrangement of international science congresses, in the official accrediting of American scientists to such gatherings, in the official reception of foreign scientists of standing in this country, in making possible a rapid flow of technical information, including translation service, and possibly in the provision of international fellowships. Private foundations and other groups partially fulfill some of these functions at present, but their scope is incomplete and inadequate. The Government should take an active role in promoting the international flow of scientific information. The Special Need for Federal Support We can no longer count on ravaged Europe as a source of fundamental knowledge. In the past we have devoted much of our best efforts to the application of such knowledge which has been discovered abroad. In the future we must pay increased attention to discovering this knowledge for ourselves particularly since the scientific applications of the future will be more than ever dependent upon such basic knowledge. New impetus must be given to research in our country. Such impetus can come promptly only from the Government. Expenditures for research in the colleges, universities, and research institutes will otherwise not be able to meet the additional demands of increased public need for research. 
Further, we cannot expect industry adequately to fill the gap. Industry will fully rise to the challenge of applying new knowledge to new products. The commercial incentive can be relied upon for that. But basic research is essentially noncommercial in nature. It will not receive the attention it requires if left to industry. For many years the Government has wisely supported research in the agricultural colleges and the benefits have been great. The time has come when such support should be extended to other fields. In providing government support, however, we must endeavor to preserve as far as possible the private support of research both in industry and in the colleges, universities, and research institutes. These private sources should continue to carry their share of the financial burden. The Cost of a Program It is estimated that an adequate program for Federal support of basic research in the colleges, universities, and research institutes and for financing important applied research in the public interest, will cost about 10 million dollars at the outset and may rise to about 50 million dollars annually when fully underway at the end of perhaps 5 years. Chapter 4 RENEWAL OF OUR SCIENTIFIC TALENT Nature of the Problem The responsibility for the creation of new scientific knowledge rests on that small body of men and women who understand the fundamental laws of nature and are skilled in the techniques of scientific research. While there will always be the rare individual who will rise to the top without benefit of formal education and training, he is the exception and even he might make a more notable contribution if he had the benefit of the best education we have to offer. I cannot improve on President Conant's statement that: "* * * in every section of the entire area where the word science may properly be applied, the limiting factor is a human one. We shall have rapid or slow advance in this direction or in that depending on the number of really first-class men who are engaged in the work in question. * * * So in the last analysis, the future of science in this country will be determined by our basic educational policy." A Note of Warning It would be folly to set up a program under which research in the natural sciences and medicine was expanded at the cost of the social sciences, humanities, and other studies so essential to national well-being. This point has been well stated by the Moe Committee as follows: "As citizens, as good citizens, we therefore think that we must have in mind while examining the question before us - the discovery and development of scientific talent - the needs of the whole national welfare. We could not suggest to you a program which would syphon into science and technology a disproportionately large share of the nation's highest abilities, without doing harm to the nation, nor, indeed, without crippling science. * * * Science cannot live by and unto itself alone." * * * * * * * * * * * * * * "The uses to which high ability in youth can be put are various and, to a large extent, are determined by social pressures and rewards. When aided by selective devices for picking out scientifically talented youth, it is clear that large sums of money for scholarships and fellowships and monetary and other rewards in disproportionate amounts might draw into science too large a percentage of the nation's high ability, with a result highly detrimental to the nation and to science.
Plans for the discovery and development of scientific talent must be related to the other needs of society for high ability. * * * There is never enough ability at high levels to satisfy all the needs of the nation; we would not seek to draw into science any more of it than science's proportionate share." The Wartime Deficit Among the young men and women qualified to take up scientific work, since 1940 there have been few students over 18, except some in medicine and engineering in Army and Navy programs and a few 4-F's, who have followed an integrated scientific course of studies. Neither our allies nor, so far as we know, our enemies have done anything so radical as thus to suspend almost completely their educational activities in scientific pursuits during the war period. Two great principles have guided us in this country as we have turned our full efforts to war. First, the sound democratic principle that there should be no favored classes or special privilege in a time of peril, that all should be ready to sacrifice equally; second, the tenet that every man should serve in the capacity in which his talents and experience can best be applied for the prosecution of the war effort. In general we have held these principles well in balance. In my opinion, however, we have drawn too heavily for nonscientific purposes upon the great natural resource which resides in our trained young scientists and engineers. For the general good of the country too many such men have gone into uniform, and their talents have not always been fully utilized. With the exception of those men engaged in war research, all physically fit students at graduate level have been taken into the armed forces. Those ready for college training in the sciences have not been permitted to enter upon that training. There is thus an accumulating deficit of trained research personnel which will continue for many years. The deficit of science and technology students who, but for the war, would have received bachelor's degrees is about 150,000. The deficit of those holding advanced degrees - that is, young scholars trained to the point where they are capable of carrying on original work - has been estimated as amounting to about 17,000 by 1955 in chemistry, engineering, geology, mathematics, physics, psychology, and the biological sciences. With mounting demands for scientists both for teaching and for research, we will enter the post-war period with a serious deficit in our trained scientific personnel. Improve the Quality Confronted with these deficits, we are compelled to look to the use of our basic human resources and formulate a program which will assure their conservation and effective development. The committee advising me on scientific personnel has stated the following principle which should guide our planning: "If we were all-knowing and all-wise we might, but we think probably not, write you a plan whereby there might be selected for training, which they otherwise would not get, those who, 20 years hence, would be scientific leaders, and we might not bother about any lesser manifestations of scientific ability. But in the present state of knowledge a plan cannot be made which will select, and assist, only those young men and women who will give the top future leadership to science. To get top leadership there must be a relatively large base of high ability selected for development and then successive skimmings of the cream of ability at successive times and at higher levels. 
No one can select from the bottom those who will be the leaders at the top because unmeasured and unknown factors enter into scientific, or any, leadership. There are brains and character, strength and health, happiness and spiritual vitality, interest and motivation, and no one knows what else, that must needs enter into this supra-mathematical calculus. "We think we probably would not, even if we were all-wise and all-knowing, write you a plan whereby you would be assured of scientific leadership at one stroke. We think as we think because we are not interested in setting up an elect. We think it much the best plan, in this constitutional Republic, that opportunity be held out to all kinds and conditions of men whereby they can better themselves. This is the American way; this is the way the United States has become what it is. We think it very important that circumstances be such that there be no ceilings, other than ability itself, to intellectual ambition. We think it very important that every boy and girl shall know that, if he shows that he has what it takes, the sky is the limit. Even if it be shown subsequently that he has not what it takes to go to the top, he will go further than he would otherwise go if there had been a ceiling beyond which he always knew he could not aspire. "By proceeding from point to point and taking stock on the way, by giving further opportunity to those who show themselves worthy of further opportunity, by giving the most opportunity to those who show themselves continually developing - this is the way we propose. This is the American way: a man works for what he gets." Remove the Barriers Higher education in this country is largely for those who have the means. If those who have the means coincided entirely with those persons who have the talent we should not be squandering a part of our higher education on those undeserving of it, nor neglecting great talent among those who fail to attend college for economic reasons. There are talented individuals in every segment of the population, but with few exceptions those without the means of buying higher education go without it. Here is a tremendous waste of the greatest resource of a nation - the intelligence of its citizens. If ability, and not the circumstance of family fortune, is made to determine who shall receive higher education in science, then we shall be assured of constantly improving quality at every level of scientific activity. The Generation in Uniform Must Not Be Lost We have a serious deficit in scientific personnel partly because the men who would have studied science in the colleges and universities have been serving in the Armed Forces. Many had begun their studies before they went to war. Others with capacity for scientific education went to war after finishing high school. The most immediate prospect of making up some of the deficit in scientific personnel is by salvaging scientific talent from the generation in uniform. For even if we should start now to train the current crop of high school graduates, it would be 1951 before they would complete graduate studies and be prepared for effective scientific research. This fact underlines the necessity of salvaging potential scientists in uniform.
The Armed Services should comb their records for men who, prior to or during the war, have given evidence of talent for science, and make prompt arrangements, consistent with current discharge plans, for ordering those who remain in uniform as soon as militarily possible to duty at institutions here and overseas where they can continue their scientific education. Moreover, they should see that those who study overseas have the benefit of the latest scientific developments. A Program The country may be proud of the fact that 95 percent of boys and girls of the fifth grade age are enrolled in school, but the drop in enrollment after the fifth grade is less satisfying. For every 1,000 students in the fifth grade, 600 are lost to education before the end of high school, and all but 72 have ceased formal education before completion of college. While we are concerned primarily with methods of selecting and educating high school graduates at the college and higher levels, we cannot be complacent about the loss of potential talent which is inherent in the present situation. Students drop out of school, college, and graduate school, or do not get that far, for a variety of reasons: they cannot afford to go on; schools and colleges providing courses equal to their capacity are not available locally; business and industry recruit many of the most promising before they have finished the training of which they are capable. These reasons apply with particular force to science: the road is long and expensive; it extends at least 6 years beyond high school; the percentage of science students who can obtain first-rate training in institutions near home is small. Improvement in the teaching of science is imperative; for students of latent scientific ability are particularly vulnerable to high school teaching which fails to awaken interest or to provide adequate instruction. To enlarge the group of specially qualified men and women it is necessary to increase the number who go to college. This involves improved high school instruction, provision for helping individual talented students to finish high school (primarily the responsibility of the local communities), and opportunities for more capable, promising high school students to go to college. Anything short of this means serious waste of higher education and neglect of human resources. To encourage and enable a larger number of young men and women of ability to take up science as a career, and in order gradually to reduce the deficit of trained scientific personnel, it is recommended that provision be made for a reasonable number of (a) undergraduate scholarships and graduate fellowships and (b) fellowships for advanced training and fundamental research. The details should be worked out with reference to the interests of the several States and of the universities and colleges; and care should be taken not to impair the freedom of the institutions and individuals concerned. The program proposed by the Moe Committee in Appendix 4 would provide 24,000 undergraduate scholarships and 900 graduate fellowships and would cost about $30,000,000 annually when in full operation. Each year under this program 6,000 undergraduate scholarships would be made available to high school graduates, and 300 graduate fellowships would be offered to college graduates. Approximately the scale of allowances provided for under the educational program for returning veterans has been used in estimating the cost of this program. 
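The Moe Committee's steady-state totals are consistent with the annual intake just described if one assumes four-year scholarship and three-year fellowship tenures; the tenures and the per-award figure below are inferences from the numbers given, not anything stated in the report:

\[ 6{,}000 \times 4 = 24{,}000 \text{ scholars in residence}, \qquad 300 \times 3 = 900 \text{ fellows in residence}, \]

\[ \frac{\$30{,}000{,}000}{24{,}000 + 900} \approx \$1{,}200 \text{ per award-year}. \]

A per-award allowance of roughly this size is consonant with the statement that the veterans' educational program was used as the scale for the cost estimate.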
The plan is, further, that all those who receive such scholarships or fellowships in science should be enrolled in a National Science Reserve and be liable to call into the service of the Government, in connection with scientific or technical work in time of war or other national emergency declared by Congress or proclaimed by the President. Thus, in addition to the general benefits to the nation by reason of the addition to its trained ranks of such a corps of scientific workers, there would be a definite benefit to the nation in having these scientific workers on call in national emergencies. The Government would be well advised to invest the money involved in this plan even if the benefits to the nation were thought of solely - which they are not - in terms of national preparedness. Chapter 5 A PROBLEM OF SCIENTIFIC RECONVERSION Effects of Mobilization of Science for War We have been living on our fat. For more than 5 years many of our scientists have been fighting the war in the laboratories, in the factories and shops, and at the front. We have been directing the energies of our scientists to the development of weapons and materials and methods, on a large number of relatively narrow projects initiated and controlled by the Office of Scientific Research and Development and other Government agencies. Like troops, the scientists have been mobilized, and thrown into action to serve their country in time of emergency. But they have been diverted to a greater extent than is generally appreciated from the search for answers to the fundamental problems - from the search on which human welfare and progress depend. This is not a complaint - it is a fact. The mobilization of science behind the lines is aiding the fighting men at the front to win the war and to shorten it; and it has resulted incidentally in the accumulation of a vast amount of experience and knowledge of the application of science to particular problems, much of which can be put to use when the war is over. Fortunately, this country had the scientists - and the time - to make this contribution and thus to advance the date of victory. Security Restrictions Should Be Lifted Promptly Much of the information and experience acquired during the war is confined to the agencies that gathered it. Except to the extent that military security dictates otherwise, such knowledge should be spread upon the record for the benefit of the general public. Thanks to the wise provision of the Secretary of War and the Secretary of the Navy, most of the results of war-time medical research have been published. Several hundred articles have appeared in the professional journals; many are in process of publication. The material still subject to security classification should be released as soon as possible. It is my view that most of the remainder of the classified scientific material should be released as soon as there is ground for belief that the enemy will not be able to turn it against us in this war. Most of the information needed by industry and in education can be released without disclosing its embodiments in actual military material and devices. Basically there is no reason to believe that scientists of other countries will not in time rediscover everything we now know which is held in secrecy.
A broad dissemination of scientific information upon which further advances can readily be made furnishes a sounder foundation for our national security than a policy of restriction which would impede our own progress although imposed in the hope that possible enemies would not catch up with us. During the war it has been necessary for selected groups of scientists to work on specialized problems, with relatively little information as to what other groups were doing and had done. Working against time, the Office of Scientific Research and Development has been obliged to enforce this practice during the war, although it was realized by all concerned that it was an emergency measure which prevented the continuous cross-fertilization so essential to fruitful scientific effort. Our ability to overcome possible future enemies depends upon scientific advances which will proceed more rapidly with diffusion of knowledge than under a policy of continued restriction of knowledge now in our possession. Need for Coordination In planning the release of scientific data and experience collected in connection with the war, we must not overlook the fact that research has gone forward under many auspices - the Army, the Navy, the Office of Scientific Research and Development, the National Advisory Committee for Aeronautics, other departments and agencies of the Government, educational institutions, and many industrial organizations. There have been numerous cases of independent discovery of the same truth in different places. To permit the release of information by one agency and to continue to restrict it elsewhere would be unfair in its effect and would tend to impair the morale and efficiency of scientists who have submerged individual interests in the controls and restrictions of war. A part of the information now classified which should be released is possessed jointly by our allies and ourselves. Plans for release of such information should be coordinated with our allies to minimize danger of international friction which would result from sporadic uncontrolled release. A Board to Control Release The agency responsible for recommending the release of information from military classification should be an Army, Navy, civilian body, well grounded in science and technology. It should be competent to advise the Secretary of War and the Secretary of the Navy. It should, moreover, have sufficient recognition to secure prompt and practical decisions. To satisfy these considerations I recommend the establishment of a Board, made up equally of scientists and military men, whose function would be to pass upon the declassification and to control the release for publication of scientific information which is now classified. Publication Should Be Encouraged The release of information from security regulations is but one phase of the problem. The other is to provide for preparation of the material and its publication in a form and at a price which will facilitate dissemination and use. In the case of the Office of Scientific Research and Development, arrangements have been made for the preparation of manuscripts, while the staffs under our control are still assembled and in possession of the records, as soon as the pressure for production of results for this war has begun to relax. We should get this scientific material to scientists everywhere with great promptness, and at as low a price as is consistent with suitable format. 
We should also get it to the men studying overseas so that they will know what has happened in their absence. It is recommended that measures which will encourage and facilitate the preparation and publication of reports be adopted forthwith by all agencies, governmental and private, possessing scientific information released from security control. Chapter 6 THE MEANS TO THE END New Responsibilities for Government One lesson is clear from the reports of the several committees attached as appendices. The Federal Government should accept new responsibilities for promoting the creation of new scientific knowledge and the development of scientific talent in our youth. The extent and nature of these new responsibilities are set forth in detail in the reports of the committees whose recommendations in this regard are fully endorsed. In discharging these responsibilities Federal funds should be made available. We have given much thought to the question of how plans for the use of Federal funds may be arranged so that such funds will not drive out of the picture funds from local governments, foundations, and private donors. We believe that our proposals will minimize that effect, but we do not think that it can be completely avoided. We submit, however, that the nation's need for more and better scientific research is such that the risk must be accepted. It is also clear that the effective discharge of these responsibilities will require the full attention of some over-all agency devoted to that purpose. There should be a focal point within the Government for a concerted program of assisting scientific research conducted outside of Government. Such an agency should furnish the funds needed to support basic research in the colleges and universities, should coordinate where possible research programs on matters of utmost importance to the national welfare, should formulate a national policy for the Government toward science, should sponsor the interchange of scientific information among scientists and laboratories both in this country and abroad, and should ensure that the incentives to research in industry and the universities are maintained. All of the committees advising on these matters agree on the necessity for such an agency. The Mechanism There are within Government departments many groups whose interests are primarily those of scientific research. Notable examples are found within the Departments of Agriculture, Commerce, Interior, and the Federal Security Agency. These groups are concerned with science as collateral and peripheral to the major problems of those Departments. These groups should remain where they are, and continue to perform their present functions, including the support of agricultural research by grants to the Land Grant Colleges and Experiment Stations, since their largest contribution lies in applying fundamental knowledge to the special problems of the Departments within which they are established. By the same token these groups cannot be made the repository of the new and large responsibilities in science which belong to the Government and which the Government should accept. The recommendations in this report which relate to research within the Government, to the release of scientific information, to clarification of the tax laws, and to the recovery and development of our scientific talent now in uniform can be implemented by action within the existing structure of the Government. 
But nowhere in the Governmental structure receiving its funds from Congress is there an agency adapted to supplementing the support of basic research in the universities, both in medicine and the natural sciences; adapted to supporting research on new weapons for both Services; or adapted to administering a program of science scholarships and fellowships. A new agency should be established, therefore, by the Congress for the purpose. Such an agency, moreover, should be an independent agency devoted to the support of scientific research and advanced scientific education alone. Industry learned many years ago that basic research cannot often be fruitfully conducted as an adjunct to or a subdivision of an operating agency or department. Operating agencies have immediate operating goals and are under constant pressure to produce in a tangible way, for that is the test of their value. None of these conditions is favorable to basic research. Research is the exploration of the unknown and is necessarily speculative. It is inhibited by conventional approaches, traditions, and standards. It cannot be satisfactorily conducted in an atmosphere where it is gauged and tested by operating or production standards. Basic scientific research should not, therefore, be placed under an operating agency whose paramount concern is anything other than research. Research will always suffer when put in competition with operations. The decision that there should be a new and independent agency was reached by each of the committees advising in these matters. I am convinced that these new functions should be centered in one agency. Science is fundamentally a unitary thing. The number of independent agencies should be kept to a minimum. Much medical progress, for example, will come from fundamental advances in chemistry. Separation of the sciences in tight compartments, as would occur if more than one agency were involved, would retard and not advance scientific knowledge as a whole. Five Fundamentals There are certain basic principles which must underlie the program of Government support for scientific research and education if such support is to be effective and if it is to avoid impairing the very things we seek to foster. These principles are as follows: (1) Whatever the extent of support may be, there must be stability of funds over a period of years so that long-range programs may be undertaken. (2) The agency to administer such funds should be composed of citizens selected only on the basis of their interest in and capacity to promote the work of the agency. They should be persons of broad interest in and understanding of the peculiarities of scientific research and education. (3) The agency should promote research through contracts or grants to organizations outside the Federal Government. It should not operate any laboratories of its own. (4) Support of basic research in the public and private colleges, universities, and research institutes must leave the internal control of policy, personnel, and the method and scope of the research to the institutions themselves. This is of the utmost importance. (5) While assuring complete independence and freedom for the nature, scope, and methodology of research carried on in the institutions receiving public funds, and while retaining discretion in the allocation of funds among such institutions, the Foundation proposed herein must be responsible to the President and the Congress.
Only through such responsibility can we maintain the proper relationship between science and other aspects of a democratic system. The usual controls of audits, reports, budgeting, and the like, should, of course, apply to the administrative and fiscal operations of the Foundation, subject, however, to such adjustments in procedure as are necessary to meet the special requirements of research. Basic research is a long-term process - it ceases to be basic if immediate results are expected on short-term support. Methods should therefore be found which will permit the agency to make commitments of funds from current appropriations for programs of five years duration or longer. Continuity and stability of the program and its support may be expected (a) from the growing realization by the Congress of the benefits to the public from scientific research, and (b) from the conviction which will grow among those who conduct research under the auspices of the agency that good quality work will be followed by continuing support. Military Research As stated earlier in this report, military preparedness requires a permanent, independent, civilian-controlled organization, having close liaison with the Army and Navy, but with funds direct from Congress and the clear power to initiate military research which will supplement and strengthen that carried on directly under the control of the Army and Navy. As a temporary measure the National Academy of Sciences has established the Research Board for National Security at the request of the Secretary of War and the Secretary of the Navy. This is highly desirable in order that there may be no interruption in the relations between scientists and military men after the emergency wartime Office of Scientific Research and Development goes out of existence. The Congress is now considering legislation to provide funds for this Board by direct appropriation. I believe that, as a permanent measure, it would be appropriate to add to the agency needed to perform the other functions recommended in this report the responsibilities for civilian-initiated and civilian-controlled military research. The function of such a civilian group would be primarily to conduct long-range scientific research on military problems - leaving to the Services research on the improvement of existing weapons. Some research on military problems should be conducted, in time of peace as well as in war, by civilians independently of the military establishment. It is the primary responsibility of the Army and Navy to train the men, make available the weapons, and employ the strategy that will bring victory in combat. The Armed Services cannot be expected to be experts in all of the complicated fields which make it possible for a great nation to fight successfully in total war. There are certain kinds of research - such as research on the improvement of existing weapons - which can best be done within the military establishment. However, the job of long-range research involving application of the newest scientific discoveries to military needs should be the responsibility of those civilian scientists in the universities and in industry who are best trained to discharge it thoroughly and successfully. It is essential that both kinds of research go forward and that there be the closest liaison between the two groups. Placing the civilian military research function in the proposed agency would bring it into close relationship with a broad program of basic research in both the natural sciences and medicine. 
A balance between military and other research could thus readily be maintained. The establishment of the new agency, including a civilian military research group, should not be delayed by the existence of the Research Board for National Security, which is a temporary measure. Nor should the creation of the new agency be delayed by uncertainties in regard to the postwar organization of our military departments themselves. Clearly, the new agency, including a civilian military research group within it, can remain sufficiently flexible to adapt its operations to whatever may be the final organization of the military departments. National Research Foundation It is my judgment that the national interest in scientific research and scientific education can best be promoted by the creation of a National Research Foundation. I. Purposes. - The National Research Foundation should develop and promote a national policy for scientific research and scientific education, should support basic research in nonprofit organizations, should develop scientific talent in American youth by means of scholarships and fellowships, and should by contract and otherwise support long-range research on military matters. II. Members. - 1. Responsibility to the people, through the President and Congress, should be placed in the hands of, say, nine Members, who should be persons not otherwise connected with the Government and not representative of any special interest, who should be known as National Research Foundation Members, selected by the President on the basis of their interest in and capacity to promote the purposes of the Foundation. 2. The terms of the Members should be, say, 4 years, and no Member should be eligible for immediate reappointment provided he has served a full 4-year term. It should be arranged that the Members first appointed serve terms of such length that at least two Members are appointed each succeeding year. 3. The Members should serve without compensation but should be entitled to their expenses incurred in the performance of their duties. 4. The Members should elect their own chairman annually. 5. The chief executive officer of the Foundation should be a director appointed by the Members. Subject to the direction and supervision of the Foundation Members (acting as a board), the director should discharge all the fiscal, legal, and administrative functions of the Foundation. The director should receive a salary that is fully adequate to attract an outstanding man to the post. 6. There should be an administrative office responsible to the director to handle in one place the fiscal, legal, personnel, and other similar administrative functions necessary to the accomplishment of the purposes of the Foundation. 7. With the exception of the director, the division members, and one executive officer appointed by the director to administer the affairs of each division, all employees of the Foundation should be appointed under Civil Service regulations. III. Organization. - 1. In order to accomplish the purposes of the Foundation the Members should establish several professional Divisions to be responsible to the Members. At the outset these Divisions should be: a. Division of Medical Research. - The function of this Division should be to support medical research. b. Division of Natural Sciences. - The function of this Division should be to support research in the physical and natural sciences. c. Division of National Defense.
- It should be the function of this Division to support long-range scientific research on military matters. d. Division of Scientific Personnel and Education. - It should be the function of this Division to support and to supervise the grant of scholarships and fellowships in science. e. Division of Publications and Scientific Collaboration. - This Division should be charged with encouraging the publication of scientific knowledge and promoting international exchange of scientific information. 2. Each Division of the Foundation should be made up of at least five members, appointed by the Members of the Foundation. In making such appointments the Members should request and consider recommendations from the National Academy of Sciences which should be asked to establish a new National Research Foundation nominating committee in order to bring together the recommendations of scientists in all organizations. The chairman of each Division should be appointed by the Members of the Foundation. 3. The division Members should be appointed for such terms as the Members of the Foundation may determine, and may be reappointed at the discretion of the Members. They should receive their expenses and compensation for their services at a per diem rate of, say, $50 while engaged on business of the Foundation, but no division member should receive more than, say, $10,000 compensation per year. 4. Membership of the Division of National Defense should include, in addition to, say, five civilian members, one representative designated by the Secretary of War, and one representative of the Secretary of the Navy, who should serve without additional compensation for this duty.

Proposed Organization of National Research Foundation

    National Research Foundation (Members)
        Director
            Staff offices: General Counsel; Finance Officer;
                Administrative Planning; Personnel
            Division of Medical Research (Members; executive officer)
            Division of Scientific Personnel and Education (Members; executive officer)
            Division of Natural Sciences (Members; executive officer)
            Division of National Defense (Members; executive officer)
            Division of Publications & Scientific Collaboration (Members; executive officer)

IV. Functions. - 1. The Members of the Foundation should have the following functions, powers, and duties: a. To formulate over-all policies of the Foundation. b. To establish and maintain such offices within the United States, its territories and possessions, as they may deem necessary. c. To meet and function at any place within the United States, its territories and possessions. d.
To obtain and utilize the services of other Government agencies to the extent that such agencies are prepared to render such services. e. To adopt, promulgate, amend, and rescind rules and regulations to carry out the provisions of the legislation and the policies and practices of the Foundation. f. To review and balance the financial requirements of the several Divisions and to propose to the President the annual estimate for the funds required by each Division. Appropriations should be earmarked for the purposes of specific Divisions, but the Foundation should be left discretion with respect to the expenditure of each Division's funds. g. To make contracts or grants for the conduct of research by negotiation without advertising for bids. And with the advice of the National Research Foundation Divisions concerned - h. To create such advisory and cooperating agencies and councils, state, regional, or national, as in their judgment will aid in effectuating the purposes of the legislation, and to pay the expenses thereof. i. To enter into contracts with or make grants to educational and nonprofit research institutions for support of scientific research. j. To initiate and finance in appropriate agencies, institutions, or organizations, research on problems related to the national defense. k. To initiate and finance in appropriate organizations research projects for which existing facilities are unavailable or inadequate. l. To establish scholarships and fellowships in the natural sciences including biology and medicine. m. To promote the dissemination of scientific and technical information and to further its international exchange. n. To support international cooperation in science by providing financial aid for international meetings, associations of scientific societies, and scientific research programs organized on an international basis. o. To devise and promote the use of methods of improving the transition between research and its practical application in industry. 2. The Divisions should be responsible to the Members of the Foundation for - a. Formulation of programs and policy within the scope of the particular Divisions. b. Recommendations regarding the allocation of research programs among research organizations. c. Recommendation of appropriate arrangements between the Foundation and the organizations selected to carry on the program. d. Recommendation of arrangements with State and local authorities in regard to cooperation in a program of science scholarships and fellowships. e. Periodic review of the quality of research being conducted under the auspices of the particular Division and revision of the program of support of research. f. Presentation of budgets of financial needs for the work of the Division. g. Maintaining liaison with other scientific research agencies, both governmental and private, concerned with the work of the Division. V. Patent Policy. - The success of the National Research Foundation in promoting scientific research in this country will depend to a very large degree upon the cooperation of organizations outside the Government. In making contracts with or grants to such organizations the Foundation should protect the public interest adequately and at the same time leave the cooperating organization with adequate freedom and incentive to conduct scientific research. 
The public interest will normally be adequately protected if the Government receives a royalty-free license for governmental purposes under any patents resulting from work financed by the Foundation. There should be no obligation on the research institution to patent discoveries made as a result of support from the Foundation. There should certainly not be any absolute requirement that all rights in such discoveries be assigned to the Government, but it should be left to the discretion of the director and the interested Division whether in special cases the public interest requires such an assignment. Legislation on this point should leave to the Members of the Foundation discretion as to its patent policy in order that patent arrangements may be adjusted as circumstances and the public interest require. VI. Special Authority. - In order to insure that men of great competence and experience may be designated as Members of the Foundation and as members of the several professional Divisions, the legislation creating the Foundation should contain specific authorization so that the Members of the Foundation and the Members of the Divisions may also engage in private and gainful employment, notwithstanding the provisions of any other laws: provided, however, that no compensation for such employment is received in any form from any profit-making institution which receives funds under contract, or otherwise, from the Division or Divisions of the Foundation with which the individual is concerned. In normal times, in view of the restrictive statutory prohibitions against dual interests on the part of Government officials, it would be virtually impossible to persuade persons having private employment of any kind to serve the Government in an official capacity. In order, however, to secure the part-time services of the most competent men as Members of the Foundation and the Divisions, these stringent prohibitions should be relaxed to the extent indicated. Since research is unlike the procurement of standardized items, which are susceptible to competitive bidding on fixed specifications, the legislation creating the National Research Foundation should free the Foundation from the obligation to place its contracts for research through advertising for bids. This is particularly so since the measure of a successful research contract lies not in the dollar cost but in the qualitative and quantitative contribution which is made to our knowledge. The extent of this contribution in turn depends on the creative spirit and talent which can be brought to bear within a research laboratory. The National Research Foundation must, therefore, be free to place its research contracts or grants not only with those institutions which have a demonstrated research capacity but also with other institutions whose latent talent or creative atmosphere affords promise of research success. As in the case of the research sponsored during the war by the Office of Scientific Research and Development, the research sponsored by the National Research Foundation should be conducted, in general, on an actual cost basis without profit to the institution receiving the research contract or grant. There is one other matter which requires special mention. Since research does not fall within the category of normal commercial or procurement operations which are easily covered by the usual contractual relations, it is essential that certain statutory and regulatory fiscal requirements be waived in the case of research contractors. 
For example, the National Research Foundation should be authorized by legislation to make, modify, or amend contracts of all kinds with or without legal consideration, and without performance bonds. Similarly, advance payments should be allowed in the discretion of the Director of the Foundation when required. Finally, the normal vouchering requirements of the General Accounting Office with respect to detailed itemization or substantiation of vouchers submitted under cost contracts should be relaxed for research contractors. Adherence to the usual procedures in the case of research contracts will impair the efficiency of research operations and will needlessly increase the cost of the work of the Government. Without the broad authority along these lines which was contained in the First War Powers Act and its implementing Executive Orders, together with the special relaxation of vouchering requirements granted by the General Accounting Office, the Office of Scientific Research and Development would have been gravely handicapped in carrying on research on military matters during this war. Colleges and universities in which research will be conducted principally under contract with the Foundation are, unlike commercial institutions, not equipped to handle the detailed vouchering procedures and auditing technicalities which are required of the usual Government contractors. VII. Budget. - Studies by the several committees provide a partial basis for making an estimate of the order of magnitude of the funds required to implement the proposed program. Clearly the program should grow in a healthy manner from modest beginnings. The following very rough estimates are given for the first year of operation after the Foundation is organized and operating, and for the fifth year of operation when it is expected that the operations would have reached a fairly stable level:

    Activity (millions of dollars)                        First year   Fifth year
    Division of Medical Research                                 5.0         20.0
    Division of Natural Sciences                                10.0         50.0
    Division of National Defense                                10.0         20.0
    Division of Scientific Personnel and Education               7.0         29.0
    Division of Publications & Scientific Collaboration          0.5          1.0
    Administration                                               1.0          2.5

Action by Congress

The National Research Foundation herein proposed meets the urgent need of the days ahead. The form of the organization suggested is the result of considerable deliberation. The form is important. The very successful pattern of organization of the National Advisory Committee for Aeronautics, which has promoted basic research on problems of flight during the past thirty years, has been carefully considered in proposing the method of appointment of Members of the Foundation and in defining their responsibilities. Moreover, whatever program is established it is vitally important that it satisfy the Five Fundamentals. The Foundation here proposed has been described only in outline. The excellent reports of the committees which studied these matters are attached as appendices. They will be of aid in furnishing detailed suggestions. Legislation is necessary. It should be drafted with great care. Early action is imperative, however, if this nation is to meet the challenge of science and fully utilize the potentialities of science.
On the wisdom with which we bring science to bear against the problems of the coming years depends in large measure our future as a nation. From shovland at mindspring.com Sat Dec 24 17:19:34 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sat, 24 Dec 2005 09:19:34 -0800 Subject: [Paleopsych] Reasons for hope Message-ID: Things are not great at the moment, but I think we have reasons to hope that the situation can get better: 1) Some of us, hopefully a growing number, are refusing to join the culture of fear and lies. 2) We are not running out of oil, although we do have to be careful about how fast we burn it. 3) There is a huge amount of information about staying healthy which is in existence but is not incorporated into the "health care" we normally get. 4) In general, if we do not over-reproduce, there is enough to go around, and if we are wise enough and decent enough it can go around. From shovland at mindspring.com Sat Dec 24 18:00:12 2005 From: shovland at mindspring.com (Steve Hovland) Date: Sat, 24 Dec 2005 10:00:12 -0800 Subject: [Paleopsych] Iraq incident Message-ID: The Marine And The Insurgent Agree On One Thing A U.S. Marine squad was marching north of Basra when they came upon an Iraqi terrorist, badly injured and unconscious. On the opposite side of the road was an American Marine in a similar but less serious state. The Marine was conscious and alert, and as first aid was given to both men, the squad leader asked the injured Marine what had happened. The Marine reported, "I was heavily armed and moving north along the highway here, and coming south was that heavily armed insurgent. We saw each other and both took cover in the ditches along the road. I yelled to him that Saddam Hussein is a miserable, lowlife scumbag, and he yelled back that Senator Ted Kennedy is a good-for-nothing, fat, leftwing liberal drunken murderer. So I yelled that Osama Bin Ladin dresses and acts like a frigid, mean-spirited woman, and he retaliated by yelling, 'Oh yeah? Well so does Hillary Clinton!' And there we were, standing in the middle of the road, shaking hands, when a truck hit us!" From checker at panix.com Sun Dec 25 01:06:13 2005 From: checker at panix.com (Premise Checker) Date: Sat, 24 Dec 2005 20:06:13 -0500 (EST) Subject: [Paleopsych] spiked-liberties: (Pamuk and Irving) Free speech in Europe Message-ID: Free speech in Europe http://www.spiked-online.com/Printable/0000000CAEDB.htm 5.12.20 [Any red-blooded American should get instantly suspicious of anything that is backed with severe sanctions, from loss of a job to jail. If suggesting that 50K+ or so years of divergent human evolution might have produced group differences in innate cognitive capacity, for example, leads to sanctions, one ought to wonder why the evidence that this hasn't happened hasn't been produced. This is most especially the case where those who maintain the Orthodox position have the wherewithal to do the research to demonstrate their truths. ["If you have the facts, argue the facts; if you don't have the facts, argue the law; if you don't have the law, pound on the table" is an old legal saying. But it's much worse to take away a skeptic's job or put him in jail. [It is all-too-common to put the careers of even scientists at risk for dissenting from the reigning orthodoxy. This is a tricky issue.
There are a few who dissent from the theory of relativity. They usually couch their arguments in philosophical terms, often arrived at off the cuff. What they must do is truly master the orthodoxy and do the hard, hard work of *building* an alternative and showing how their alternative fits the facts at least as well as the orthodoxy and overcomes the objections. [A dissenter should also argue that his is a progressive research program. That there are (paranormal?) phenomena that go beyond what we can account for has been demonstrated. But there has been no progress toward finding any regularities in over 150 years. On the other hand, there seems to be something to acupuncture, but we (Western scientists) don't understand how it might work. Nevertheless, acupuncturists afaik are doing the same thing they did since time immemorial ("the memory of man runneth not to the contrary" --Blackstone). There have been a huge number of claims for faith healing, in so many religions, that faith healings cannot be evidence for the truth of any one religion. To say these healings work through brain mechanisms just restates materialism; plausible mechanisms, or at least the outlines of such, need to be developed. Plate tectonics was slow to get accepted, not because the fit of Africa and South America had not suggested the possibility (not since time immemorial but since 1492) but because a sufficient understanding of how such plates might work took a while to be developed. Whether this is a case of a theory being accepted too slowly, I don't know, and I suspect other theories have been accepted too quickly. ["Those who left the Party before I did are traitors; those who left afterwards are fools." [I know little about the Armenian genocide, but I have read articles and books on Holocaust Revisionism and followed somewhat closely Irving v. Lipstadt. In that trial, the judge gave a narrow definition of what Holocaust denial consisted of. It was much narrower than the persecution of Jews by Germans. Rather, it was the thesis that there were no homicidal, operational gas chambers that were used to kill some millions of Jews as a matter of deliberate state policy. [There is an actual, substantive meta-issue here! The meta-issue is over the expected quantity and variety of evidence. All sides agree that there is not very much of it. There are a couple of dozen eye-witness reports, a few confessions, a few documents that point to a deliberate state plan, and some tell-tale signs in the concentration camps that rule out a more benign interpretation, esp. Keren, et alia, below. [The meta-issue is over why there is so little. The Orthodox viewpoint is that the Holocaust was a *conspiracy*, which of course it was, though oddly it is never designated by that name. Conspirators do not want their misdeeds to be known. They use code words ("Final Solution," of course, but others like "removal to the east"), they promulgate their orders verbally, they do not even speak of it in secret cables (at least none that have shown up in what is apparently a huge and random collection of them captured by the victors), they destroy the evidence at the sites themselves, and few eyewitnesses have spoken because nearly all of them were themselves gassed. The Holocaust was, in short, nearly the perfect crime, and it has taken the detective work of, at first, a very few indefatigable historians to uncover it. Overall, though, where there was this much smoke, although not the best smoke, there must have been fire.
[The Revisionists, on the other hand, make the implicit meta-claim that there should have been mountains of evidence. Germans are meticulous record keepers. Enterprises on the scale of the Holocaust require large bureaucracies and generate enormous paper trails. There should be a lot of forensic evidence at the sites of the camps. War plans, which have shown up in secret cables, would certainly have been more secret than the running of death camps. And so on. [The Orthodoxy counters that rumors must have at least some substance behind them and that it would take conspiratorial thinking to suppose all this was made up. [The debates go back and forth, to the extent anyone stops name-calling. I have two articles that I'll be glad to ship to anyone who asks for them, articles that are the best ones on both sides that I have encountered, for I always seek out the best articles in any debate: [For the Revisionists, there is a literary analysis by Samuel Crowell, "The Gas Chambers of Sherlock Holmes," which is available at http://www.codoh.com/newsite/GasChambers/SamuelCrowell/abooktoc00.html . (If this doesn't work, I can supply a text copy.) He does not go into the adequacy of eyewitness testimony (often contradictory), the interpretation of documents (which can run either way), the absence of a clear Hitler order (should there have been one?), or the demographics (also up for grabs). Rather he argues that the belief in murder by poison gas arose not by a conspiracy but by misunderstanding and rumors. I can trace no substantial engagement with this article, either on google or on alt.revisionism, as taken from http://groups.google.com. [For the Orthodoxy, there's "The Ruins of the Gas Chambers: A Forensic Investigation of Crematoriums at Auschwitz I and Auschwitz-Birkenau," by Daniel Keren, Jamie McCarthy and Harry W. Mazal, _Holocaust and Genocide Studies_ 2004 18(1):68-103; doi:10.1093/hgs/18.1.68 [Abstract: [Combining engineering, computer, and photographic techniques with historical sources, this research note discusses the gas chambers attached to crematoriums at Auschwitz I and the Auschwitz-Birkenau death camp. Among other things, the authors identify the locations of several of the holes in the roofs through which Zyklon B was introduced: five in Crematorium I and three of the four in the badly damaged Crematorium II. The authors began their project before David Irving's libel suit against Penguin Books and Deborah Lipstadt, proceeding simultaneously with, but independently of, the trial. The defense presented the first version of the authors' report during Irving's subsequent application to appeal. Irving's application was rejected by the court. [It was publicly available for a while, though why the Holocaust Museum, which sponsors the journal, yanked it is a mystery. I can supply a copy of this also.] ------------------- The trial of Orhan Pamuk for 'publicly denigrating Turkish identity' is a disgrace. So is Austria's imprisonment of David Irving for Holocaust denial. by Brendan O'Neill Two European writers have recently fallen foul of European governments for expressing their views about genocide. Both are threatened with trial and imprisonment for something they said or wrote. Yet one is supported by EU politicians and the international literati - who have rallied around to defend him from censorship and to champion the right of writers to speak freely - while the other has been ignored, or even told that he got what he deserved.
This is bad news, because when it comes to free speech it's all or nothing: we either have it or we don't. And if we were to have free speech for one writer but not for another, then we wouldn't have free speech at all. The writers are Orhan Pamuk and David Irving. Pamuk, a Turkish novelist, faces trial for questioning the official Turkish line on Armenia. The Turkish authorities argue that their killing of Armenians during the First World War cannot be classed as a genocide, and have taken umbrage at the following comment made by Pamuk in an interview earlier this year: 'One million Armenians and 30,000 Kurds were killed in these lands and nobody but me dares talk about it.' With those words, Pamuk apparently 'insulted Turkishness' and faces up to three years behind bars if convicted (1). His case has become an international cause célèbre. Irving, a British historian, is currently languishing in jail in Vienna, awaiting trial in February next year for denying the Holocaust. He was arrested in November while on a speaking tour of Austria for comments he made in the country over 15 years ago. Back then he allegedly made two speeches in which he denied that there were any gas chambers under the Nazis (he has apparently since revised his views and now accepts that there may have been a few such devices). Holocaust denial is a crime in Austria, and if found guilty Irving could be jailed for up to 10 years (2). His case has not become an international cause célèbre. The writers could not be more different. Pamuk is an internationally acclaimed novelist. His work has been translated into more than 20 languages and he is hotly tipped to win the Nobel Prize for Literature one of these years. His 'crime' was to question Ankara on the touchy subject of Armenia. Irving, by contrast, is a racist crank, an historian whom no one outside of small fascist sects takes seriously. He denies the facts of the Holocaust, once claiming that 'more women died on the back seat of Edward Kennedy's car at Chappaquiddick than ever died in a gas chamber in Auschwitz' (3). And as someone who uses England's illiberal and undemocratic libel laws to try to punish his critics - including Deborah Lipstadt, author of Denying the Holocaust, in a case he lost in 2000 - Irving is not in a good position to complain about being robbed of his right to free speech. Yet their cases are the same: both could be incarcerated, not for physically harming another person or for damaging property, but for the words they spoke; both could have their liberty removed because they expressed views that the authorities - in Turkey and Austria - decree to be distasteful. And both of their trials are an outrage against the principle of free speech. You may or may not agree with what Pamuk said, and you probably are disgusted by Irving's weasel words. But this isn't about what either author said; it is about whether they should have the right to say it, and we should have the right to hear it. Freedom of speech, as its name suggests, does not mean freedom for views that go down well in polite society but not for views that stink: it means freedom for all speech, the freedom to think, say and write what we please and the freedom of everyone else to challenge or ridicule our arguments. The fact that Pamuk's and Irving's trials have occurred around the same time has provided a tough test of Europeans' commitment to free speech.
The fact that many rushed to defend Pamuk while ignoring - or giving the nod to - the imprisonment of Irving means Europe failed that test. In both cases - in the trials themselves and the reactions they have provoked - the big issue is not so much freedom as EU etiquette; it is less about defending open debate than about defining what it is to be a good little EU state and how best to please the bigwigs in Brussels. So Turkey is put under pressure to call off Pamuk's trial to demonstrate that it is the modern European state it claims to be and is fit to join the EU, while Austria is congratulated for its tough stance on Holocaust denial which is taken as evidence that it has overcome its shadowy Nazi past as the birthplace of Hitler and is moving towards a new dawn. EU officials demand that Turkey let Pamuk speak if it wants to be taken seriously, while Austria is taken seriously by refusing to let certain people speak. This is about ensuring we have the right kind of speech, as defined by Brussels. So some of the same EU officials who tacitly support Austria's imprisonment of Irving, and who have clamped down on freedom in their own states, can still lecture Turkey about Pamuk. Take Denis MacShane, New Labour MP for Rotherham and former minister for Europe. He has taken himself off to Turkey to observe Pamuk's trial, and says 'Turkey is on trial', not Pamuk: 'As in past centuries, state authorities or religious fundamentalists have put a writer on trial to stop him or her asking awkward questions, but end up in the dock themselves', says MacShane. 'Turkey will not join Europe unless Voltaire wins, and the ayatollahs - secular and religious - lose.' (4) Who the hell is MacShane to lecture Turkey about free speech, to put the Turkish authorities 'on trial', to decree if and when the Turks can 'join Europe'? His own government has ridden roughshod over free speech, recently introducing a Racial and Religious Hatred Bill that will seriously curb our right to ridicule religious obscurantism; bringing in a law that will make an offence of 'glorifying' or 'condoning' acts of terrorism (or saying other things that might be perceived as 'attacking the values of the West', in the words of Lord Falconer); and banning various things deemed offensive, whether it's the newspaper of the British National Party or the music of Jamaican dancehall artists, one of whom was arrested upon arrival in Britain last year, interrogated by the Racial and Violent Crime Taskforce, and then deported (5). Would MacShane's time not be better spent in Vienna rather than Ankara, investigating Irving's case? Irving is at least a British citizen, which means MacShane has some authority to enquire after his wellbeing and legal standing; certainly more authority than he has to talk down to the Turks. The Irving case is presumably too messy for MacShane, who seems to prefer the popular and clear-cut campaign to defend Pamuk. Or perhaps MacShane supports the trial and imprisonment of Irving. That someone from a government as illiberal as New Labour can stick it to Turkey over Pamuk demonstrates that this has little to do with free speech. Various European politicians and EU bureaucrats who don't know the meaning of free speech are queuing up to berate Turkey. One says the Turks are behaving like a 'dictatorial regime, not a modern European state' (6). Meanwhile, as one news report put it, Austria's arrest of Irving - 'in a country still coming to grips with its Nazi-ruled past' - has won the state 'praise worldwide' (7). 
In the Pamuk and Irving cases the argument for free speech is trumped by demands that Turkey and Austria display their EU credentials for the world (and Denis MacShane) to pass judgement on - Turkey by allowing a novelist to raise awkward questions about Armenia, Austria by clamping down on anyone who questions the Holocaust. Austria is especially keen to punish Irving following events five years ago. In 2000 the people of Austria incurred the wrath of EU officials for daring to vote for a right-wing party led by Joerg Haider. Austria was effectively, though informally, suspended from the EU and is now keen to show that it is modern and liberal by making an example of the right-wing Irving. In the Pamuk and Irving cases, EU officials are really making a case for privileged speech, not free speech; they defend comments they agree with and authors they admire but are happy to see those they dislike banged up for expressing dodgy points of view. Pamuk's case should be thrown out of court and he should be free to say or write what he wants. But if that happens and Irving remains in jail in Vienna then there isn't free speech in Europe; if Pamuk is free to ask questions about Armenia but Irving is not free to say the Holocaust was exaggerated, then free speech does not exist. From checker at panix.com Sun Dec 25 01:18:25 2005 From: checker at panix.com (Premise Checker) Date: Sat, 24 Dec 2005 20:18:25 -0500 (EST) Subject: [Paleopsych] NYT: Buying Birds for a Song? Well, Not This Year Message-ID: Buying Birds for a Song? Well, Not This Year http://www.nytimes.com/2005/12/24/business/24index.html?pagewanted=print [This is our Christmas card to one and all. Don't overeat.] By ELIZABETH OLSON Modern-day woes like avian flu and energy prices are making it more trying and costly than ever to please one's true love with gifts from the classic holiday carol "12 Days of Christmas." The partridge to the 12 drummers - and everything in between - would set the giver back $18,348, said PNC Advisors, the wealth management firm that compiles each year the cost of the English song's tokens of affection. The price tag is up 6.1 percent over last year's total of $17,296, the firm said. That was a big jump compared with the 1.6 percent increase last year. The latest factor making it more difficult to gather the carol's gifts is the threat of avian flu, which has limited international shipments of birds and precluded purchasing the three French hens from France. However, Jeff Kleintop, PNC's chief investment strategist, who oversees the index each year, said that United States breeders are still providing domestically raised French hens for the same price as last year. Even so, some of the song's other fowl cost substantially more this year. Six geese-a-laying are up almost 43 percent, and the seven swans-a-swimming went up 20 percent, both because of delivery costs pushed up by higher fuel prices. (Swans have long been a tricky item to include because trumpeter swan breeding cycles vary widely. When the index was started in 1984, each swan cost $1,000, but the prices have varied since then, declining for several years until this year, when they rose to $600. PNC also calculates a core index that excludes their variable price.) Generally, larger items like the geese and swans - as well as the pear tree, which went up 15 percent this year - cost more because of the labor involved in crating and packing, along with the higher shipping costs, Mr. Kleintop said.
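[The index arithmetic checks out; a minimal Python sketch using only the figures quoted above, plus the repeated-giving count the article cites further on:

    # Year-over-year change in PNC's single-set Christmas Price Index.
    cost_2005 = 18348
    cost_2004 = 17296
    print(round((cost_2005 / cost_2004 - 1) * 100, 1))  # -> 6.1 (percent)

    # If each verse's gifts are given on every remaining day, gift k
    # (k items) is sent on days k through 12, i.e. (13 - k) times.
    print(sum(k * (13 - k) for k in range(1, 13)))  # -> 364 items

Both values match the figures PNC reports.]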
"Not only are avian flu fears and fuel costs driving prices higher," he added, "but gold prices are also on the rise." The five gold rings, the gift for the fifth day, cost $325 - up 27.5 percent over last year's $255. "The growing middle class worldwide is driving the demand for gold," Mr. Kleintop said, "and jewelry is a popular gift, so that boosts the price." In contrast, the cost of almost all of the verse's skilled workers - from maids-a-milking, lords-a-leaping, pipers piping to drummers drumming - remained static. The milk maids were a bargain since, once again this year, they each were calculated at $41.20 based on receiving the $5.15 minimum hourly wage. Only the ladies dancing were pricier, and that's because they received a pay raise. PNC bases its calculation of the cost of the ladies dancing on information from Philadanco, the Philadelphia Dance Company, which gave its dancers a 4 percent raise this year. PNC Advisors, a unit of the PNC Financial Services Group, the Pittsburgh-based banking company, calculated that each dancing lady cost $508.46 to provide. PNC began calculating the index 21 years ago as an amusing measure of how well the cost of the song's goods and services tracks the government's Consumer Price Index. In past years, the Christmas index matched the consumer index, which measures a broader range of goods and services, more closely. That is not the case this year. The consumer index rose 3.5 percent over the 11 months ended in November. In past years, the romantically inclined giver had been able to trim some costs by searching for Internet bargains. But this year, there are no savings to be had there because of higher delivery costs. The overall Internet cost in 2005 was $29,322, up 5.7 percent over last year because of higher shipping and handling costs, PNC said. That is still a fraction of the $72,608 cost to give the song's gifts repeatedly - a huge 364 items. The cost was up 9.5 percent over last year's $66,344 cost, Mr. Kleintop explained, because when the whopping increases in items like gold rings and geese are multiplied over the 12 days of Christmas giving, "that really adds to the bottom line." Asked whether anyone had ever contacted PNC to report assembling all the carol's gifts, Mr. Kleintop, chuckling, said: "No, but it would be a good entry for some of the luxury catalogs, wouldn't it?" From checker at panix.com Mon Dec 26 02:24:19 2005 From: checker at panix.com (Premise Checker) Date: Sun, 25 Dec 2005 21:24:19 -0500 (EST) Subject: [Paleopsych] BBS: Archaeology and cognitive evolution Message-ID: Archaeology and cognitive evolution http://www.bbsonline.org/documents/a/00/00/22/38/bbs00002238-00/bbs.wynn.htm [I'll be sending out several of the BBS target articles over the next few days. I found a treasure trove of them.] Published in Behavioral and Brain Sciences Volume 25, Number 3: 389-402 (June 2002) _________________________________________________________________ Below is the unedited, uncorrected, unquotable final draft preprint of a BBS target article that was accepted for publication. Please visit the Cambridge Journals Online [1]BBS Home Page to order the full published treatment. 
_________________________________________________________________ Long abstract - 197 Short abstract - 118 Text - 12,348 References - 1,732 Entire text - 14,434 Thomas Wynn Department of Anthropology University of Colorado Colorado Springs twynn at uccs.edu Abstract Archaeology can provide two bodies of information relevant to the understanding of the evolution of human cognition - the timing of developments, and the evolutionary context of these developments. The challenge is methodological. Archaeology must document attributes that have direct implications for underlying cognitive mechanisms. One example of such a cognitive archaeology is that for spatial cognition. The archaeological record documents an evolutionary sequence that begins with ape-equivalent spatial abilities 2.5 million years ago and ends with the appearance of modern abilities in the still remote past of 400,000 years ago. The timing of these developments reveals two major episodes in the evolution in spatial ability, one 1.5 million years ago and the other one million years later. The two episodes of development in spatial cognition had very different evolutionary contexts. The first was associated with the shift to an open country adaptive niche that occurred early in the time range of Homo erectus. The second was associated with no clear adaptive shift, though it does appear to have coincided with the invasion of more hostile environments and the appearance of systematic hunting of large mammals. Neither, however, occurred in a context of modern hunting and gathering. Short Abstract Archaeology can provide two bodies of information relevant to the understanding of the evolution of human cognition - the timing of developments, and the evolutionary context of these developments. To do this, archaeology must document attributes that have direct implications for underlying cognitive mechanisms. One example of such a cognitive archaeology is that for spatial cognition. The archaeological record documents an evolutionary sequence that begins with ape-equivalent spatial abilities 2.5 million years ago and ends with the appearance of modern abilities in the still remote past of 400,000 years ago. The timing of these developments reveals two major episodes in the evolution in spatial ability, one 1.5 million years ago and the other one million years later. Archaeology and Cognitive Evolution Thomas Wynn Department of Anthropology University of Colorado in Colorado Springs twynn at uccs.edu Keywords: Archaeology, symmetry, spatial cognition, evolution, Homo erectus 1. Introduction I have two goals in this article. The first is to make a case for the relevance of archaeological contributions to studies of the evolution of cognition. The second is to provide an example of one such contribution, a reconstruction of aspects of early hominid spatial cognition based on an analysis of artifactual symmetries. Assuming that human evolution is relevant to understanding the human condition, an intellectual position that is at the core of biological approaches to behavior, if not yet psychology, then archaeology can supply two important bodies of evidence: 1) actual timing of developments, and 2) the evolutionary context of these developments. The challenge is not epistemological; archaeology can and does supply these things to the study of human evolution in general. The challenge is methodological. How can archaeology inform us about the evolution of mind? 
Archaeology is a set of methods for reconstructing past action from traces that exist in the present. These traces include objects made or modified by people in the past - tools, houses, ornaments, and so on - but also less tidy patterns like garbage and refuse of all kinds and evidence of past landscapes (through analysis of soils, pollen, faunal remains, and so forth). Because some traces survive the ravages of time better than others, the archaeological record is a biased and non-random sample of past action. Stone tools, for example, survive well but wooden tools do not. Also, some environments preserve traces better than others. Tropical environments are poor preservers, but cold, dry, arctic environments are good preservers. Archaeology is an observational discipline. Unlike laboratory scientists, archaeologists cannot duplicate events, and unlike ethologists archaeologists cannot depend on obtaining corroborating observations, though we certainly hope for them. There is a real element of serendipity in archaeology. Pompeii, the "Ice man", and the recently discovered 400,000-year-old spears at Schoeningen (Thieme 1997) are unique and wonderfully informative, but they are atypical. The archaeological record boasts relatively few such treasures. Instead it consists largely of more incomplete and mundane traces that allow archaeologists to reconstruct some of what occurred in the past. The primary methodological task of the archaeologist is this reconstruction -- translating traces into actions -- and archaeology has developed a large body of concepts and techniques for doing this. We are very good at reconstructing diet from garbage, and social/political systems from the size, character, and location of settlements.

Can there be an archaeology of cognition? This is in reality a two-part question. First, can traces of action inform us reliably about any aspect of cognition, and second, if so, can archaeologists overcome some rather serious methodological roadblocks inherent in the archaeological record of such traces? One way that psychologists learn about the mind is by observing the actions of individuals in controlled laboratory settings or in natural situations. Sometimes these actions leave tangible traces that become the focus of the analysis. Children's drawings are one example; block shuffling tests are another. The methodological task of the psychologist is to translate the tangible results into meaningful characterizations of the mind that produced them. Of course, psychologists can also talk to their subjects, but in principle psychology can and does analyze the traces of action. An archaeologist trying to do the same faces some additional hurdles. To make a convincing argument in cognitive archaeology, one must be able to identify specific features of the archaeological record that can inform about cognition in a valid and reliable way. This is the crux of the matter. Unfortunately, the disciplines of archaeology and psychology have never shared much in the way of theory and methodology. For an archaeologist to make a compelling case, he or she must not simply refer to a few selected psychological results. There must also be some understanding of the theoretical and methodological context of the research. With this in hand, the archaeologist can define a set of attributes that can be applied to the archaeological record. This definitional step is indispensable.
It is very unlikely that variables taken directly from the psychological literature could be applied to archaeological remains. On the other hand, the traditional categories of archaeology are inappropriate, a point that bears emphasis. Over the last century and a half archaeology has developed a large set of categories for the description of archaeological remains. Some of these categories are based on presumed function (e.g., ground stone axe, or temple complex), some on presumed usefulness in temporal ordering (e.g., Iron Age), some on social complexity (e.g., "Classic" Mesoamerica), and so on. None, to my knowledge, has ever been defined with cognition in mind, and it would be misleading to use them as such (e.g., to argue that Iron Age people were cognitively different from Stone Age people). The cognitive archaeologist must avoid using these traditional categories and approach the archaeological record from the perspective of psychological theories and methods.

Even after careful definition, archaeology faces a number of roadblocks peculiar to its data. The first is preservation. Not only does preservation produce a biased record, it also presents a sliding scale of resolution. The farther back in time we look, the worse the record. There is less preserved, and there are fewer examples of what is preserved. This alone gives the archaeological record a misleadingly progressive appearance; 10,000-year-old remains appear more complex than 500,000-year-old remains partly (but not entirely) because we have so many more of them. The second roadblock is logical. How can we be sure that archaeological remains are a reliable reflection of the cognitive abilities of past people? Might not these people have invested their cleverest thinking in domains that are archaeologically invisible? There is no infallible way around this problem. Archaeologists can only assess the minimum competence necessary to produce a particular pattern. Our only comfort comes from increasing the number and, especially, the variety of corroborating cases. Finally, cognitive archaeology works best on an evolutionary scale of resolution.

The ultimate achievement of cognitive archaeology would be to provide descriptions of the cognitive life-world of human antecedents at many points in evolution. Such descriptions would provide an evolutionary foundation for understanding the modern mind. I have long harbored the desire to provide a comprehensive account of the mind of Homo erectus, a very successful ancestor who was the immediate precursor of Homo sapiens. Surely, an understanding of Homo erectus' cognition would illuminate aspects of the modern mind; there must be much of Homo erectus with us still. Unfortunately, I do not think such a comprehensive description is possible; the archaeological record is too incomplete. Archaeology can take another approach to the question of evolution, an approach not focused on descriptions of individual antecedents, but one focused on long-term patterns of change. Even though poor in detail, the archaeological record is very long, providing a quasi-continuous record of products of action that spans over two million years. Archaeologists can use this record to identify patterns of cognitive evolution that provide insights into questions of modern cognitive science. What follows is an example of this approach. The focus is on spatial cognition (generally considered, including shape recognition and image manipulation). The evidence will consist of artifactual symmetry.
2. The Archaeological Record of Artifactual Symmetry

I have chosen to survey the evolution of artifactual symmetry for three reasons. First, symmetry is a pattern and a concept that is recognized by everyone, which reduces the requirement for definition (but does not eliminate it entirely). Second, symmetry has been incorporated into many schemes of spatial cognitive development, and also into theories of perception, so that it provides a direct way to articulate the archaeological record with the cognitive science literature. Finally, it is amenable to visual presentation.

There are several different patterns to which we apply the term symmetry. The most familiar is reflectional symmetry, also known as bilateral or mirror symmetry. Here one half of a pattern is duplicated and reversed on the opposite side. Implicit in the pattern is a central line, usually vertical, that divides the pattern into halves that are reflected versions of one another. Bilateral symmetry is "natural" in the sense that we see this pattern in the natural world of plants and animals. A second symmetry is radial symmetry, in which a pattern repeats not across a line, but continuously around a point. Similar to radial symmetry is rotational symmetry, in which a pattern is not reflected across a line, but rotated around a point; here the pattern is not reversed or inverted. Finally, there is translational symmetry, in which a pattern is repeated in a sequence, without reversal or rotation.

Symmetry is ubiquitous in the natural world and the cultural world. It is a well-known feature of crystal growth, resulting from the chemical structures of the molecules themselves. It also acts as a principle in biological growth and development, as in the symmetrical duplications of supernumerary appendages in beetles (Bateson 1972), where the source probably lies in the genes regulating growth. On a larger scale, symmetry is a feature of the overall body plans of many organisms, from microscopic foraminifera to large vertebrates. In human material culture, symmetry appears in the form of artifacts, buildings, and built environments all over the world. It is a central component of decorative systems in almost all human cultures, and also a component of games (e.g., string games) and mathematical puzzles (e.g., tessellations). In many of these cases the symmetry results from the application of transformational rules; simple figures are repeated and "moved" to produce intricate patterns. Symmetry is so fundamental in western culture, at least, that it is often a metaphor for balance and regularity (e.g., the "symmetrical" arrangement of keys in The Marriage of Figaro). Moreover, it is often endowed with meaning, carrying explicit and implicit information about fundamental values of a culture (Washburn and Crowe 1988).

Not surprisingly, perception of symmetry has been the focus of psychological research for over a century (Wagemans 1996). It is now generally accepted, for example, that reflectional symmetry is perceptually more salient than translation and rotation. Indeed, some experimental work suggests that reflectional symmetry can be detected pre-attentively. Reflectional symmetry across a vertical axis is more salient than reflection across a horizontal axis, with oblique orientations falling a distant third. In addition to such empirical generalizations, there are competing theories of symmetry perception, and it remains a component of general theories of perception (Tyler 1996).
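[A formal aside added in forwarding, not part of Wynn's text: the four patterns can be stated precisely as transformations of the plane. A figure $F$ has a given symmetry when it is unchanged by the corresponding transformation $T$, that is, when $T(F) = F$:

    Reflection across the vertical axis:  $T(x, y) = (-x,\, y)$
    Rotation by an angle $\theta$ about a point (taken here as the origin):
        $T(x, y) = (x\cos\theta - y\sin\theta,\; x\sin\theta + y\cos\theta)$
    Translation by a vector $(a, b)$:  $T(x, y) = (x + a,\, y + b)$

Rotational symmetry is invariance under the rotation for some fixed $\theta$; radial symmetry, in its continuous form, is invariance for every $\theta$; translational symmetry is invariance under repeated application of a fixed translation. Only reflection reverses orientation, which is why a reflected pattern looks "flipped" while a rotated or translated one does not.]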
Symmetry perception has even come to be a focus in evolutionary psychology, where detection of asymmetry is seen to be a means of mate assessment (Gangestad 1997). Given the ubiquity of reflectional symmetry in the natural world, and its correlation with successful ontogenetic development in many, many organisms, it is not at all surprising that perceptual systems should have evolved a sensitivity to symmetry. It is quite likely, then, that the perceptual saliency of symmetry is not a derived feature of human perception, but is one we share with many complex organisms. The degree to which it is shared, and whether it has evolved independently in several taxa or is instead a very old feature, are interesting questions, but tangential to the current discussion. The archaeological record does not document the development of symmetry perception per se. Instead, it documents the imposition of symmetry on material objects. Detecting symmetry is not sufficient for this task; other cognitive mechanisms must come into play. The importance of the archaeological record of symmetry lies not in the symmetry itself, but in what it reveals about these other mechanisms.

2.1 Stone Tools

Most of the following analysis will focus on stone tools. They are far and away the most abundant material evidence archaeologists possess for the majority of human evolution. The record of stone tools begins 2.5 million years ago and extends to the present. From the tools themselves archaeologists can reconstruct a variety of actions: raw material selectivity and procurement, manufacturing sequences, use, and discard. Archaeologists have been most interested in reconstructing the specific uses of stone tools, and the role these tools played in subsistence and, sometimes, social life. But these reconstructed actions, those of manufacturing in particular, can also document particular cognitive abilities. Fracturing a stone produces sharp edges; this is the basic principle underlying almost all stone tools. Archaeologists use the term "knapping" to refer to the stone fracturing process. In the simplest case a stone knapper uses one stone, termed a hammer, to strike another. If the knapper has struck with enough force, and delivered the blow to an appropriate spot at the appropriate angle, the receiving stone, termed a core, will break. In most instances the knapper must direct the blow toward the edge of the core because a blow landing toward the center is unlikely to deliver enough force to produce a fracture. This simple act of knapping produces two potentially useful products, a smaller sharp-edged piece termed a flake, and the larger core, which now also may have sharp edges. Even this simplest of knapping actions requires directed blows. Randomly bashing two rocks together can produce useful flakes, but even the earliest stone tools, 2.5 million years old, resulted from directed blows. The subsequent development of knapping technology included increases in the number of blows delivered to a single core, greater specificity in the location of blows, modification of flakes, longer sequences of action between the initial blows and the final blows, a greater variety of hammering techniques, and more regularly shaped final products. [FIG 1]

Recently Stout et al. (Stout, Toth et al. 2000) have conducted a pilot PET study of basic stone knapping using an experienced knapper (Nick Toth) as the subject. The results showed highly significant activation in several brain regions.
Much of this activation is what one would expect from performance of a complex motor task based on hand-eye coordination (primary motor and somatosensory areas, and cerebellar activity [p. 1218]), but Stout et al. also recognize a significant "cognitive" component, implied by the activation of the superior parietal lobes. "The superior parietal lobe consists of what is referred to as 'multi-modal association cortex' and is involved in the internal construction of a cohesive model of external space from diverse visual, tactile, and proprioceptive input" (p. 1220). In other words, simple stone knapping is a "complex sensorimotor task with a spatial-cognitive component" (p. 1221). These results, though preliminary, situate most of the significant cognitive activation within the "dorsal pathway" of visual processing (Ungerleider 1995; Kosslyn 1999). The ventral pathway associated with object identification and shape recognition is minimally activated, implying that shape is not a significant component of the basic flaking task. These results are preliminary and need confirmation. There was only one subject, a skilled knapper, and the knapping task was brief and basic -- removing flakes from a nodule (a Mode 1 procedure; see below). Nevertheless, the study reinforces the work of cognitive archaeologists who have focused on spatial concepts borrowed from developmental psychology (Wynn 1979; Wynn 1981; Wynn 1985; Wynn 1989; Robson Brown 1993).

The directed action of stone knapping preserves something of the cognition of the knapper. Even in the simplest example, the knapper must make a decision about where to place a blow and how much force to use. These decisions are preserved in the products themselves. It is now common for archaeologists to "refit" cores by placing the flakes back together in a kind of 3-D jigsaw puzzle. Such a reconstruction permits archaeologists to describe in detail long sequences of action, including the specific location of blows, reorientation of the core by the knapper, and subsequent modification of flakes (Schlanger 1996). But even simple tools can be informative. The pattern of "negative scars" on cores or modified flakes preserves the sequences of blows. Archaeologists interested in cognition can use these preserved action sequences to investigate a variety of cognitive abilities, including sequencing, biomechanical skill, and spatial cognition. Even the simplest knapping required some notions of spatial relations, and as stone tools became more complex they often preserved more complex understandings of spatial relationships.

There is a problem with intentionality. All stone tools have a shape, and this shape preserves spatial relationships, but how intentional was that shape? Here I do not mean the layers of intentionality invoked in the theory of mind literature, but the simple question of whether or not a stone knapper intended to produce a particular shape. The basic action of stone knapping will produce useful results without the knapper intending the final core and flakes to have any specific appearance whatsoever. It is even possible for the iterative application of a specific flaking procedure to produce a final core with a regular shape, completely unintended. The shape itself, and the location and extent of the modification producing it, can often, but not always, document intention. For example, the artifact in FIG 5 has extensive trimming on one side that produces a "shoulder" mirroring a natural configuration on the opposite side. This is unlikely to have been an accident.
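[Another forwarding aside, not part of Wynn's text: arguments over whether an artifact's symmetry is "real" can be made less impressionistic by scoring it numerically. The Python sketch below is a minimal illustration of one such flip-and-compare measure, under simplifying assumptions of my own - in particular, the axis of symmetry is taken to be the vertical line through the outline's centroid rather than searched for. It mirrors a digitized plan outline across that axis and reports how closely the mirrored outline matches the original.]

import numpy as np

def symmetry_score(outline):
    """Score the bilateral (reflectional) symmetry of a closed 2-D outline.

    outline: an (N, 2) array of x, y points sampled around the shape.
    Returns a value in [0, 1]; 1.0 means the outline is a perfect mirror
    image of itself across the vertical axis through its centroid.
    """
    pts = np.asarray(outline, dtype=float)
    pts = pts - pts.mean(axis=0)             # center on the centroid
    mirrored = pts * np.array([-1.0, 1.0])   # reflect across the vertical axis
    # Distance from each original point to the nearest mirrored point.
    d = np.sqrt(((pts[:, None, :] - mirrored[None, :, :]) ** 2).sum(axis=-1))
    mismatch = d.min(axis=1).mean()
    # Normalize by the mean radius so the score is independent of size.
    scale = np.sqrt((pts ** 2).sum(axis=1)).mean()
    return max(0.0, 1.0 - mismatch / scale)

# A mirror-symmetric, vaguely handaxe-like teardrop outline scores ~1.0;
# warping one side of it lowers the score.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
teardrop = np.column_stack([np.sin(t) * (1.0 - 0.4 * np.cos(t)), np.cos(t)])
warped = teardrop.copy()
warped[:, 0] += 0.3 * np.cos(t) ** 2     # asymmetric bulge breaks the mirror
print(symmetry_score(teardrop))          # ~1.0
print(symmetry_score(warped))            # noticeably lower

[Published analyses in this spirit are more careful - they search over candidate axes, pair matched landmarks rather than nearest points, and consider profile as well as plan - but the point stands that "how symmetrical" is a measurable property of the traces themselves.]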
2.2 What about apes?

It is appropriate and traditional to begin discussions of human evolution with a discussion of modern apes. Much of our anatomy and behavior is shared with apes, including characteristics of the brain and cognition. A necessary first step in any evolutionary analysis is the identification of what is peculiarly human, for this allows correct focus of the undertaking. If modern apes, especially chimpanzees, employed all of the spatial abilities used by humans, then our evolutionary understanding must focus on the evolution of apes in general. It is also an axiom of paleoanthropology that human anatomy and behavior evolved out of those of an African ape, so that a description of this ancestor is a logical starting point for any summary. Our best information concerning this common ancestor comes from the living African apes (who, of course, have also evolved, but because their anatomy and habits appear more like those of a "general ape" than the anatomy and habits of the obviously unusual humans, they are a better candidate than ourselves).

Whatever the cognitive requirements of stone knapping are, they are within the abilities of apes, at least at the basic level of using a hammer to remove a flake. Nick Toth and Sue Savage-Rumbaugh have taught a bonobo to flake stone, and the results of their research help identify what might have been different about the cognition of the earliest stone knappers (Toth, Schick et al. 1993; Schick, Toth et al. 1999). Kanzi, a bonobo also known for his ability to understand spoken English and use signs, learned how to remove flakes from cores by observing a human knapper; he also learned to use the sharp flakes to cut through a cord that held shut a reward box. After observing the procedure, Kanzi perfected his technique by trial and error. His initial solution, interestingly, was not to copy the demonstrator's action, but to hurl the core onto a hard surface and then select the sharp pieces from the shattered remnants. He clearly understood the notion of breakage and its consequences. When experimenters padded the room, he then copied the hammering technique used by the knapper. From this experiment (and an earlier one by Wright (Wright 1972)), it is clear that fracturing stone is within the cognitive abilities of apes. However, Kanzi is not as adept as human knappers. "(A)s yet he does not seem to have mastered the concept of searching for acute angles on cores from which to detach flakes efficiently, or intentionally using flake scars on one face of a core as striking platforms for removing flakes from another face" (Toth et al. 1993: 89). These abilities are basic to modern knapping and, more tellingly, are evident in the tools made two million years ago. Toth et al. suggest that this represents a significant cognitive development, though they do not specify just what cognitive ability may have evolved. Elsewhere (Wynn, Tierson et al. 1996) I have suggested that it may represent an evolutionary development in "spatial visualization," which is the ability to discriminate patterns in complex backgrounds. If true, this would represent a minor cognitive development, of interest primarily because it is a cognitive ability tied to tool manufacture and use. Kanzi is also not very accurate in delivering blows, and this is harder to assess. It could simply be a matter of biomechanical constraint (i.e., he does not have the necessary motor control), or it could result from an inability to organize action on the small spatial field of the core.
It is the organization of such action, fossilized as patterns of flake scars, that developed significantly over the two million years following the first appearance of stone tools.

While apes can knap stone, they do not produce symmetries (at least not yet). The only possible example of symmetry produced by apes in the wild is the chimpanzee sleeping nest, which has a kind of radial symmetry that is produced when the individual reaches out from a central position and pulls branches inward. Here the symmetry is a direct consequence of a motor habit pattern, and one need not posit some idea of symmetry (Wynn and McGrew 1989). There are no other ethological examples, at least to my knowledge. However, there has been a significant amount of research with captive apes, especially chimpanzees, including a fascinating literature about chimpanzee art and drawing, from which one can examine the ways apes arrange elements in space. Work with ape art has been of two kinds. In the first, researchers present an ape with appropriate media (finger paints, brushes and paint, etc.) and encourage it to create. In the second, researchers control the productions by supplying paper with pre-drawn patterns. The former is the more "archaeological", in that researchers have not tried to coax particular pattern productions. Perhaps not surprisingly, these spontaneous productions are patterned primarily by motor habits. Fan shapes are common, as are zig-zags produced by back-and-forth arm motion. [FIG. 2]

Desmond Morris (Morris 1962), the best-known researcher in ape art, thought that these productions might demonstrate a sense of balance, and tried to coax it out with a series of experiments using sheets with stimulus figures already printed on them (Figure 1B), following the earlier lead of Schiller (Schiller 1951). Morris's work led to a number of subsequent experiments by others using similar techniques. The results have been enigmatic at best. Most chimpanzees presented with a figure that is offset from the center of the paper will mark on the opposite side, or on the figure itself (Fig. 2B). Morris suggested, cautiously, that this confirmed a notion of balance. Later Smith (Smith 1973) and Boysen (Boysen, Berntson et al. 1987) confirmed these results, but argued that the pattern resulted from the chimpanzee's placing marks toward the center of the vacant space; balance was an accident. It is hard to know what to make of this evidence. First, even with the few experimental subjects, there was a lot of individual variability. Indeed, each chimpanzee had an idiosyncratic approach to both the controlled and uncontrolled drawing. Second, most repetitive patterns resulted from repetitive motor actions. Nevertheless, the individuals did appear to place their marks non-randomly, and did attend to features of the visual field. Other, non-graphic, experiments have indicated that chimpanzees can be taught to select the central element of a linear array (Rohles and Devine 1967), so chimpanzees can clearly perceive patterns in which balance is a component. But they do not appear able to produce symmetrical patterns.

2.3 Tools of Early Hominids

2.3.1 Description

The earliest hominids left no archaeological record. Studies of blood chemistry and DNA indicate that humans and chimpanzees shared a common ancestor as recently as five million years ago. By four million years ago the evolutionary split between hominids and the other African apes had occurred.
There is fossil evidence for these early hominids, but it is fragmentary and more tantalizing than informative in regard to adaptive niches (Tattersall 2000). Between 4 million and 2.5 million years ago several different hominids lived in Africa. They differed from one another in habitat and adaptive niche, but shared the basic suite of hominid characteristics: bipedal locomotion, and relatively small canines and large molars. None had a particularly large brain (though slightly larger, relatively, than that of chimpanzees), and none left any archaeological traces. If any or all of these hominids made and used tools, as modern chimpanzees clearly do, then the tools have not been preserved. We can assume that tool use must have been in the repertoire of at least one of these hominids, only because it seems unlikely that stone tool manufacture could have developed without antecedents.

To date, the oldest reliably dated stone tools are 2.5 million years old (Harris 1983). These earliest stone tools exhibit no convincingly symmetrical patterns. Archaeologists assign these tools to a category termed "Oldowan," because of their first discovery at Olduvai Gorge in Tanzania. A better label was proposed several decades ago by Grahame Clark (Clark 1977), who termed them a Mode 1 technology, a term based on technological characteristics, with no time-space implications. Mode 1 tools first appeared about 2.5 million years ago in what is today Ethiopia, and were the only kind of stone technology in evidence for the next one million years. After 1.5 million years ago, Mode 1 technologies continued to be produced in many areas and, indeed, were made into historic times. As such Mode 1 represents a common, "generic" stone tool industry. It was also the earliest. [FIG 3]

The emphasis of Mode 1 tools is on the edges (Toth 1985). Simple stone flakes can have very sharp edges, and are useful cutting tools without further modification. The cores from which the flakes were removed also have useful edges. These are not as sharp as the flakes, but the cores are usually heavier, and the result is a tool that can be used for chopping, crushing, and heavy cutting. Mode 1 tools exhibit little or no attention to the overall shape of the artifact. The only possible examples of a shaped tool occur in relatively late Oldowan assemblages, where there are a few flakes with trimmed projections (termed awls). Here a two-dimensional pattern of sorts has been imposed on the artifact, but it is a very "local" configuration, one that is tied to the nature of the edge itself.

2.3.2 Cognitive implications

The work of Stout et al. discussed earlier supports an emphasis on the spatial cognition required by the basic kind of stone knapping typical of these Mode 1 artifacts. Cognitive psychology supplies some more specific variables that are also applicable to the analysis of these early tools. Forty years ago Piaget and Inhelder (Piaget and Inhelder 1967) introduced basic topological notions in their analysis of children's spatial ability, and these still have descriptive power. In particular, the relations of proximity, order, and boundary are all required for the placing of trimming on Mode 1 tools. More recently, Linn and Petersen (Linn and Petersen 1986) have identified "spatial perception", the ability to detect features among complex backgrounds, as one of the four components of spatial cognition. This ability appears to be required when a knapper selects a platform with an appropriate angle for striking.
What does not appear to be necessary for these tools is any kind of shape recognition or imagery. Basic flaking procedure and simple spatial relations are sufficient. The knappers imposed no overall shapes. In this respect, at least, these early hominids were very ape-like. Indeed, when we expand our perspective to other features of tool making and use, we find that the adaptation was ape-like in most respects (Wynn and McGrew 1989). Yes, the use of stone tools to butcher parts of animal carcasses obtained through scavenging was a novel component of the adaptation (Toth and Schick 1986; Potts 1988; Schick and Toth 1993), but at this point in hominid evolution it appears to have been merely a variant on the basic ape adaptive pattern, with no obvious leap in intellectual ability required. Indeed, there is no compelling archaeological reason to grant tool making any special place in the selective forces directing the first three million years of human cognitive evolution. But sometime after two million years ago the situation changed.

2.4 The First Hominid-Imposed Symmetry

2.4.1 Description

About 1.4 million years ago hominids in East Africa, presumably Homo erectus, began making large stone tools with an overall two-dimensional shape. Many (but not all) of these "bifaces" were made by first detaching a very large flake from a boulder-sized core using a two-handed hammering technique (Jones 1981). The knapper then modified this large flake by trimming around the margins (usually onto both faces of the flake, hence the term biface). The uses to which these tools were put are unknown, though experimental evidence indicates that they can be effective butchery tools (Toth and Schick 1986). Archaeologists recognize two types of biface, the handaxe and the cleaver. Handaxes have a point or tip, and cleavers have a transverse "bit" that consists of an untrimmed portion of edge oriented perpendicular to the long axis of the tool. Both varieties of biface can have reflectional symmetry, and it is primarily this symmetry that produces the overall shape. However, not all bifaces of this age are nicely symmetrical, and even the nicest examples look crude compared to the symmetry of later tools. Are we justified in attributing some kind of symmetry concept to the knapper? [FIG 4] Might not the symmetry lie only with the archaeologist, who "reads" what was in no way intended by the knapper? This is a knotty problem that has become the center of an interesting, if parochial, controversy among cognitive archaeologists (Noble and Davidson 1996). Most archaeologists, myself included, argue that the symmetry is real. First, the most symmetrical examples are also the most extensively trimmed, indicating that the knapper devoted more time to production. Second, and more tellingly, on some bifaces the trimming mirrors a natural shape on the opposite edge. Such artifacts do not have the best symmetry, but the economy of means by which the symmetry was achieved reveals that some concept of mirroring must have guided the knapper. [FIG 5]

In addition to handaxes and cleavers, a third variety of biface occurs in low numbers in some sites in this time period. These are "discoids", so called because of their round shapes. Like the other bifacial tools, the nicest, in this case the roundest, are also the most extensively modified. Here again we can recognize symmetry, in this case radial rather than reflectional.
[FIG 6]

2.4.2 Cognitive implications

In most respects the cognitive requirements of these early bifaces resemble those of the earlier (and presumably antecedent) Mode 1 artifacts. But the symmetry presents a puzzle for cognitive interpretation. There are at least three possibilities:

1) The symmetry (and regular radii) are purely a consequence of a technique of manufacture using large flakes as blanks. The placement of trimming on some pieces argues against this, but the absence of congruency means that the symmetry is always crude and, for many archaeologists, unconvincing (Noble and Davidson 1996). Any cognitive significance would have to lie in the techniques of blank production. Although the two-handed hammering technique (Jones 1981) clearly qualifies as an invention, its cognitive prerequisites seem no different from those of other direct percussion techniques used in Mode 1 technologies.

2) The symmetry was intended, but was not "new." Rather it is a pattern that is salient in the shape recognition repertoire of apes in general. What was new was the imposition of this shape on modified objects, something other apes never do.

3) Symmetry was a new acquisition in the shape recognition repertoire of these hominids and was applied to stone tools.

Conservatism inclines me towards the second hypothesis. But even if symmetry in pattern recognition is old, there was still a cognitively significant development associated with these bifaces. The stone knappers produced a symmetry by mirroring or reflecting the shapes from one lateral edge to the other. True, the edges are not exact mirrors. They are rarely if ever congruent in a modern geometric sense, but they are inversions of a two-dimensional shape. It is not even necessary that a particular overall shape have existed as an image prior to manufacture. The knapper could simply have mirrored one of the edges naturally provided to him or her. In such a case the knapper would need to invert a shape. More significantly, the knapper had to ignore part of the shape of the original large flake in order to impose a symmetrical edge. This is a kind of frame independence, the ability to see past the constraints imposed by a spatial array (Linn and Petersen 1986). The discoids suggest that the knappers were also able to employ a notion of spatial amount, in this case a diameter. The knappers trimmed the tool until all of the diameters were roughly equal. While not an abstract quantity like an inch, a diameter is nevertheless a spatial amount, albeit local and limited. But what is most significant is that these biface knappers incorporated a shape component into the knapping problem. This shape component need not have been an abstract concept. It could simply have been shape "recognition," matching to a unimodal representation (Kosslyn 1999), in this case reflectional symmetry. Such a unimodal representation is almost certainly in the shape recognition repertoire of apes in general. What is significant here is its manifestation in the otherwise spatial task of knapping. This new development required coordination of spatial abilities with a previously separate cognitive component (or neural network in Kosslyn's sense (Kosslyn 1994)), that of shape recognition. The imposition of shape is a feature of virtually all human material culture. But the first time it ever occurred was with these early bifaces. Prior to the appearance of bifaces, stone knappers attended to the configuration of edges and to size.
The earlier Mode 1 tools were arguably an ad hoc technology (Wynn 1981; Isaac 1984; Toth 1985) made for immediate use. It is unlikely that they existed as "tools" in the minds of the knappers. But tools, in the guise of bifaces, almost certainly did exist as a category in the mind of Homo erectus (Wynn 1993; Wynn, Tierson et al. 1996).

2.5 Late Bifaces: Congruent and Three-Dimensional Symmetries

2.5.1 Description

Three developments in hominid-imposed symmetry appear in the archaeological record sometime after 500,000 years ago. These are: 1) congruency; 2) three-dimensional symmetries; and 3) broken symmetry. While the reflectional symmetry of early bifaces was rough and imprecise, the symmetry of late examples clearly suggests attention to congruency. The mirrored sides are not just qualitative reversals, but quantitative duplicates, at least to the degree that this is possible given the constraints of stone knapping. Many, but certainly not all, late handaxes and cleavers present such congruent symmetries, and this is one of the features that makes them so attractive to us. Such symmetry was not limited to a single shape. Late bifaces demonstrate a considerable amount of variability in overall plan shape. Some are long and narrow, others short and broad. Some have distinct shoulders, while others are almost round. Although there is some evidence that this variability was regional (Wynn and Tierson 1990), much of it is related to raw material, and much appears to have been idiosyncratic. But in almost every assemblage of this time period there will be a few bifaces with fine congruent symmetry, whatever the overall shape.

The second development in symmetry was the appearance of reflectional symmetry in three dimensions. Many of these bifaces have reflectional symmetry in profile as well as in plan. In the finest examples this symmetry extends to all of the cross-sections of the artifacts, including cross-sections oblique to the major axes, as we would define them. [FIG 7] Once again, this feature is not universal, and many, many bifaces do not have it, but it is present on at least a few artifacts from most assemblages.

The third development in symmetry was the appearance of broken symmetry. Here a symmetrical pattern appears to have been intentionally altered into a non-symmetrical but nevertheless regular shape. Several cleavers from the Tanzanian site of Isimila appear "bent," as if the whole plan symmetry, including the midline, had been warped into a series of curved, parallel lines. These are invariably extensively modified artifacts, whose cross-sections are symmetrical, and the pattern is almost certainly the result of intention. [Figure 8] A better-known example is the twisted profile, or "S-twist", handaxe. These artifacts give the appearance of having been twisted around the central axis. The result is an S-shape to the lateral edges, as seen in profile. [Figure 9] Again, these are extensively modified artifacts and we must conclude, I think, that the pattern is the result of intention.

It is not possible to date these developments in symmetry precisely. Archaeological systematics place all of the examples in the late Acheulean (sometimes on morphological grounds alone, which leads to a circular argument). All were probably made after 500,000 years ago, perhaps even after 400,000 years ago. The Isimila artifacts, for example, date to between 170,000 and 330,000 years ago (Howell, Kleindienst et al. 1972).
The twisted profile handaxes are probably no earlier than 350,000 years old, and most may be much later. Although 300,000 years is a long time in a historical framework, it represents only the final 12% of technological evolution. Several caveats complicate interpretation of these three developments. One is the problem of individual skill; some prehistoric stone knappers must have been more adept than others and better able to achieve congruent, three-dimensional symmetries in the intractable medium of stone. We have no way of knowing how common highly skilled knappers were. A second caveat is raw material. Some stone is much easier to work than others. I do not think it is entirely a coincidence that twisted profile handaxes are invariably made of flint or obsidian, two of the most prized knapping materials. On the other hand, raw material is not as tyrannical as one might think. The "bent" cleavers from Isimila are made of granite.

2.5.2 Cognitive implications

The imposition of three-dimensional, congruent symmetry probably depended on cognitive abilities not possessed by the first biface makers. The cognitive psychological literature suggests some possibilities. The first requirement would appear to be the ability to coordinate perspectives. While flaking the artifact, the knapper has only one point of view. This is adequate to control edge shape, and perhaps even two-dimensional symmetry, but to produce an artifact with three-dimensional symmetry one must somehow "hold in mind" viewpoints that are not available at that moment, and for the finest symmetries viewpoints that are not directly available at all (oblique cross-sections). The knappers must have understood the consequences of their actions for the shape of the artifact as it appeared from these other perspectives. Such manipulations are akin to the "allocentric perception" recognized by psychologists (Silverman, Choi et al. 2000), and used in image manipulation tasks such as mental rotation. It is likely that these hominids were able to manipulate mental images of objects. Again, archaeological bias forces a conservative analysis; however, no one has proposed a convincing simpler alternative to this one. Application of a simple flaking procedure, without any image manipulation, could not have produced the kinds of three-dimensional symmetries evident on these artifacts.

The second requirement, congruency, is clearly spatial in the narrow sense of perceiving and imaging spatial quantity. As we have seen, basic knapping is largely a spatial problem, and was from the beginning. What is new here is the application of metric spatial relations to a problem of shape. Simple unimodal shape "recognition" would not have been enough. The sophistication of this symmetrical pattern suggests that shape "identification" is required. "When we recognize something, we know only that we have perceived it before, that it is familiar [re. early handaxes above]; when we identify something, we know it has a certain name, belongs to specific categories, is found in certain locales, and so forth" (Kosslyn 1999: 1284). These handaxes were almost certainly categories, and categories are abstract, multi-modal, and rely on associative memory. As such they reside in declarative memory, which "...requires associative links between several types of information that are stored in different areas" (Ungerleider 1995: 773).
These hominids could manipulate perspectives and spatial quantity, produce congruent symmetries, and even distort these principles to achieve striking visual effects. It is fair, I think, to attribute an intuitive Euclidean concept of space to these stone knappers. A Euclidean sense of space is one of positions in three dimensions. While the human life-world is certainly organized this way, and we and other primates clearly perceive dimensional space, it is quite another thing to employ cognitive mechanisms that understand space in this way, and which can be used to organize action. Such a mechanism, or mechanisms, underpins our most sophisticated everyday navigational and mapping skills.

2.6 After 400,000

The examples I have used thus far have all been knapped stone artifacts. While symmetry clearly can be and was imposed on many knapped stone artifacts, the medium is not ideal for the imposition of form. It is not plastic, and shaping can only be done by subtraction. Indeed, after the appearance of the symmetrical patterns just discussed, no subsequent developments in symmetry can be recognized in knapped stone. There were developments in technique, and perhaps skill, but the symmetries imposed on even very recent stone tools are no more elaborate than those imposed on 300,000-year-old handaxes. As a consequence we must turn to other materials. Artifacts made of other materials -- bone, antler, skin, wood, fiber, etc. -- were undoubtedly part of the technical repertoire of many early hominids (though see (Mithen 1996) for a counter-suggestion). Because such materials are far more perishable than stone, the archaeological record contains few of them until relatively late in prehistory. There are a few examples that almost certainly pre-date 100,000 years ago, but all are controversial, either as to age, or as to significance. One is a pebble from the Hungarian site of Tata, on which someone engraved a line perpendicular to a natural crack (Bednarik 1995). While one might be tempted to argue from it that the maker had some notion of rotation, or radial symmetry, this is too heavy an interpretive weight to be borne by a single, isolated artifact. More to the point, even if true, this would tell us little more about symmetry than is supplied by contemporary bifaces. However, it would be symmetry in a new context, a fact which, if confirmed, would have possible implications for cognitive evolution.

It is not until very close to the present, indeed after 100,000 years ago, that the archaeological record presents extensive evidence of artifacts made of perishable materials. Some archaeologists see this timing as entirely a reflection of preservation; others see it as evidence of new behaviors and abilities. The earliest such evidence is African and dates from between 50,000 and 90,000 years ago (Yellen, Brooks et al. 1995; Klein 2000). These are worked bone points from a site in the eastern Congo. While these artifacts are quite important to several current arguments about prehistory, they reveal nothing new in regard to hominid-imposed symmetry. The European Upper Palaeolithic provides the best-documented examples of hominid-imposed symmetries for the time period between 40,000 and 10,000 years ago. Here we find extensive evidence of symmetry in materials other than stone. [Fig. 10] Perhaps most widely known are the cave paintings of Franco-Cantabrian art, especially in compositions that are about 15,000 years old.
Here we can see possible symmetries as patterns of elements in a composition, not just inherent in a single object. They appear to have resulted from the application of a compositional rule. As such, they do not inform us specifically about spatial or shape cognition and are outside the scope of this discussion.

3. Discussion

I suggested at the beginning of this article that archaeology can make two important contributions to the study of the evolution of human cognition: the timing of certain developments and a description of the evolutionary context in which these developments occurred. The sequence of development of hominid-imposed symmetries just summarized allows us to do both of these things.

3.1 Timing

The development of artifactual symmetry was not slow and continuous. Instead, the archaeological evidence suggests that there were two episodes of development, separated by as much as one million years. During the first, hominids developed the ability to impose shape on artifacts, an ability undocumented in any living ape. In doing this, early Homo erectus employed cognitive abilities in frame independence, mirroring, making simple judgments of spatial quantity, and coordination of shape recognition (symmetry) with the spatial requirements of basic stone knapping. There may have been others, but these are the ones evident in the archaeological record. This development occurred early in the evolution of the genus Homo, certainly by 1.4 million years ago. The second episode of development occurred much later and consisted of the acquisition of a modern Euclidean set of spatial understandings. Specific abilities evident from the symmetrical handaxes include congruency, three-dimensional shapes, and coordination of perspectives. The date of this development appears to correlate with the evolutionary transition from Homo erectus to archaic Homo sapiens.

This timing of developments has implications for human cognitive evolution. First, the initial hominid adaptation (4.5-1.5 mya) apparently included a basic ape-like understanding of space and shape. A generalized ape repertoire of spatial concepts was adequate for this earliest of hominid adaptive niches, including the first manufacture and use of stone tools. A distinctive set of spatial/shape abilities did not appear until relatively late in hominid evolution, after the appearance of Homo erectus. Second, because these two later episodes of cognitive development were discontinuous, and indeed rather far from one another in time, it is unlikely that they occurred in response to the same selective factors. Whatever selected for the spatial/shape abilities of early Homo erectus probably did not select for the Euclidean abilities that emerged one million years later. But perhaps the most important implication of the development of artifactual symmetry - for the understanding of human shape and space cognition in general, and not just for its developmental sequence - is that even the more recent developments occurred in the very remote past. In terms of shape and spatial thinking we have not just Stone Age minds, we have Lower Palaeolithic minds.

3.2 Evolutionary Context

Evolutionary context is the second body of information archaeology can provide the study of the evolution of cognition. While it is well and good to describe a sequence of development, it would also be good to answer the questions of how, and perhaps why. In evolutionary science this amounts to answering the question of selection. What selected for these abilities?
Evolutionary psychologists (Barkow, Cosmides et al. 1992; Bock and Cardew 1997) answer this question by looking at evidence for adaptive design, on the assumption that past selection is preserved in the modern architecture of the cognitive mechanisms. Paleoanthropologists, and archaeologists in particular, are suspicious of such reliance, and prefer to invoke the actual context of development to help identify possible selective agents. While our knowledge of the conditions of the evolutionary past is fragmentary and lacking in detail, it is still an account of actual prevailing conditions, not a reconstruction based on presumed selective pressures. Hominid fossils and the archaeological record constitute the primary evidence for the context of cognitive evolution, supported by a large body of methods used for dating and for reconstructing the physical environment. Hominid fossils provide some direct evidence of cognitive evolution in the guise of brain size and shape. At least at our present level of understanding this does not lead to persuasive arguments about specific abilities, but it can identify times of brain evolution in general, which can support arguments derived from other evidence. Hominid fossils can also inform us about other evolutionary developments in anatomy, which human paleontologists have used successfully to document changes in diet, nutrition, locomotion, heat and cold adaptation, levels of physical stress, and other aspects of adaptive niche that are directly relevant to the context of cognitive evolution. Archaeological evidence, because it is far more abundant than fossils, informs about geographic distribution, habitat use, specific dietary components, geographic range (via raw material transport), and cultural solutions to problems (fire, weapons, boats, etc.), in addition to the evidence for hominid cognitive abilities. Together the fossil and archaeological evidence provide a reliable, if incomplete, picture of the past, including the two time periods in which the major developments in artifactual symmetry occurred.

3.2.1 Early biface makers

3.2.1.1 Context

We know much less about the first episode of development than the second. The period 1.4 million years ago - the time of the first biface industries, with their evidence for the imposition of symmetry and concomitant modest developments in spatial thinking - was also the time of Homo erectus. Indeed, the first Homo erectus (a.k.a. Homo ergaster (Tattersall 2000)) had appeared in Africa (and perhaps elsewhere) several hundred thousand years earlier, so we cannot make a simple equation between Homo erectus and biface technology. Luckily, one of the most spectacular fossil finds in all of human evolution is an African Homo erectus from this time period. The Nariokotome Homo erectus is an almost complete skeleton of a youth who died about 1.55 million years ago (Walker and Leakey 1993). The completeness of the skeleton allows a more detailed discussion of life history and physiological adaptation than is possible with fragmentary remains. The youth was male, about 11 years old, stood about 160cm (63") at time of death (estimates of adult stature for this individual are 185cm [73"]), had a tall, thin build, and showed evidence of strenuous physical activity. His brain size was about 880cc, and the endocasts demonstrate the same left parietal and right frontal petalia typical of humans but not of apes.
His thoracic spinal diameter was smaller than that of modern humans of similar size, and he had a very small pelvic opening, even for a thin male. This anatomy suggests several important things about his physiology. He had the ideal body type for heat dissipation. Added to the evidence for strenuous activity, this suggests that exertion in hot conditions was common. Earlier hominids had been largely woodland creatures who focused most of their activity close to standing water. Nariokotome had the anatomy to exploit an open tropical grassland adaptive niche. While the brain size of Nariokotome was larger than that of earlier hominids, so was his body size; there was only a small increase in relative brain size (compared to, say, Homo habilis). Despite the modern overall shape of the brain, the thoracic spinal diameter (and by extension the number of nerve bodies innervating the diaphragm muscles) suggests that rapid articulate speech was not in Nariokotome's repertoire. In sum, Nariokotome suggests that the Homo erectus niche was significantly different from that of earlier hominids, including earlier Homo. It is not clear from the cranial capacity that a significant increase in braininess accompanied this adaptive shift. There is no good reason to think Homo erectus had speech, at least in a modern sense (Wynn 1998). However, the niche shift itself was very significant, and is corroborated by the archaeological evidence.

Archaeological sites from this time period are less informative about hominid activity than many earlier sites. This seeming paradox results from the typical sedimentary context of the sites. Most early biface sites have been found in stream deposits, rather than the lake shore deposits typical of earlier sites. These "high energy" environments move objects differentially, including bones and tools. In effect they destroy the natural associations on which archaeologists rely. Running water also modifies bone, and to a lesser extent stone tools. The unfortunate result is that archaeologists have few direct remains of activity other than the stone tools themselves. However, there are enough sites dating to this time period to allow archaeologists to assess geographic distribution and environmental context, both of which suggest that a significant change in niche had occurred. Homo erectus left stone tools in stream beds because he had moved away from permanent standing water. Archaeologists presume that the channels of ephemeral streams, or the banks of permanent streams, became one of the preferred activity locales. Given the absence of associated materials, we cannot determine just what these activities were; only the selection of locale is apparent. However, this fits nicely with the "body cooling" anatomy of Nariokotome. On open savannas, stream channels often support the only stands of trees (in addition to water). Archaeologists have also found African biface sites at higher altitudes than earlier tool sites (Cachel and Harris 1995), and, finally, there are early biface sites outside of tropical Africa. The best known is Ubeidiya in Israel, which in most respects resembles early biface sites in Africa (Bar Yosef 1987). This archaeological evidence presents a picture of Homo erectus as an expansionistic species that invaded new and varied environments. Cachel and Harris (Cachel and Harris 1995) suggest it was a "weed species" - never numerous individually but able to invade new habitats very rapidly.
Given the clear reliance on tools, Homo erectus' niche was at least partly technological. Control of fire may also have been a component (James 1989). The South African site of Swartkrans has yielded convincing evidence, from this time period, of the control of fire (Brain and Sillen 1988). While control of fire may have little cognitive significance (McGrew 1989), its importance to the adaptive niche may have been profound in terms both of warmth and of predator protection. We have little direct evidence for diet. From experimental studies we know that bifaces could be effective butchery tools, but there are no obvious projectiles and no evidence for greater reliance on meat. There is, in fact, no compelling evidence for hunting.

3.2.1.2 Selection

It is not clear from this picture just what might have selected for the developments in cognitive ability evident in artifactual symmetry. At the outset we can consider the possibility that natural selection acted directly on the hominid ability to recognize and conceive of symmetry, which is, after all, the pattern that is so salient in the archaeological record. What might the perceptual saliency of symmetry have been good for? There is considerable evidence that body symmetry is, in fact, related to reproductive success for males (Gangestad 1997). According to Gangestad, observable phenotypic asymmetry (away from the reflectional symmetry coded genetically) correlates with developmental stress, so that asymmetry marks lower health. If a potential mate could detect this, he or she could avoid a reproductively costly (in an evolutionary sense) mating. But presumably this is true generally for vertebrates, and not just for humans. Perhaps symmetry gained added importance as a clue to general health when hominids lost thick body hair. Condition of coat is also a good indicator of general health, and its absence may have led to selection for a heightened ability to detect variations away from symmetry. But the real question here is why Homo erectus would have imposed symmetry on artifacts. Could artifacts have come to play a role in mate selection? Could symmetry have become so salient a pattern for mate assessment that it intruded into other shape recognition domains? Here the saliency of symmetry has been transferred out of the domain of the phenotypic to that of cultural signaling, but the selective advantage is the same. In this scenario both the ability to detect symmetry and the ability to produce it would have had reproductive consequences. Unfortunately, it is difficult to see how such a hypothesis could be tested. It is provocative only because of the known role of symmetry in mate selection.

Given the change in niche associated with early Homo erectus, with the accompanying increase in range and the pioneer aspects of the adaptation, it is tempting to posit selection for spatial cognition via navigational ability. Judging spatial quantity would be a useful skill, for example. However, while matching diameters on tools and judging distances to water are both matters of spatial quantity, it is not clear that they use identical cognitive mechanisms. Indeed, neurological research suggests that relationships in "near" and "far" space are handled somewhat differently by the brain (Marshall and Fink 2001), though there does appear to be some correlation (Silverman, Choi et al. 2000). The temporal association of territory expansion with developments in shape and spatial cognition is provocative, but hardly conclusive.
Given the emphasis in some literature on sexual division of labor (Eals and Silverman 1994; Silverman, Choi et al. 2000), it is also important to reiterate that paleoanthropologists know nothing about division of labor in this time period. Even though the contextual evidence provides no leading candidate for selective agent, it does describe an adaptive milieu of relevance. Early Homo erectus was not much like a modern hunter-gatherer. There is no evidence for human-like foraging systems or social groups (the probable absence of speech would itself preclude the latter). There is not even any convincing evidence of hunting with projectiles, a favorite of several authors (Calvin 1993). Nothing in the contextual evidence warrants direct analogy to the adaptive problems of modern human foragers. The challenge that Homo erectus presents to paleoanthropologists, and other students of human evolution, is that there are no living analogs. There is no more reason to invoke a human model than a chimpanzee model, or neither.

3.2.2 Late biface makers

3.2.2.1 Context

Paleoanthropologists' knowledge of evolutionary context is much better for the time period associated with the appearance of three-dimensional symmetries, congruency, and multiple perspectives. These abilities were clearly in place by 300,000 years ago, and probably by 500,000. Our knowledge is better partly because this time period was much closer to the present (though still remote), but also because Homo had expanded into Europe. It is true that some temperate environments have good preservation, but the primary archaeological effect of this expansion is that Homo moved into an area where a great deal of modern archaeology has been done. Africa may be the home of mankind, but Europe is the home of archaeology. Based largely on European sites it is possible to draw an outline sketch of the behavioral/cultural context of daily life.

The peopling of Europe is itself a fascinating topic. Some argue that Europe was occupied relatively late, after 500,000 years ago (Roebroeks, Conard et al. 1992). Earlier sites are certainly scarce and often controversial. However, there are sites in Italy (Isernia (Cremaschi and Peretto 1988), Ceprano (Ascenzi, Mallegni et al. 2000)) and Spain (Atapuerca (Bermudez de Castro, Arsuaga et al. 1997)) that provide strong evidence for the presence of Homo erectus (now attributed by some to Homo antecessor) prior to 500,000. In many respects these resemble earlier Homo erectus sites - poor geological context and little to go on. There is little doubt that after 500,000 the record is better and includes the peopling of northern Europe. There are informative sites in England, Germany, France, and the Netherlands. Some archaeologists have suggested that this represents an adaptive breakthrough, an evolutionary development that opened up harsher environments (Roebroeks, Conard et al. 1992). These first colonists in northern Europe did make bifaces, and their bifaces required all of the cognitive abilities identified earlier in this paper. The association is provocative, and suggests that some evolutionary development in cognition may have been partially responsible for this breakthrough. Of course, similar artifacts appeared in Africa, so the breakthrough cannot have been specific to Europe. But what specifically might have selected for cognitive abilities evident in the fine three-dimensional symmetries of bifaces? The archaeological record does provide some clues.
The move into northern Europe (and China) may seem a minor extension of the much more dramatic expansion of Homo erectus 1,000,000 years earlier. However, it may well have required some significant changes in adaptation. Northern European climate during the Pleistocene was in more or less constant flux, with cold glacial periods alternating with warmer, forested, interglacial periods. During periods of maximum cold northern Europe was uninhabitable, and indeed even the anatomically modern humans of 18,000 years ago were forced to the south. But after 500,000 we have evidence of biface sites in northern Europe during some of the warmer episodes embedded in glacial periods. The environment during these periods was more open and less heavily wooded than today, but also cooler than today. The adaptive problems posed by such an environment are fairly well known. Compared to warmer environments, even in southern Europe, there would have been fewer edible plant species, and a concomitant requirement for increased reliance on animals. Then there were the obvious problems of keeping warm, including the likely necessity of controlling and probably even making fire. In effect, these northern temperate environments "pushed the envelope" of Homo's adaptation. But here our European bias risks misleading us. We see it clearly because we have the sites. We know, however, that comparable technological developments occurred in Africa, the Near East, and China, and it is unlikely that Europe was in any way central (indeed, a cultural backwater is more likely), so what we may be seeing is abilities that evolved elsewhere applied to a European problem.

The fossil remains from this time period present a confusing picture. In some areas of the world, Asia in particular, Homo erectus was clearly still present. In Africa the prevailing fossil type appears to have been a larger-brained form that still retained many Homo erectus-like features of the face. Europe is the biggest puzzle. There are few clear Homo erectus fossils (the Mauer mandible is a probable example (Rightmire 1992), as is the Ceprano calvaria (Ascenzi, Mallegni et al. 2000)), but most fossils attributed to the taxon have at least some sapient-like features. Rightmire (1998) now favors a return to the use of Homo heidelbergensis for these forms. The key issue for the present argument is that this was a time of evolutionary change in anatomy, as well as technology and cognition. We cannot yet describe the complex evolutionary patterns, but we do know that the long period of evolutionary stasis that was Homo erectus was coming to an end.

By this time it is almost certain that these Homo hunted large mammals. Many sites have included the association of stone tools with animal bone (Hoxne (Singer, Gladfelter et al. 1993), Boxgrove (Wenban-Smith 1989; Roberts, Stringer et al. 1994; Roberts and Parfitt 1999)), and in some cases like Hoxne the range of body parts present suggests that at least some of these animals had been hunted, not just scavenged (Binford 1985). Until recently the only evidence of hunting technique was an enigmatic sharpened stick from Clacton-on-Sea that could have been a spear tip (Oakley, Andrews et al. 1977). In 1996, the German site of Schoeningen dramatically confirmed this interpretation. Here Thieme (1997) and his crew excavated three complete spears carved out of spruce, each over two meters in length. The spears were in direct association with the bones of horses.
The center of gravity of each of these spears was situated slightly toward the tip, much as it is in modern javelins, and Thieme argues that Homo must have designed these spears for throwing. If true, this suggests a relationship between design and use that has technological and perhaps even cognitive implications. But what bears more on the issue at hand is that hunting with spears was obviously a component of the behavioral/cultural context of these Homo, a component that calls up modern human analogs.

The archaeological record does not provide much direct evidence for group organization, or even size, at least not for this time period. For more recent times the size and debris patterns of structures provide much useful evidence of social organization, but such patterns degrade rapidly and survive for tens of thousands of years, let alone half a million, only in the most ideal conditions. All of the possible campsites from the time period of interest here are controversial. Thirty years ago de Lumley (Lumley and Boone 1976) presented a dramatic argument for huts, seasonal occupation, and reuse at the French site of Terra Amata, an interpretation that still survives in textbooks (Turnbaugh, Jurmain et al. 1999). The site has never been published in detail, and the one independent analysis cast considerable doubt on de Lumley's interpretation (Villa 1983). At best, there is evidence for a few post holes, stone blocks, stone knapping debris, and shallow hearths scooped from the sand. These may be the remains of a flimsy shelter, and would certainly fit into a reconstruction of a small hunter-gatherer band. Unfortunately, this optimistic picture is at least premature, and probably unwarranted. Villa's analysis indicates that the integrity of the site is much lower than presented by de Lumley. It is in reality an accumulation of cultural debris that has been moved and altered by natural processes. Yes, this provides evidence of hominid activity, but not of a coherent campsite with multiple activities and an artificial structure. There are no better examples until much later. Indeed, Gamble (1994; 1999) argues that this absence of campsites is an accurate reflection of the lifeways of early Homo sapiens, and that coherent long-term or multiple-use campsites were not part of the adaptive niche. If true, these early Homo sapiens were not like modern hunters and gatherers.

There are other ways in which they were not modern. As familiar as some of this evidence is, there is a striking piece of modern behavior that was entirely missing. We have no convincing evidence of art, or personal ornamentation, or anything that clearly was an artifact of symbolic culture. Many sites do have lumps of red ochre, some of which had been scraped or ground. A few sites have enigmatic scratched bones. None of this constitutes indisputable evidence that these hunters and gatherers used any material symbols, of any kind whatsoever. Compared to the abundant use of such items in sites post-dating 50,000, this absence is telling. In sum, the archaeological evidence indicates that by 400,000 years ago Homo was a hunter-gatherer who had invaded new, more hostile environments, but who did not invest in symbolic artifacts. Despite similarities to modern hunters and gatherers, these early Homo sapiens were different in many respects.

3.2.2.2 Selection

What might have selected for the cognitive abilities required for three-dimensional, congruent symmetries?
Again, mate selection is a possibility, this time by way of technological skill. An individual who could produce a more regular (symmetrical) artifact would be cuing his or her skill, and worth as a potential mate. Other things being equal, the stone knapper who produced the fine three-dimensional bifaces was smarter and more capable, with better genes, than one who couldn't. Especially if knapping skill correlated with other technological abilities, this would be one means of identifying mates with future potential as providers. Kohn and Mithen (1999) take this argument even further, framing it in terms of sexual selection and emphasizing abilities other than technological. "Those hominids...who were able to make fine symmetrical handaxes may have been preferentially chosen by the opposite sex as mates. Just as a peacock's tail may reliably indicate its 'success', so might the manufacture of a fine symmetrical handaxe have been a reliable indicator of the hominid's ability to secure food, find shelter, escape from predation and compete within the social group. Such hominids would have been attractive mates, their abilities indicating 'good genes'" (Kohn and Mithen 1999:521). Modern people certainly do use material culture to mark their individual success, and it is perhaps not far-fetched to extend this behavior into the past, perhaps even to the time of late Homo erectus or early Homo sapiens.

A second possibility is that selection operated on enhanced spatial and/or shape cognition, with artifactual symmetry being just one consequence. Given the co-occurrence of hunting and gathering and modern spatial thinking in the paleoanthropological record, this hypothesis suggests that they are somehow linked. What selective advantage could congruency, three-dimensional symmetries, and image manipulation bestow on a hunter-gatherer?

William Calvin (1993) has long argued that aimed throwing was a key to cognitive evolution. While I find his argument that bifaces were projectiles far from convincing (see also Whittaker and McCall 2001), the Schoeningen spears may well have been, indeed probably were, projectiles. That there is a spatial component to accurate throwing seems beyond question. Calvin himself emphasizes the importance of timed release and the computational problems of hitting moving targets. Would any of this select for image manipulation, congruency, and so on? It is hard to see how, unless the ability to estimate distance to target selected for abilities in judging all spatial quantities (e.g., congruent symmetries). The selective agent, throwing, just does not match up well with the documented abilities.

Navigation is again an alternative, and one favored by many psychologists (e.g., Gaulin and Hoffman 1988; Moffat, Hampson et al. 1998; Dabbs, Chang et al. 1998; Silverman, Choi et al. 2000). While route following using sequential landmarks can work, using basic topological notions like those known for chimpanzees and early hominids, it is difficult to conceive of and follow novel routes without a Euclidean conception of location. Arguably hunting, especially varieties involving long-distance travel, herd following, or intercept techniques, would favor Euclidean conceptions of space. There is now experimental evidence documenting a correlation between navigational skill and the standard psychometric measures of spatial cognition like mental rotation (Moffat, Hampson et al. 1998; Silverman, Choi et al.
2000), though recall that "near" and "far" space are not handled identically by the brain (Marshall and Fink 2001). This specific selective hypothesis, then, is a better fit than the throwing hypothesis. Of course, navigation skill might have been unrelated to hunting per se, and instead tied to mate searching (Gaulin and Hoffman 1988) or any long-distance travel. What is provocative is the correlation between the earliest evidence for large-scale hunting and Euclidean spatial relations, as represented by fine three-dimensional symmetries.

While the correlation between this development in spatial thinking and navigation is provocative, it does have two weaknesses. First, many animals are fine navigators without relying on the enhanced spatial abilities in question. Of course, what we are seeing here is the hominid solution to navigation problems, so I am not too troubled by this objection. The second objection is more bothersome. When modern people navigate, they rarely use the spatial abilities in question. For example, when modern hunters and gatherers move across the landscape they use established paths and routes, often following waterways or animal trails (Baluchet 1992; Gamble 1999). The geometric underpinning of such navigation is largely topological, and does not rely on the kind of spatial abilities evident in the stone tools. Yes, it is possible to imagine a form of navigation that relies on such abilities, but this does not seem to be the way people actually move about. Unless there is a compelling reason to think that modern hunter-gatherers rely on Euclidean spatial relations to navigate, it will remain a weak hypothesis.

Of course, these spatial and shape abilities may not have been directly selected for at all. They may be by-products of natural selection operating on other cognitive mechanisms. For example, if Kosslyn's characterization of mental imaging (Kosslyn 1994; Kosslyn, Thompson et al. 1998) is accurate, the key development may have been in central processing rather than the more encapsulated shape recognition system or spatial assessment system. These are relatively discrete neural networks that reside in different parts of the brain (one in the temporal lobe, one in the parietal). In order for someone to conceive of congruency, and perhaps alternative perspectives, the two outputs must be coordinated, and this coordination appears to happen in the association areas of the frontal lobe. Here the evolutionary development would be in the area of association and central processing, and there is no reason for selection to have been for shape recognition or spatial ability per se. In other words, the archaeological evidence for the development of three-dimensional, congruent symmetries may inform us about developments in more general cognitive abilities, not just a narrowly encapsulated module of spatial thinking.

4. Conclusion

The archaeological record of symmetry reveals two of the times at which significant developments in hominid cognition occurred. The first, a million and a half years ago, encompassed cognitive developments necessary to the imposition of shape on artifacts, the coordination of shape recognition (symmetry) and spatial thinking (stone knapping) being the most salient. This evolutionary development was associated with Homo erectus, and the appearance of the first hominid adaptation that was clearly outside the range of an ape adaptive grade.
These Homo erectus were not, however, like modern hunter-gatherers in any significant sense; indeed, there are no appropriate analogs living today, and the precise agents selecting for these cognitive abilities are not apparent.

The second episode evident from artifactual symmetries occurred a million years later and encompassed the development of modern Euclidean understandings and manipulations of shape and space. This was also the time of the transition from Homo erectus to Archaic Homo sapiens. The appearance of large mammal hunting in the contemporary archaeological record lends some support to evolutionary psychological arguments that hunting may have selected for features of human spatial cognition, either by way of projectile use or navigation. However, given the range of evidence documenting the appearance of many features of hunting and gathering at this time -- not just spatial thinking -- it is perhaps simpler to posit a few developments in associative abilities than a raft of specific cognitive mechanisms. It is also important to reiterate that despite being Homo sapiens, these were not modern hunters and gatherers. They lacked the rich symbolic milieu on which modern humans, including hunters and gatherers, rely.

Archaeology cannot itself resolve many of the controversies raised by the evidence. Questions concerning the cognitive and neural bases of the actions preserved in the archaeological record must be answered in studies of modern cognition. Archaeology can point to the times and contexts of cognitive evolution, but cannot itself illuminate the workings of the human mind. A comprehensive approach to cognitive evolution must therefore be multidisciplinary.

References Cited

Ascenzi, A., F. Mallegni, et al. (2000). "A re-appraisal of Ceprano calvaria affinities with Homo erectus, after the new reconstruction." Journal of Human Evolution 39: 443-450.
Baluchet, S. (1992). Spatial mobility and access to resources among the African Pygmies. Mobility and Territoriality: Social and Spatial Boundaries Among Foragers, Fishers, Pastoralists, and Peripatetics. M. Casimir and A. Rao. Oxford, Berg.
Bar Yosef, O. (1987). "Pleistocene connexions between Africa and Southwest Asia: an archaeological perspective." African Archaeological Review 5: 29-38.
Barkow, J., L. Cosmides, et al., Eds. (1992). The Adapted Mind: Evolutionary Psychology and the Generation of Culture. New York, Oxford University Press.
Bateson, G. (1972). A re-examination of "Bateson's Rule". Steps to an Ecology of Mind. G. Bateson. New York, Ballantine: 379-398.
Bednarik, R. (1995). "Concept-mediated marking in the Lower Palaeolithic." Current Anthropology 36: 605-634.
Bermudez de Castro, J. M., J. L. Arsuaga, et al. (1997). "A hominid from the Lower Pleistocene of Atapuerca, Spain: Possible ancestor to Neandertals and modern humans." Science 276: 1392-1395.
Binford, L. R. (1985). "Human ancestors: Changing views of their behavior." Journal of Anthropological Archaeology 4: 292-327.
Bock, G. and G. Cardew, Eds. (1997). Characterizing Human Psychological Adaptations. Ciba Foundation Symposia. New York, John Wiley & Sons.
Boysen, S., G. Berntson, et al. (1987). "Simian scribbles: A reappraisal of drawing in the chimpanzee (Pan troglodytes)." Journal of Comparative Psychology 101: 82-89.
Brain, C. K. and A. Sillen (1988). "Evidence from the Swartkrans cave for the earliest use of fire." Nature 336: 464-466.
Cachel, S. and J. Harris (1995). Ranging patterns, land-use and subsistence in Homo erectus from the perspective of evolutionary biology. Evolution and Ecology of Homo erectus. J. Bower and S. Sartono. Leiden, Pithecanthropus Centennial Foundation: 51-66.
Calvin, W. (1993). The unitary hypothesis: a common neural circuitry for novel manipulations, language, plan-ahead, and throwing? Tools, Language, and Cognition in Human Evolution. K. Gibson and T. Ingold. Cambridge, Cambridge University Press: 230-250.
Clark, G. (1977). World Prehistory. Cambridge, Cambridge University Press.
Cremaschi, M. and C. Peretto (1988). "Les sols d'habitat du site paleolithique d'Isernia la Pineta (Molise, Italie Centrale)." L'Anthropologie 92(4): 1017-1040.
Dabbs, J. M. J., E.-L. Chang, et al. (1998). "Spatial ability, navigation strategy, and geographic knowledge among men and women." Evolution and Human Behavior 19: 89-98.
Eals, M. and I. Silverman (1994). "The hunter-gatherer theory of spatial sex differences: Proximate factors mediating the female advantage in recall of object arrays." Ethology and Sociobiology 15: 95-105.
Gamble, C. (1994). Timewalkers: The Prehistory of Global Colonization. Cambridge, Mass., Harvard University Press.
Gamble, C. (1999). The Palaeolithic Societies of Europe. Cambridge, Cambridge University Press.
Gangestad, S. (1997). Evolutionary psychology and genetic variation: non-adaptive, fitness-related, and adaptive. Characterizing Human Psychological Adaptations. G. Bock and G. Cardew. New York, John Wiley and Sons: 212-230.
Gaulin, S. and H. Hoffman (1988). Evolution and development of sex differences in spatial ability. Human Reproductive Behavior: A Darwinian Perspective. L. Betzig, M. Borgerhoff Mulder and P. Turke. Cambridge, Cambridge University Press: 129-152.
Harris, J. W. K. (1983). "Cultural beginnings: Plio-Pleistocene archaeological occurrences from the Afar, Ethiopia." African Archaeological Review 1: 3-31.
Howell, F. C., M. Kleindienst, et al. (1972). "Uranium series dating of bone from the Isimila prehistoric site, Tanzania." Nature 237: 51-52.
Isaac, G. (1984). The archaeology of human origins: studies of the lower Pleistocene in East Africa 1971-1981. Advances in World Archaeology. F. Wendorf and A. Close. New York, Academic Press: 1-87.
James, S. (1989). "Hominid use of fire in the Lower and Middle Pleistocene: A review of the evidence." Current Anthropology 30(1): 1-26.
Jones, P. (1981). Experimental implement manufacture and use: a case study from Olduvai Gorge, Tanzania. The Emergence of Man. J. Young, E. Jope and K. Oakley. London, The Royal Society and the British Academy: 189-195.
Klein, R. (2000). "Archeology and the evolution of human behavior." Evolutionary Anthropology 9(1): 17-36.
Kohn, M. and S. Mithen (1999). "Handaxes: products of sexual selection?" Antiquity 73: 518-526.
Kosslyn, S. (1994). Image and Brain: The Resolution of the Imagery Debate. Cambridge, Mass., MIT Press.
Kosslyn, S., W. Thompson, et al. (1998). "Neural systems that encode categorical vs. coordinate spatial relations: PET investigations." Psychobiology 26(4): 333-347.
Kosslyn, S. M. (1999). "If neuroimaging is the answer, what is the question?" Philosophical Transactions of the Royal Society of London, B 354: 1283-1294.
Linn, M. C. and A. C. Petersen (1986). A meta-analysis of gender differences in spatial ability: Implications for mathematics and science achievement. The Psychology of Gender. J. Hyde and M. Linn. Baltimore, Johns Hopkins University Press: 67-101.
Lumley, H. d. and Y. Boone (1976). Les structures d'habitat au Paleolithique inferieur. La Prehistoire Francaise. H. d. Lumley. Paris, Centre National de la Recherche Scientifique. 1: 625-643.
Marshall, J. C. and G. R. Fink (2001). "Spatial cognition: Where we were and where we are." Neuroimage 14: S2-S7.
McGrew, W. C. (1989). "Comment on "Hominid use of fire in the Lower and Middle Pleistocene" by Steven James." Current Anthropology 30(1): 16-17.
Mithen, S. (1996). The Prehistory of the Mind. London, Thames and Hudson.
Moffat, S. D., E. Hampson, et al. (1998). "Navigation in a "virtual" maze: Sex differences and correlation with psychometric measures of spatial ability in humans." Evolution and Human Behavior 19: 73-87.
Morris, D. (1962). The Biology of Art. London, Methuen.
Noble, W. and I. Davidson (1996). Human Evolution, Language, and Mind: A Psychological and Archaeological Inquiry. Cambridge, Cambridge University Press.
Oakley, K., P. Andrews, et al. (1977). "A reappraisal of the Clacton spear." Proceedings of the Prehistoric Society 43: 1-12.
Piaget, J. and B. Inhelder (1967). The Child's Conception of Space. New York, Norton.
Potts, R. (1988). Early Hominid Activities at Olduvai. New York, Aldine.
Rightmire, G. P. (1992). "Homo erectus: Ancestor or evolutionary side branch?" Evolutionary Anthropology 1(2): 43-49.
Rightmire, G. P. (1998). "Human evolution in the Middle Pleistocene: The role of Homo heidelbergensis." Evolutionary Anthropology 6(6): 218-227.
Roberts, M. B. and S. A. Parfitt (1999). Boxgrove: A Middle Pleistocene hominid site at Eartham Quarry, Boxgrove, West Sussex. London, English Heritage.
Roberts, M. B., C. B. Stringer, et al. (1994). "A hominid tibia from Middle Pleistocene sediments at Boxgrove, UK." Nature 369: 311-313.
Robson Brown, K. (1993). "An alternative approach to cognition in the Lower Palaeolithic: The modular view." Cambridge Archaeological Journal 3: 231-245.
Roebroeks, W., N. J. Conard, et al. (1992). "Dense forests, cold steppes, and the Palaeolithic settlement of northern Europe." Current Anthropology 33(5): 551-586.
Rohles, F. and J. Devine (1967). "Further studies of the middleness concept with the chimpanzee." Animal Behaviour 15: 107-112.
Schick, K. and N. Toth (1993). Making Silent Stones Speak: Human Evolution and the Dawn of Technology. New York, Simon & Schuster.
Schick, K., N. Toth, et al. (1999). "Continuing investigations into the stone tool-making capabilities of a Bonobo (Pan paniscus)." Journal of Archaeological Science 26: 821-832.
Schiller, P. (1951). "Figural preferences in the drawings of chimpanzee." Journal of Comparative and Physiological Psychology 44: 101-110.
Schlanger, N. (1996). "Understanding Levallois: Lithic technology and cognitive archaeology." Cambridge Archaeological Journal 6(2): 231-254.
Silverman, I., J. Choi, et al. (2000). "Evolved mechanisms underlying wayfinding: further studies on the hunter-gatherer theory of spatial sex differences." Evolution and Human Behavior 21: 201-213.
Singer, R., B. G. Gladfelter, et al. (1993). The Lower Paleolithic Site at Hoxne, England. Chicago, University of Chicago Press.
Smith, D. (1973). "Systematic study of chimpanzee drawing." Journal of Comparative and Physiological Psychology 82: 406-414.
Stout, D., N. Toth, et al. (2000). "Stone tool-making and brain activation: Positron emission tomography (PET) studies." Journal of Archaeological Science 27: 1215-1223.
Tattersall, I. (2000). "Paleoanthropology: The last half-century." Evolutionary Anthropology 9(1): 2-16.
Thieme, H. (1997). "Lower Palaeolithic hunting spears from Germany." Nature 385: 807-810.
Toth, N. (1985). "Archaeological evidence for preferential right-handedness in the Lower and Middle Pleistocene, and its possible implications." Journal of Human Evolution 14: 607-614.
Toth, N. (1985). "The Oldowan reassessed: A closer look at early stone artifacts." Journal of Archaeological Science 12: 101-120.
Toth, N. and K. Schick (1986). The first million years: the archaeology of protohuman culture. Advances in Archaeological Method and Theory. M. Schiffer. New York, Academic Press. 9: 1-96.
Toth, N., K. Schick, et al. (1993). "Pan the tool-maker: Investigations into the stone tool-making and tool-using capabilities of a bonobo (Pan paniscus)." Journal of Archaeological Science 20: 81-91.
Turnbaugh, W. A., R. Jurmain, et al. (1999). Understanding Physical Anthropology and Archaeology. Belmont, CA, West.
Tyler, C. W. (1996). Human symmetry perception. Human Symmetry Perception and its Computational Analysis. C. W. Tyler. Utrecht, VSP: 3-21.
Ungerleider, L. G. (1995). "Functional brain imaging studies of cortical mechanisms for memory." Science 270: 769-775.
Villa, P. (1983). Terra Amata and the Middle Pleistocene archaeological record of southern France. Berkeley, University of California Press.
Wagemans, J. (1996). Detection of visual symmetries. Human Symmetry Perception and its Computational Analysis. C. W. Tyler. Utrecht, VSP: 25-48.
Walker, A. and R. Leakey, Eds. (1993). The Nariokotome Homo erectus skeleton. Cambridge, Harvard University Press.
Washburn, D. K. and D. W. Crowe (1988). Symmetries of Culture: Theory and Practice of Plane Pattern Analysis. Seattle, University of Washington Press.
Wenban-Smith, F. (1989). "The use of canonical variates for determination of biface manufacturing technology at Boxgrove Lower Palaeolithic site and the behavioral implications of this technology." Journal of Archaeological Science 16: 17-26.
Whittaker, J. C. and G. McCall (2001). "Handaxe-hurling hominids: An unlikely story." Current Anthropology 42(4): 566-572.
Wright, R. (1972). "Imitative learning of flaked tool technology: The case of an orangutan." Mankind 8: 296-306.
Wynn, T. (1979). "The intelligence of later Acheulean hominids." Man 14: 371-391.
Wynn, T. (1981). "The intelligence of Oldowan hominids." Journal of Human Evolution 10: 529-541.
Wynn, T. (1985). "Piaget, stone tools, and the evolution of human intelligence." World Archaeology 17: 32-43.
Wynn, T. (1989). The Evolution of Spatial Competence. Urbana, University of Illinois Press.
Wynn, T. (1993). "Two developments in the mind of early Homo." Journal of Anthropological Archaeology 12: 299-322.
Wynn, T. (1998). "Did Homo erectus speak?" Cambridge Archaeological Journal 8(1): 78-81.
Wynn, T. and W. C. McGrew (1989). "An ape's view of the Oldowan." Man 24: 283-298.
Wynn, T. and F. Tierson (1990). "Regional comparison of the shapes of later Acheulean handaxes." American Anthropologist 92: 73-84.
Wynn, T., F. Tierson, et al. (1996). "Evolution of sex differences in spatial cognition." Yearbook of Physical Anthropology 39: 11-42.
Yellen, J., A. Brooks, et al. (1995). "A Middle Stone Age worked bone industry from Katanda, Upper Semliki Valley, Zaire." Science 268: 553-556.

From checker at panix.com Mon Dec 26 02:24:36 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 25 Dec 2005 21:24:36 -0500 (EST)
Subject: [Paleopsych] BBS: Webb, B (2001) Can robots make good models of biological behaviour?
Message-ID: 

Webb, B (2001) Can robots make good models of biological behaviour?
http://www.bbsonline.org/documents/a/00/00/04/64/bbs00000464-00/index.html

BEHAVIORAL AND BRAIN SCIENCES (2001) 24(6)

Barbara Webb
Centre for Computational and Cognitive Neuroscience
Department of Psychology
University of Stirling
Stirling FK9 4LA Scotland, U.K.
b.h.webb at stir.ac.uk
www.stir.ac.uk/psychology/Staff/bhw1/

Abstract: How should biological behaviour be modelled? A relatively new approach is to investigate problems in neuroethology by building physical robot models of biological sensorimotor systems. The explication and justification of this approach are here placed within a framework for describing and comparing models in the behavioural and biological sciences. First, simulation models - the representation of a hypothesis about a target system - are distinguished from several other relationships also termed 'modelling' in discussions of scientific explanation. Seven dimensions on which simulation models can differ are defined and distinctions between them discussed:

1. Relevance: whether the model tests and generates hypotheses applicable to biology.
2. Level: the elemental units of the model in the hierarchy from atoms to societies.
3. Generality: the range of biological systems the model can represent.
4. Abstraction: the complexity, relative to the target, or amount of detail included in the model.
5. Structural accuracy: how well the model represents the actual mechanisms underlying the behaviour.
6. Performance match: to what extent the model behaviour matches the target behaviour.
7. Medium: the physical basis by which the model is implemented.

No specific position in the space of models thus defined is the only correct one, but a good modelling methodology should be explicit about its position and the justification for that position. It is argued that in building robot models biological relevance is more effective than loose biological inspiration; that multiple levels can be integrated; that generality cannot be assumed but might emerge from studying specific instances; that abstraction is better done by simplification than idealisation; that accuracy can be approached through iterations of complete systems; that the model should be able to match and predict target behaviour; and that a physical medium can have significant advantages. These arguments reflect the view that biological behaviour needs to be studied and modelled in context, that is, in terms of the real problems faced by real animals in real environments.

Keywords: models; simulation; animal behaviour; neuroethology; robotics; realism; levels.

Barbara Webb joined the Psychology Department at Stirling University in January 1999. Previously she lectured at the University of Nottingham (1995-1998) and the University of Edinburgh (1993-1995). She received her Ph.D. (in Artificial Intelligence) from the University of Edinburgh in 1993, and her B.Sc. (in Psychology) from the University of Sydney in 1988.

1. Introduction

'Biorobotics' can be defined as the intersection of biology and robotics. The common ground is that robots and animals are both moving, behaving systems; both have sensors and actuators and require an autonomous control system that enables them to successfully carry out various tasks in a complex, dynamic world. In other words, "it was realised that the study of autonomous robots was analogous to the study of animal behaviour" (Dean, 1998, p.60), hence robots could be used as models of animals.
As summarised by Lambrinos et al. (1997), "the goal of this approach is to develop an understanding of natural systems by building a robot that mimics some aspects of their sensory and nervous system and their behaviour" (p.185). Dean (op. cit.) reviews some of this work, as do Meyer (1997), Beer et al. (1998), Bekey (1996), and Sharkey & Ziemke (1998), although the rapid growth and interdisciplinary nature of the work make it difficult to review comprehensively.

Biorobotics will here be considered as a new methodology in biological modelling, rather than as a new 'field' per se. It can then be discussed directly in relation to other forms of modelling. Rather than vague justification in terms of intuitive similarities between robots and animals, the tenets of the methodology can be more clearly stated and a basis for comparison to other approaches established.

However, a difficulty that immediately arises is that a "wide divergence of opinion exists concerning the proper role of models" in biological research (Reeke & Sporns, 1993, p.597). For example, the level of mechanism that should be represented in the model is often disputed. Cognitivists criticise connectionism for being too low level (Fodor & Pylyshyn, 1988), while neurobiologists complain that connectionism abstracts too far from real neural processes (Crick, 1989). Other debates address the most appropriate means for implementing models. Purely computer-based simulations are criticised by advocates of sub-threshold transistor technology (Mead, 1989) and by supporters of real-world robotic implementations (Brooks, 1986). Some worry about oversimplification (Segev, 1992) while others deplore overcomplexity (Maynard Smith, 1974; Koch, 1999). Some set out minimum criteria for good models in their area (e.g. Pfeifer, 1996; Selverston, 1993); others suggest there are fundamental trade-offs between desirable model qualities (Levins, 1966).

The use of models at all is sometimes disputed, on the grounds that detailed models are premature and more basic research is needed. Croon & van de Vijver (1994) argue that "Developing formalised models for phenomena which are not even understood on an elementary level is a risky venture: what can be gained by casting some quite gratuitous assumptions about particular phenomena in a mathematical form?" (pp.4-5). Others argue that "the complexity of animal behaviour demands the application of powerful theoretical frameworks" (Barto, 1991, p.94) and that "nervous systems are simply too complex to be understood without the quantitative approach that modelling provides" (Bower, 1992, p.411). More generally, the formalization involved in modelling is argued to be an invaluable aid in theorising - "important because biology is full of verbal assertions that some mechanism will generate some result, when in fact it won't" (Maynard Smith, 1988, p.231).

Beyond the methodological debates, there are also meta-arguments regarding the role and status of models in both pure and applied sciences of behaviour. Are models essential to gaining knowledge or just convenient tools? Can we ever really validate a model (Oreskes et al., 1994)? Is reification of models mistaken, i.e. can a model of a process ever be a replica of that process (Pattee, 1989; Webb, 1991)? Do models really tell us anything we didn't already know?
In what follows a framework for the description and comparison of models will be set out in an attempt to answer some of these points, and the position of biorobotics with regard to this framework will be made clear. Section 2 will explicate the function of models, in particular to clarify some of the current terminological confusion, and define 'biorobotic' modelling. Section 3 will describe different dimensions that can be used to characterise biological models, and discuss the relationships between them. Section 4 will lay out the position of robot models in relation to these dimensions, and discuss how this position reflects a particular perspective on the problems of explaining biological behaviour.

2. The process of modelling

2.1 The "model muddle" (Wartofsky, 1979)

Discussions of the meaning and process of modelling can be found: in the philosophy of science, e.g. Hesse (1966), Harre (1970b), Leatherdale (1974), Bunge (1973), Wartofsky (1979), Black (1962) and further references throughout this paper; in cybernetic or systems theory, particularly Zeigler (1976); and in textbooks on methodology - recent examples include Haefner (1996), Giere (1997), and Doucet & Sloep (1992). It also arises as part of some specific debates about approaches in biology and cognition: in ecological modelling, e.g. Levins (1966) and Orzack & Sober (1993); in cognitive simulation, e.g. Fodor (1968), Colby (1981), Fodor & Pylyshyn (1988), Harnad (1989); in neural networks, e.g. Sejnowski et al. (1988), Crick (1989); and in Artificial Life, e.g. Pattee (1989), Chan & Tidwell (1993). However, the situation is accurately summed up by Leatherdale (1974): "the literature on models displays a bewildering lack of agreement about what exactly is meant by the word 'model' in relation to science" (p.41). Not only 'model' but most of the associated terms - such as 'simulation', 'representation', 'realism', 'accuracy' and 'validation' - have come to be used in a variety of ways by different authors. Several distinct frameworks for describing models can be found, some explicit and some implicit, most of which seem difficult to apply to real examples of model building. Moreover, many authors seem to present their usage as the obvious or correct one and thus fail to spell out how it relates to previous or alternative approaches. Chao in 1960 noted 30 different, sometimes contradictory, definitions of 'model' and the situation has not improved.

There does seem to be general agreement that modelling involves the relationship of representation or correspondence between a (real) target system and something else (1). Thus "A model is a representation of reality" (Lamb, 1987, p.91) or "all [models] provide representations of the world" (Hughes, 1997, p.325). What might be thought uncontroversial examples are: a scale model of a building, which corresponds in various respects to an actual building; and the billiard-ball model of gases, suggesting a correspondence of behaviour in microscopic particle collisions to macroscopic object collisions. Already, however, we find some authors ready to dispute the use of the term model for one or other of these examples.
Thus Kaplan (1964) argues that purely sentential descriptions like the billiard-ball example should not be called models; whereas Kacser (1960) maintains that only sentential descriptions should be called models, and that physical constructions like scale buildings should be called analogues; and Achinstein (1968) denies that scale buildings are analogies, while using 'model' for both verbal descriptions and some physical objects.

A large proportion of the discussion of models in the philosophy of science concerns the problem that reasoning by analogy is not logically valid. If A and A* correspond in factors x1, ..., xn, it is not possible to deduce that they will therefore correspond in factor xn+1. Underdetermination is another aspect of essentially the same problem: if two systems behave the same, it is not logically valid to conclude that the cause or mechanism of the behaviour is the same; so a model that behaves like its target is not necessarily an explanation of the target's behaviour. These problems are sometimes raised in arguments about the practical application of models, e.g. Oreskes et al. (1994) use underdetermination to argue that validation of models is impossible. Weitzenfeld (1984) suggests that a defence against this problem can be made by arguing that if there is a true isomorphism between A and A*, the deduction is valid, and the problem is only to demonstrate the isomorphism. Similar reasoning perhaps explains the frequently encountered claim that a model is "what mathematicians call an isomorphism" (Black, 1962, p.222) - a one-to-one mapping - of relevant aspects (Schultz & Sullivan, 1972) or essential structure (Kaplan, 1964). Within cybernetic theory one can find formal definitions of models (e.g. Klir & Valach, 1965) that require there to be a complete isomorphic or homomorphic mapping of all elements of a system, preserving all relationships.

However, this is not helpful when considering most actual examples of models (unless one allows there "to be as many definitions possible to isomorphism as to model" (Conant & Ashby, 1991, p.516)). In the vast majority of cases, models are not (mathematical) isomorphisms, nor are they intended to be. Klir and Valach (op. cit.) go on to include as examples of models "photos, sculptures, paintings, films... even literary works" (p.115). It would be interesting to know how they intend to demonstrate a strict homomorphism between Anna Karenina and "social, economic, ethical and other relations" (op. cit.) in 19th-century Russia. In fact it is just as frequently (and often by the same authors) emphasised that a model necessarily fails to represent everything about a system. For example, Black (1962) goes on to warn of "risks of fallacies of inference from inevitable irrelevancies or distortions in the model" (p.223); but if there is a true isomorphism, how can there be such a risk? A partial isomorphism is an oxymoron and, more to the point, cannot suffice for models to be used in valid deduction. Moreover, this approach to modelling obscures the fact that the purpose in modelling is often to discover the 'relevant features' or 'essential structures', so model usage cannot depend on knowing them in advance to establish the modelling relationship.

2.2 What use are models?

"There are things and models of things, the latter being also things, but used in a special way" (Chao, 1960, p.564)

Models are intended to help us deal in various ways with a system of interest. How do they fulfil this role?
It is common to discuss how they offer a convenient/cost-effective/manageable/safe substitute for working on or building the real thing. But this doesn't explain why working on the model has any relevance to the real system, or provide some basis by which relevance can be judged, i.e. what makes a model a useful substitute? It is easier to approach this by casting the role of modelling as part of the process of explanation and prediction described in the following diagram:

Figure 1: Models and the process of explanation

This picture can be regarded as an elaboration of standard textbook illustrations of either the 'hypothetico-deductive' approach or the 'semantic' approach to science (see below). To make each part of the diagram clear, consider an example. Our target - selected from the world - might be the human cochlea and the human behaviour of pitch perception. Our hypothesis might be that particular physical properties of the basilar membrane enable differently positioned hair cells to respond to different sound frequencies. One source of this idea may be the Fourier transform, and the associated notion of a bank of frequency filters as a way of processing sound. To see what is predicted by the physical properties of the basilar membrane we might build a symbolic simulation of the physical properties we think perform the function, and run it using computer technology, with different simulated sounds, to see if it produces the same output frequencies as the cochlea (in fact Bekesy first investigated this problem using rubber as the technology to represent the basilar membrane). We could interpret the dominant output frequency value as a pitch percept and compare it to human pitch perception for the same waveforms: insofar as it fails to match, we might conclude our hypothesis is not sufficient to explain human pitch perception. Or as Chan & Tidwell (1993) concisely summarise this process: we theorise that a system is of type T, and construct an analogous system to T, to see if it behaves like the target system.

I have purposely not used the term 'model' in the above description because it can be applied to different parts of this diagram. Generally, in this paper, I take 'modelling' to correspond to the function labelled 'simulation': models are something added to the 'hypothesis - prediction - observation' cycle merely as "prostheses for our brains" (Milinski, 1991). That is, modelling aims to make the process of producing predictions from hypotheses more effective by enlisting the aid of an analogical mechanism. A mathematical model such as the Hodgkin-Huxley equations sets up a correspondence between the processes in the theorised mechanism - the ionic conductances involved in neural firing - and processes defined on numbers, such as integration. We can more easily manipulate the numbers than the chemicals, so the results of a particular configuration can be more easily predicted; though limitations in the accuracy of the correspondence might compromise the validity of conclusions drawn.

However, under the 'semantic' approach to scientific explanation (Giere, 1997) the hypothesis itself is regarded as a model, i.e. it specifies a hypothetical system of which the target is supposed to be a type. The process of prediction is then described as demonstration (Hughes, 1997) of how this hypothetical system should behave like the target.
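To make the cochlea example concrete, here is a minimal sketch of the kind of simulation described: a bank of bandpass filters standing in for positions along the basilar membrane, with the most energetic channel read out as the 'pitch percept'. This is my illustration, not Webb's; the channel spacing, filter design and read-out rule are assumptions of convenience, not claims about the ear.

    import numpy as np
    from scipy.signal import butter, lfilter

    def filterbank_pitch(sound, fs, centres=None):
        """Crude basilar-membrane stand-in: a bank of bandpass filters.
        Returns the centre frequency of the most energetic channel,
        interpreted here as the model's 'pitch percept'."""
        if centres is None:
            centres = np.geomspace(100, 4000, 30)   # illustrative channel spacing
        energies = []
        for fc in centres:
            lo, hi = fc / 1.2, fc * 1.2             # arbitrary bandwidth per channel
            b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            energies.append(np.sum(lfilter(b, a, sound) ** 2))
        return centres[int(np.argmax(energies))]

    fs = 16000
    t = np.arange(0, 0.5, 1 / fs)
    print(filterbank_pitch(np.sin(2 * np.pi * 440 * t), fs))  # ~440 Hz

Comparing such read-outs against human judgments of the same waveforms is the 'compare predictions to observations' step of Figure 1: a mismatch counts against the sufficiency of the hypothesis, exactly as described above.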
Demonstration of the consequences of the hypothesis may involve another level of representation in which the hypothesis is represented by some other system, also called a model. This system may be something already 'found' - an analogical or source model - or something built - a simulation model (Morgan, 1997). Moreover, the target itself can also be considered a 'model', in so far as it involves abstraction or simplification in selecting a system from the world (Cartwright, 1983). This idea perhaps underlies Gordon's (1969) definition of model: "we define a model as the body of information about a system gathered for the purpose of studying the system" (p.5).

2.3 Theories, models, simulations and sources

While the usage of 'model' to mean the target is relatively rare, it is common to find 'model' used interchangeably with 'hypothesis' and 'theory' (2): even claims that "A model is a description of a system" (Haefner, 1996, p.4) or "A scientific model is, in effect, one or a set of statements about reality" (Ackoff, 1962, p.109). This usage of 'model' is often qualified, most commonly as the theoretical model, but also as the conceptual model (Rykiel, 1996; Ulinski, 1999; Brooks & Tobias, 1996), sentential model (Harre, 1970a), abstract model (Spriet & Vansteenkiste, 1982), or, confusingly, the real model (Maki & Thompson, 1973) or base model (Zeigler, 1976). The tendency to call the hypothesis a model seems to be linked to how formal or precise a specification it provides (Braithwaite, 1960), as hypotheses can range from vague qualitative predictions to Zeigler's (1976) notion of a well-described base model, which involves defining all input, output and state variables, and their transfer and output functions, as a necessary prior step to simulation.

The common concept of the theoretical model is that of a hypothesis that describes the components and interactions thought sufficient to produce the behaviour: "the actual building of the model is a separate step" (Brooks & Tobias, 1996, p.2). This separate step is implementation (3) as a simulation, which involves representing the hypothesis in some physical instantiation - taken here in its widest sense, i.e. including carrying out mathematical calculations or running a computer program, as well as more obviously 'physical' models. But as Maki & Thompson (1973) note: "in many cases it is very difficult to decide where the real model [the hypothesis] ends and the mathematical model [the simulation] begins" (p.4). Producing a precise formulation may have already introduced a number of 'technological' factors that are not really part of the hypothesis, in the sense that they are there only to make the solution possible, not because they are really considered to be potential components or processes in the target system. Grice (cited in Cartwright, 1983) called these "properties of convenience", and Colby (1981) makes this a basis for distinguishing models from theories: all statements of a theory are intended to be taken as true, whereas some statements in a model are not.

Simulation (4) is intended to augment our ability to deduce consequences from the assumptions expressed in the hypothesis: "a simulation program is ultimately only a high speed generator of the consequences that some theory assigns to various antecedent conditions" (Dennett, 1979, p.192); "models...help...by making predictions of unobvious consequences from given assumptions" (Reeke & Sporns, 1993, p.599).
Ideally, a simulation should clearly and accurately represent the whole of the hypothesis and nothing but the hypothesis, so conclusions based on the simulation are in fact correct conclusions about the hypothesis. However, a simulation must also necessarily be precise in the sense used above, that is, all components and processes must be fully specified for it to run. The formalization imposed by implementation usually involves elaborations or simplifications of the hypothesis to make it tractable, which may have no theoretical justification. In other words, as is generally recognised, any actual simulation contains a number of factors that are not part of the 'positive analogy' between the target and the model.

In the philosophy of science, discussion of 'simulation' models has been relatively neglected. Rather, as Redhead (1980) points out, the extensive literature on models in science is mostly about modelling in the sense of using a source analogy. A source (5) is a pre-existing system used in devising the hypothesis. For example, Amit (1989) describes how concepts like 'energy' from physics can be used in an analogical sense to provide powerful analysis tools for neural networks, without any implication that a 'physics level' explanation of the brain is being attempted. Though traditionally the source has been thought of as another physical system (e.g. a pump as the source of hypotheses for the functioning of the heart), it is plausible to consider mathematics to be a source. That is, mathematical knowledge provides a pre-existing set of components and operations we can put in correspondence to the hypothesised components and operations of our target. Mathematics just happens to be a particularly widely applicable analogy (Leatherdale, 1974).

It is worth explicitly noting that the source is not in the same relation to the hypothesis as the technology, i.e. what is used to implement the hypothesis in a simulation. Confusion arises because the same system can sometimes be used both as a source and as a technology. Mathematics is one example, and another of particular current relevance is the computer. The computer can be used explicitly as a source to suggest structures and functions that are part of the hypothesis (such as the information processing metaphor in cognition), or merely as a convenient way of representing and manipulating the structures and functions that have been independently hypothesised. It would be better if terms like 'computational neuroscience' that are sometimes used strongly in the source sense - computation as an explanatory notion for neuroscience - were not so often used more loosely in the technology sense: "not every research application that models neural data with the help of a computer should be called computational neuroscience" (Schwartz, 1990, p.x). Clarity is not served by having (self-labelled) computational neuroethologists, e.g. Beer (1990) and Cliff (1991), who apparently reject computation as an explanation of neuroethology.

2.4 Biorobotic models

Figure 1 suggests several different ways in which robots and animals might be related through modelling. First, there is a long tradition in which robots have been used as the source in explaining animal behaviour. Since at least Descartes (1662), regarding animals as merely complex machines and explaining their capabilities by analogy with man-made systems has been a common strategy.
It was most explicitly articulated in the cybernetic approach, which, in Wiener's subtitle to "Cybernetics" (1948), concerned "control and communication in the animal and the machine". It also pertains to the information processing approaches common today, in which computation is the source for explaining brains. Much work in biomechanics involves directly applying robot-derived analyses to animal capacities; for example, Walker (1995) attempts "to analyse the strengths and weaknesses of the ancient design of racoon hands from the point of view of robotics" (p.187).

Second, animals can be regarded as the source for hypotheses in robot construction. This is one widely accepted usage of the term 'biorobotics', sometimes called 'bio-mimetic' or 'biologically-inspired' robotics. For example, Ayers et al. (1998) suggest "the set of behavioural acts that a lobster or lamprey utilises in searching for and identifying prey is exactly what an autonomous underwater robot needs to perform to find mines". Pratt & Pratt (1998b) in their construction of walking machines "exploit three different natural mechanisms" - the knee, ankle and swing of animal legs - to simplify control. The connection to biology can range from fairly exact copies of mechanisms, e.g. Franceschini et al.'s (1992) electronic copy of the elementary motion detection circuitry of the fly, to adopting some high level principles, e.g. using the ethological concept of 'releasing stimuli' to control a robot via simple environmental cues (Connell, 1990), or the approach described in Mataric (1998).

For the following discussion, however, I wish to focus on a third relationship: robots used as simulations of animals, or how "robots can be used as physical models of animals to address specific biological questions" (Beer et al., 1998, p.777). The potential for building such models has increased enormously in recent years due to advances in both robot technology and neuroethological understanding, allowing "biologists/ethologists/neuroscientists to use robots instead of purely computational models in the modelling of living systems" (Sharkey & Ziemke, 1998, p.164). The following criteria have been adopted for the inclusion of work in what follows as biorobotic modelling, to avoid the necessity of discussing an unmanageably large body of work in robotics and biological modelling:

It must be robotic: the system should be physically instantiated and have unmediated contact with the external environment; the transduction is thus constrained by physics. The intention is to rule out purely computer-based models (i.e. where the environment as well as the animal is represented in the computer), and also computer sensing systems that terminate in descriptions rather than actions. This somewhat arbitrarily discounts verbal behaviour (e.g. visual classification) as sufficient; but to do so is consistent with most people's understanding of 'robotic'.

It must be biological: one aim in building the system should be to address a biological hypothesis or demonstrate understanding of a biological system. The intention is to rule out systems that might use some biological mechanisms but have no concern about altering them in ways that make them a worse representation, e.g. industrial robot arms, most computer vision, most neural net controllers.
It also rules out much of the behaviour-based approach in robotics, which uses "algorithms specifying robot behaviours that have analogy to behaviours of life-form[s]" (Yamaguchi, 1998, p.3204) but makes no serious attempt to compare the results to natural systems. Probably the largest set of borderline cases thus excluded is the use of various learning mechanisms for robot behaviour, except those specifically linked to animal behavioural or physiological studies.

There is already a surprisingly substantial amount of work even applying these criteria. The earliest examples come from mid-century, when theories of equilibrium (Ashby, 1952), learning (Shannon, 1951) and sensorimotor control (Grey Walter, 1961) were tested by building animal machines of various kinds - a number of other early examples are discussed in Young (1969). Current work tends to be more focused on specific biological systems, and ranges across the animal kingdom, from nematodes to humans. Table 1 lists a selection of recent studies, and to illustrate the approach I will describe three examples here in more detail.

1. A robot model of rat hippocampus: Burgess et al. (1997, 1998, 2000) have presented a model of the rat hippocampus implemented on a robot. "The use of a robot ensures the realism of the assumed sensory inputs and enables true evaluation of the navigational capability" (Burgess et al., 1997, p.1535). The robot uses edge-filtering on a camera image to sense the distance of walls in its environment, and a combination of visual and odometric information to link the distance to the allocentric direction of the walls, rotating in place to cover a sufficient field of view. They argue that these mechanisms "provide realistic simulation...since the rat's visual and odometric system appear to be relatively unsophisticated" (Burgess et al., 2000, p.306). This sensory information is encoded computationally by sensory 'cells' that effectively have 'receptive fields' for different directions and distances of walls. These feed to an array of 'entorhinal cells' which combine connections from sensory cells. These connect to the layer of 'place cells', with the connection pattern modifiable by competitive learning, thus representing the learnt place-dependent activity of cells observed in rat hippocampus. These cells further connect to a small number of goal cells, which also receive input from 'head direction' cells. By Hebbian learning of these connections when a goal is encountered, the network forms a representation which can be used to guide the robot's movement back to a goal position from novel locations. "[To] maintain close contact with the experimental situations in which the place cell data constraining the model was collected, the robot was tested in simple rectangular environments" (Burgess et al., 2000, p.306). The results show the robot is capable of good self-localisation while wandering in the environment and can reliably return to the goal position from novel locations. The effects of changing the environment (e.g. the proportions of the rectangle, or adding a new barrier) on the place cell representation and the search behaviour can be compared to the results in rats; some predictions from the model have been supported (Burgess et al., 2000). They further predict that cells with 'receptive fields' for direction and distance of barriers will be found within or upstream of the entorhinal cortex, but this is yet to be confirmed.
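The pipeline just described reduces to a short feedforward sketch. The Python fragment below is a loose paraphrase for illustration only, not the published model's equations: the layer sizes, the learning rates and the omission of the head-direction input are all simplifying assumptions made here.

    import numpy as np

    rng = np.random.default_rng(0)
    N_SENSORY, N_EC, N_PLACE = 32, 64, 16     # layer sizes chosen arbitrarily

    W_ec = rng.random((N_EC, N_SENSORY))      # sensory -> 'entorhinal' (fixed fan-in)
    W_place = rng.random((N_PLACE, N_EC))     # 'entorhinal' -> place (plastic)
    w_goal = np.zeros(N_PLACE)                # place -> goal cell (plastic)

    def visit(sensory, lr=0.1):
        """Feed activity forward; the most active place cell moves its
        incoming weights toward the current pattern (competitive learning)."""
        ec = W_ec @ sensory
        place = W_place @ ec
        winner = np.argmax(place)
        W_place[winner] += lr * (ec - W_place[winner])
        return place

    def at_goal(place, lr=0.5):
        """Hebbian association of the current place representation with the
        goal, applied when the robot encounters the goal location."""
        global w_goal
        w_goal += lr * place

    def goal_signal(place):
        """Scalar that rises as the current place representation approaches
        the learnt goal representation; usable to steer the return journey."""
        return w_goal @ place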
2. A robot model of desert ant navigation: The impressive homing capabilities of the desert ant Cataglyphis have been the subject of long study (Wehner, 1994). Several aspects of this behaviour have been investigated in robot models that operate in the same Sahara environment (Lambrinos et al., 1997; Moller et al., 1998; Lambrinos et al., 2000). Insects can use the polarisation pattern of the sky as a compass, with three 'POL' neurons in the brain integrating the response from crossed pairs of filters at three different orientations. This sensor-neural morphology has been duplicated in the robot. Two different models for extracting compass direction were considered: a 'scanning' mechanism that rotates to find a peak response which indicates the solar meridian (as had been previously proposed for the ant), and a novel 'simultaneous' mechanism that calculates the current direction from the pattern of neural output. The 'simultaneous' mechanism was substantially more efficient, as the robot (or ant) does not need to rotate 360 degrees each time it wants to refer to the compass. This compass was successfully used in a path integration algorithm, reducing the error in the robot's return to its starting location. A further development of the robot allowed the testing of hypotheses about landmark navigation. A conical mirror placed above a camera enabled the robot to get a 360-degree view of the horizon comparable to that of the ant. The 'snapshot' model proposed by Cartwright & Collett (1983) was implemented first: this matches the landmarks in a current view with a stored view, to create a set of vectors whose average is a vector pointing approximately in the home direction. The ability of this model to return the robot to a location was demonstrated in experiments using the same black cylinders as landmarks as were used for the ant experiments. Further, a simplification of the model was proposed, in which the robot (or animal) only stores an 'average landmark vector' rather than a full snapshot, and it was shown that the same homing behaviour could be reproduced. To provide "insights as to how the visual homing might be implemented in insect brains" (p.243), it has recently been implemented in analog electronic hardware (Moller, 2000) and successfully tested on a robot in reproductions of experiments performed on bees in which landmarks are moved or removed.

3. A robot model of human motor control: Schaal & Sternad (2001) present a comparison of human and robot behaviour to analyse the control of motor trajectories. This is used to address a critical question: does the apparent '2/3 power law' relating endpoint velocity to path curvature in human movement represent an explicit parameter implemented directly in the nervous system, or is it merely the by-product of other control mechanisms? The study measured humans making cyclic drawing motions, and modelled the behaviour using a 7 degree-of-freedom anthropomorphic robot arm, with PID control of joint movements based on simple sinusoidal target trajectories. The frequency, amplitude and phase of the sinusoids were estimated from measurements on the human subjects. They found that "As in the human data, for small perimeter values [the 2/3 law] was produced quite accurately, but, as in the human subjects, the same deterioration of power law fits were apparent for increasing pattern size" (p.67).
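The law at issue states that the angular velocity a of the endpoint scales with the curvature c of its path as a = k * c^(2/3), or equivalently that tangential speed scales as v = k * c^(-1/3). A useful reference point, sketched below rather than taken from Schaal & Sternad's actual analysis, is that purely harmonic motion around an ellipse satisfies the law exactly, so a log-log fit of speed against curvature recovers an exponent of -1/3:

    import numpy as np

    t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
    a_ax, b_ax = 2.0, 1.0                      # ellipse axes (arbitrary)
    x, y = a_ax * np.cos(t), b_ax * np.sin(t)  # harmonic 'drawing' motion

    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    v = np.hypot(dx, dy)                       # tangential speed
    c = np.abs(dx * ddy - dy * ddx) / v**3     # path curvature

    slope = np.polyfit(np.log(c), np.log(v), 1)[0]
    print(slope)                               # approximately -1/3

Driving sinusoidal joint angles through a non-linear arm kinematics, instead of driving the endpoint harmonically, degrades fits of this kind; that is the deviation discussed next.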
Moreover, they could explain these deviations as a consequence of non-linearities in the kinematic transform from joint control to end-effector trajectories, and so explain the power law as emergent from mechanisms for ensuring smooth movement in joint space.

It can thus be seen that useful results for biology have already been gained from robotic modelling. But it is still pertinent to ask: why use robots to simulate animals? How does this methodology differ from alternative approaches to modelling in biology? To answer these questions it is necessary to understand the different ways in which models can vary, which will now be examined.

3. Dimensions for describing models

Figure 2: Dimensions for describing models

Figure 2 presents a seven-dimensional view of the 'space' of possible biological models. If the 'origin' is taken to be using the system itself as its own model (to cover the view expressed by Rosenblueth & Wiener (1945) that "the best material model of a cat is another, or preferably the same, cat", p.316), then a model may be distanced from its target in terms of abstraction, approximation, generality or relevance. It may copy only higher levels of organisation, or represent the target using a very different material basis, or only roughly reproduce the target's behaviour. Exactly what is meant here by each of the listed dimensions, and in what ways they are (and are not) related, will be discussed in detail in what follows. They are presented as an attempt to capture, with a manageable number of terms, as much as possible of the variation described and discussed in the literature on modelling, and to separate various issues that are often conflated.

Though it is generally more popular in the literature to classify models into types (see for example the rather different taxonomies provided by Achinstein (1968), Haefner (1996), Harre (1970b) and Black (1962)), there are precedents for this kind of dimensional description of models. Some authors attempt to use a single dimension. For example Shannon (1975) presents a diagram of models situated on a single axis that goes from exact physical replicas at one end to highly abstracted symbolic systems at the other. By contrast, Schultz & Sullivan (1972) present a list of some 50-odd different dimensions by which a model may be described. One set of dimensions widely discussed in ecological modelling was proposed by Levins in 1966. He suggested that models could vary in realism, precision and generality (in his 1993 reply to Orzack & Sober's (1993) critique he notes that this was not intended to be a formal or exhaustive description). Within the systems approach to modelling, the most commonly discussed dimensions are complexity, detail and validity, as well as more practical or pragmatic considerations such as cost (e.g. Rothenberg (1989) includes cost-effectiveness as part of his definition of simulation). Brooks & Tobias (1996) discuss some proposed methods for measuring these factors, and also point out that some of the connections between these factors are not as simple as seems to be generally thought.

Many of the debates about 'appropriate' biological simulation assume that there are strict relations between certain aspects of modelling. Neural nets are said to be more accurate than symbol processing models because they are lower level; Artificial Life models are said to be general because they are abstract; neuromorphic models are said to be more realistic because they use a physical implementation.
However, none of these connections follow simply from the nature of modelling but depend on background assumptions about biology. Is inclusion of a certain level essential to explaining behaviour? Can general laws of life be found? Are physical factors more important than information processing in understanding perception? The arguments for using robot models in biology, as for any other approach, reflect particular views about biological explanation. This will be further discussed in section 4, which applies the defined dimensions to describe the biorobotic approach.

3.1 Biological Relevance

Is the biological target system clearly identified? Does the model generate hypotheses for biology?

Models can differ in the extent to which they are intended to represent, and to address questions about, some real biological system. Work in biorobotics varies in biological relevance. For example Huber & Bulthoff (1998) use a robot to test the hypothesis that a single motion-sensitive circuit can control stabilisation, fixation and approach in the fly. This work is more directly applicable to biology than the robot work described by Srinivasan et al. (1999) utilising bee-inspired methods of motor control from visual flow-fields, which does not principally aim to answer questions about the bee. Similarly, the 'robotuna' (Triantafyllou & Triantafyllou, 1995) and 'robopike' were specifically built to test hypotheses about fish swimming - "The aim of these robots is to help us learn more about the complex fluid mechanics that fish use to propel themselves" (Kumph, 1998) - whereas the pectoral fin movements implemented on a robot by Kato & Inaba (1998), though based on close study of the black bass, are not tested primarily for how well they explain fish swimming capability.

Another expression of this dimension is to distinguish between investigation of "the model as a mathematical statement and the model as empirical claim about some part of the physical world" (Orzack & Sober, 1993, p.535). Investigating a model for its own sake is often regarded critically. Hoos (1981) describes as "modilitis...being more interested in the model than the real world and studying only the portions of questions that are amenable to quantitative treatment" (p.42). Bullock (1997) criticises Artificial Life where "simulations are sometimes presented as artificial worlds worthy of investigation for their own sake...However this practice is theoretically bankrupt, and such [result] statements have no scientific currency" (p.457). But Caswell (1988), for example, defends the need to investigate theoretical problems raised by models independently of their fit to reality. Langton's (1989) advocacy of investigating 'life as it could be' is an example. As in 'pure' maths, the results may subsequently prove to have key applications, but of course there is no guarantee that the "model-creating cycle" will not end up "spiralling slowly but surely away from reality" (Grimm, 1994, p.645) without any reconnection occurring.

It is worth explicitly mentioning in this context that a model that is 'irrelevant' for biology might have utility in other respects. Models may serve for purposes of communication or education, or be employed for prediction and control. Moreover, there may be some value in investigating the technological aspects of a model: the mechanisms may have utility independent of their adequacy in explaining their origin.
Arkin (1998) describes robots that abstract and use "underlying details" from biological sciences "unconcerned with any impact on the original discipline" (p.32). Such 'models' should then be evaluated with respect to engineering criteria (6), rather than how well they represent some natural system. Biologically 'irrelevant' models, then, are those too far removed from biology to connect their outcomes back to understanding the systems that inspired them. For a non-robotic example, doubts are expressed about the relevance of artificial neural networks by e.g. Miall (1989): "it is not clear to what extent artificial networks will help in the analysis of biological networks" (p.11).

The main criterion for relevance could be taken to be the ability of the model to generate testable hypotheses about the biological system it is drawn from. For example the robot studies of Triantafyllou & Triantafyllou (1995) mentioned above suggest that fish use the creation of vortices as a means of efficient tail-fin propulsion. Arbib & Liaw (1995) offer this as their definition of a biological model: "a schema-based model becomes a biological model when explicit hypotheses are offered as to how the constituent schemas are played over particular regions of the brain" (p.56) (in their case, this involves the use of simulated and robot models of the visual guidance of behaviour in the frog). Generalised, this seems an appropriate test for relevance: are the mechanisms in the model explicitly mapped back to processes in the animal, as hypotheses about its function? In biorobotics this may sometimes concern neural circuitry, e.g. in a model of auditory localisation in the owl (Rucci et al., 1999). But it can also occur at a relatively high level, such as using shaping methods in learning (Saksida et al., 1997), or involve testing a simple algorithm, such as the sufficiency of a small set of local rules to explain collecting and sorting behaviour in ants (Melhuish et al., 1998; Holland et al., 1999). The point is to use the robot model to make a serious attempt at addressing biological questions, at whatever level these may exist.

This notion of 'relevance' appears to be what at least some authors mean by the term 'realism' in describing models. Churchland & Sejnowski (1988) appear to define 'realistic' in this way: "realistic models, which are genuinely and strongly predictive of some aspect of nervous system dynamics or anatomy" vs. "simplifying models, which though not so predictive, demonstrate that the nervous system could be governed by specific principles" (p.744). But this is rather different to their definition in Sejnowski et al. (1988) of a realistic model as a "large scale simulation that tries to incorporate as much of the cellular detail as is available", made "realistic by adding more variables and more parameters" (p.1300). It seems unlikely that they believe only models realistic in the latter sense can be realistic in the former sense - indeed they argue in Churchland et al. (1990) that "a genuine perfect model, faithful in every detail, is as likely to be incomprehensible as the system itself" (p.54). However, 'realistic' is often used to mean 'detailed', or 'not abstract'. For example: Beer et al. (1998) specify realistic in relation to robot models as "literally try to emulate in every detail a particular species of insect" (p.32);
Manna & Pnueli (1991) define realism as degree of detail; Palsson & Lee (1993) directly equate realistic to complex - a decision on realism is how many factors to include; and Orzack & Sober (1993) redefine Levins' (1966) realism as "takes into account more independent variables known to have an effect" (p.534). However, it is clear that Levins (1966) was concerned to argue against the assumption that a model can only be made 'realistic' by being more detailed. His discussion of real and general models includes a number of quite simple and abstract examples: the issue of realism is the extent to which they improve understanding of the biological system, i.e. what I have here called relevance. Schultz & Sullivan (1972) make a useful distinction between modelling that tries to build a complete "picture of reality" and modelling that builds a device for learning about reality: i.e. it may be possible for a model to be too detailed (or 'realistic' in one sense) to actually be productive of hypotheses (or 'realistic' in the other sense). Collin & Woodburn (1998) similarly refer to the possibility of "a model in which the incorporated detail is too complex...for it to contribute anything to the understanding of the system" (pp.15-16). The relevance of a model to biology, and the detail it includes, are separable issues which should not be conflated under the single term 'realism'.

3.2 Level

What are the base units of the model?

This dimension concerns the hierarchy of physical/processing levels that a given biological model could attempt to represent. Any hypothesis will usually have elemental units whose "internal structure does not exist or can be ignored" (Haefner, 1996, p.4). In biology these can range from the lowest known mechanisms, such as the physics of chemical interactions, through molecular and channel properties, membrane dynamics, compartmental properties, synaptic and neural properties, networks and maps, systems, brains and bodies, and perceptual and cognitive processes, up to social and population processes (Shepherd, 1990). The level modelled in biorobotics usually includes mechanisms of sensory transduction, for example the sonar sensors of bats (Kuc, 1997) including the pinnae movements (Peremans et al., 1998), or of motor control, such as the six legs of the stick insect (Pfeiffer et al., 1995) or the multi-jointed body of the snake (Hirose, 1993). The central processing can vary from a rule-based level, through high-level models of brain function such as the control of eye movements (Schall & Hanes, 1998), to models of specific neuron connectivity hypothesised to underlie the behaviour, such as identified neural circuitry in the cricket (Webb & Scutt, 2000), and even the level of dendritic tree structure that explains the output of particular neurons, such as the looming detector found in the locust and modelled on a robot by Blanchard et al. (1999). The data for the model may come from psychophysics (e.g. Clark's (1998) model of saccades), developmental psychology (Scassellati, 1998) or evolutionary studies (Kortmann & Hallam, 1999), but most commonly comes from neuroethological investigations.

This notion of level corresponds to what Churchland & Sejnowski (1988) call levels of organisation and, as they note, it does not map onto Marr's (1982) well-known discussion of levels of analysis.
Marr's levels (computational, algorithmic and implementational) apply rather to any explanation across several levels of organisation, describing how one level (be that network, neuron or channel) considered as an algorithm relates to the levels above (computation) and below (implementation). In fact this point was made clearly by Feibleman (1954): "For any organisation, at any given level, its mechanism lies at the level below and its purpose at the level above" (p.61).

One source of the conflict over the 'correct level' for biological modelling may be that levels in biology are relatively close in spatio-temporal scale, as contrasted with macro and micro levels in physics by Spriet & Vansteenkiste (1982). They point out that "determination of an appropriate level is consequently less evident" (p.46) in the biological sciences. Thus it is always easy to suggest to a modeller that they should move down a level, while it is obviously impractical to pursue the strategy of always working at the lowest level. Koch (1990) makes the interesting point that low-level details may be unimportant in analysing some forms of collective neural computation, but may be critical for others - the 'correct level' may be problem specific, and "which really are the levels relevant to explanation in the nervous system is an empirical, not an a priori, question" (Churchland et al., 1990, p.52).

Another problem related to levels is the misconception that the level of a model determines its biological relevance. A model is not made to say more about biology just by including lower-level mechanisms. For example, using a mechanism at the neural level doesn't itself make a model realistic: most neural network controlled robots have little to do with understanding biology (Zalzala & Morris, 1996). Moreover, including lower levels will generally make the model more complex, which may result in its being intractable and/or incomprehensible. Levins (1993) provides a useful example from ecological models: it is realistic to include a variable for the influence of nutrients; less realistic to include specific variables for nitrogen and oxygen if thereby other nutrient effects are left out. It is also important to distinguish level from accuracy (see below), as it is quite possible to inaccurately represent any level. Shimoyama et al. (1996) suggest that to "replicate functionality and behaviour...not necessarily duplicate their anatomy" in building robot models of animals is to be "not completely faithful" (p.8): but a model can faithfully replicate function at different levels.

3.3 Generality

How many systems does the model target?

A more general model is defined as one that "applies to more real-world [target] systems" (Orzack & Sober, 1993, p.534). Some researchers in biorobotics appear sanguine about the possibility of generality, e.g. Ayers et al. (1998) claim that "locomotory and taxis behaviours of animals are controlled by mechanisms that are conserved throughout the animal kingdom", and thus their model of central pattern generators is taken to be of high generality. Others are less optimistic about general models. Hannaford et al. (1995), regarding models of motor control with broad focus, opine that "because of their broad scope, it is even more difficult for these models to be tested against the uncontroversial facts or for them to predict the results of new reductionist experiments".
This suggests that increasing generality decreases relevance. It should be noted, however, that strictly speaking a model must be relevant to be general - if it doesn't apply to any specific system, then how can it apply to many systems (Onstad, 1988)? But a model does not have to be general to be relevant. The obvious way to test whether a model is general is to show how well it succeeds in representing a variety of different specific systems. For many models labelled 'general' this doesn't happen. When it is attempted, it usually requires a large number of extra situation- or task-specific assumptions to actually get data from the model to compare to the observed target. This is a common criticism of optimal foraging studies (Pierce & Ollason, 1987): provided enough task-specific assumptions are made, any specific data can be made to fit the general model of optimality. A similar critique can be made of 'general' neural nets (Verschure, 1996) - a variety of tasks can be learned by a common architecture, but only if the input vectors are carefully encoded in a task-specific manner. Raaijmakers (1994) makes a similar point for memory models in psychology and pertinently asks: is this any better than building specific models in the first place?

The most common confusion regarding generality is the idea that what is abstract will thereby be general. This can often be found in writings about Artificial Life simulations, and Estes (1975) for example makes this claim for psychological models. Shannon (1975) talks about "the most abstract and hence the most general models" (p.10), and Haefner (1996) suggests more detail necessarily results in less generality. Sejnowski et al. (1988) describe simplifying models as abstracting from individual neurons and connectivity to potentially provide general findings of significance for the brain. Sometimes this argument is further conflated with 'levels': for example Wilson (1999) discusses how "component neurons may be described at various levels of generality" (p.446), contrasting the 'abstraction' of spike rates to the 'detail' of ionic currents - but an ionic current description is actually more general, as it applies to both spiking and non-spiking neurons. The membrane potential properties of neurons are very general across biology but not particularly abstract, whereas logical reasoning is quite abstract but not very general across biological systems. Obviously some concepts, such as feedback, are both abstract and general, and many concepts are neither. Moreover, precisely the opposite claim - that more detail makes models more general - is made by some authors, e.g. Black (1962) and Orzack & Sober (1993). The reasoning is that adding variables to a model will increase its scope, because it now includes systems where those variables have an influence, whereas before it was limited to systems where they do not.

Grimm (1994) points out that insofar as generality appears to be lost when increasing detail, it may simply be because the systems being modelled are in fact unique, rather than because of an inherent trade-off between these factors. This raises the important issue that "generality has to be found, it cannot simply be declared" (Weiner, 1995, p.155). That is to say, the generality of a model depends on the true nature of the target(s). If different animals function in different ways, then trying to generalise over them won't work - you are left studying an empty set.
Robertson (1989) makes the same point with regard to neural networks - "[neural] circuits that are unique in their organisation and operation demand unique models if such models are to be useful" (p.262) - and Taylor (1989) similarly argues for ecology that simple models are "not shortcuts to ecological generality". Consequently, one strategy is to instead work on understanding specific systems, from which general mechanisms, if they exist, will emerge (Arbib & Liaw, 1995). Biology has often found that the discovery and elucidation of general mechanisms tends to come most effectively from close exploration of well-chosen specific instantiations (Miklos, 1993), such as the fruitfly genome or the squid giant axon.

3.4 Abstraction

How many elements and processes from the target are included in the model?

Abstraction concerns the number and complexity of mechanisms included in the model; a more detailed model is less abstract. The 'brachiator' robot models studied by Saito & Fukuda (1996) illustrate different points on this spectrum: an early model was a simple two-link device, but in more recent work they produced a nine-link, 12 degree-of-freedom robot body with its dimensions based on exact measurements of a 7-8 year-old female siamang skeleton. 'Abstraction' is not just a measure of the simplicity/complexity of the model, however (Brooks & Tobias, 1996), but is relative to the complexity of the target. Thus a simple target might be represented by a simple, but not abstract, model, and a complex model may still be an abstraction of a very complex target. Some degree of abstraction is bound to occur in most model building. Indeed it is sometimes taken as a defining characteristic of modelling - "A model is something simple made by the scientist to help them understand something complicated" (Segev, 1992, p.414).

It is important to note that abstraction is not directly related to the level of modelling: a model of a cognitive process is not, of its nature, more or less abstract than a model of channel properties. The amount of abstraction depends on how the modeller chooses to describe and represent the processes, not what kind of processes they represent. Furthermore, the fact that some models - such as biorobots - have a hardware 'medium' (see below) does not make them necessarily less abstract than computer simulations. A simple pendulum might be used as an abstract physical model for a leg, whereas a symbolic model of the leg may include any amount of anatomical detail. As Etienne (1998) notes, "Robots tend to simulate behaviour and the underlying neural events on the basis of a simplified architecture and therefore less precisely than computers" (p.286).

How much abstraction is considered appropriate seems largely to reflect the tastes of the modeller: should biology aim for simple, elegant models or closely detailed system descriptions? Advocates of abstraction include Maynard Smith (1974) - "Should we not therefore put into the model everything that we think might be important?...construction of such complex models is a waste of time" (p.116) - and Molenaar (1994), for whom it is "precisely by simplification and abstraction that models are most useful" (p.101). The latter gives as reasons for preferring more abstract models that complex models are harder to implement, understand, replicate or communicate. An important point is that they thereby become hard for reviewers to critique or check (e.g.
Rexstad & Innis (1985) report a surprising number of basic errors in published models they were attempting to reimplement to test simplification techniques). Simpler models are easier to falsify and reduce the risk of merely data-fitting, by having fewer free parameters. Their assumptions are more likely to be transparent. Another common argument for building a more abstract model is to make the possibility of an analytical solution more likely (e.g. the abstraction of neural sprouting proposed by Elliot et al., 1996).

However, abstraction carries risks. The existence of an attractive formalism might end up imposing its structure on the problem, so that alternative, possibly better, interpretations are missed. Segev (1992) argues that in modelling neurons, we need to build complex detailed models to discover what are appropriate simplifications. Details abstracted away might turn out to actually be critical to understanding the system. As Kaplan (1964) notes, the issue is often not just over-simplification per se, but whether we have "simplified in the wrong way" or whether "what was neglected is something important for the purposes of that very model" (p.281). For explaining biological behaviour, abstracting away from the real problems of sensorimotor interaction with the world is argued, within biorobotics, to be an example of the latter kind: in this case, abstraction reduces relevance because the real biological problem is not being addressed.

3.5 Structural Accuracy

Is the model a true representation of the target?

Accuracy is here intended to mean how well the mechanisms in the model reflect the real mechanisms in the target. This is what Zeigler calls structural validity: "if it not only produces the observed real system behaviour but truly reflects the way in which the real system operates to produce this behaviour" (1976, p.5), as distinct from replicative and predictive validity, i.e. how well the input/output behaviour of the system matches the target (7). This notion has also been dubbed "strong equivalence" (Fodor, 1968). Brooks & Tobias (1996) call this the credibility of the model, and Frijda (1967) suggests "[input/output] performance as such is not as important as convincing the reader that the reasons for this performance are plausible" (p.62). Thus Hannaford et al. (1995) lay out their aims in building a robot replica of the human arm as follows: "Although it is impossible to achieve complete accuracy, we attempt to base every specification of the system's function and performance on uncontroversial physiological data".

One major issue concerning the accuracy of a model is "how can we know?" (this is also yet another meaning of 'realism'). The anti-realist interpretation of science says that we cannot. The fact that certain theories appear to work as explanations is not evidence that they represent reality, because the history of science has shown us to be wrong before (the pessimistic meta-induction, Laudan, 1981). On the other hand, if theories do not approximately represent reality, then how can we build complex devices that actually work based on those theoretical assumptions (the 'no miracle' argument, Putnam, 1975)? Not wishing to enter this thorny territory, it will suffice for current purposes to argue for no more than an instrumentalist position: if we can't justifiably believe our models, we can justifiably use them (Van Fraassen, 1980).
Accuracy in a model means there is "acceptable justification for scientific content of the model" (Rykiel, 1996, p.234) relative to the contemporary scientific context in which it is built, and that it is rational (Cartwright, 1983) to attempt "experimental verification of internal mechanisms" (Reeke & Sporns, 1993, p.599) suggested by the model. Inaccuracies in models should affect our confidence in using the model to make inferences about the workings of the real system (Rykiel, 1996), but do not rule out all inference, provided "assumptions [are] made explicit so that the researcher can determine in what direction they falsify the problem situation and by how much" (Ackoff, 1962, p.109). Horiuchi & Koch (1999) make this point for neuromorphic electronics: "By understanding the similarities and differences...and by utilising them carefully, it is possible to maintain the relevance of these circuits for biological modelling" (p.243).

Thus accuracy can be distinguished from relevance. It is possible for a model to address 'real' biological questions without utilising accurate mechanisms. Many mathematical models in evolutionary theory fit this description. Dror & Gallogly (1999) describe how "computational investigations that are completely divorced, in practice and theory, from any aspect of the nervous system...can still be relevant and contribute to understanding the biological system" (p.174), for example as described by Dennett (1984), to "clarify, sharpen [and] systematise the purely semantic level characterisation" (p.203) of the problem to be solved.

Accuracy is also not synonymous with the 'amount of detail' included in the model. This is well described by Schenk (1996) in the context of 'tree' modelling. He notes that researchers often assume a model with lots of complex detail is accurate, without actually checking that the details are correct. Or a particular simplification may be widely used, and justified as a necessary abstraction, without looking at alternatives that may be just as abstract but more accurate. Similarly, it has already been noted that accuracy does not relate directly to the level of the representation: a high-level model might be an accurate representation of a cognitive process where a low-level model may turn out not to be accurate to brain biology.

A widely used term that overlaps with both 'relevance' and 'accuracy' is biological plausibility. This can be taken simply to mean that the model is applicable to some real biological system, or used to describe whether the assumptions on which the model is based are biologically accurate. More weakly, it is sometimes used to mean that the model "does not require biologically unrealistic computations" (Rucci et al., 1999, p.?). In fact this latter usage is probably a better interpretation of 'plausible': it describes models where the mechanism merely could be like the biology, rather than those where there are stronger reasons to say the mechanism is like the biology. The latter is 'biological accuracy', and neither is a pre-requisite for 'biological relevance' in a model.

3.6 Match

To what extent does the model behave like the target?

This dimension describes how the model's performance is assessed. In one sense it concerns testability: can we potentially falsify the model by comparing its behaviour to the target?
For example, the possibility that the lobster uses instantaneous differences in concentration gradients between its two antennules to do chemotaxis was ruled out by testing a robot implementation of this algorithm in the real lobster's flow-tank (Grasso et al., 2000). However, assessment of a biorobot may be simply in terms of its capabilities rather than directly related back to biology. While a significant role for robot models is the opportunity to compare different control schemes for their success (e.g. Ferrell (1995) looks at three different controllers, two derived from biology, for six-legged walking), simply reporting what works best on a (possibly inaccurate) robot model does not necessarily allow us to draw conclusions about the target animal behaviour.

When a direct comparison with biology is attempted, there is still much variability on this dimension regarding the nature of the possible match between the behaviours. Should the behaviours be indistinguishable or merely similar? Are informal, expert or systematic statistical investigations to be used as criteria for assessing similarity? Is a qualitative or quantitative match expected? Can the model both reproduce past data and predict future data? Some modelling studies provide little more than statements that, for example, "the overall behaviour looked quite similar to that of a real moth" (Kuwana et al., 1995, p.375). Others make more direct assessment, e.g. Harrison & Koch (1999) have tested their analog VLSI optomotor system in the real fly's flight simulator and "repeated an experiment often performed on flies", showing for example that the transient oscillations observed in the fly emerge naturally from inherent time-delays in the processing on the chip. Even where biological understanding is not the main aim of the study, it is possible that "animals provide a benchmark" for evaluating the robot system, such as Berkemeier & Desai's (1996) comparison of their "biologically-styled" leg design to the hindlimb of a cat at the level of force and stiffness parameters.

There are inevitable difficulties in drawing strong conclusions about biological systems from the results of robot models. As with any model, the performance of similar behaviour is never sufficient to prove the similarity of mechanisms - this is the problem of underdetermination. Some authors are concerned to stress that a behavioural match is never sufficient evidence for drawing conclusions about the accuracy or relevance of a model (e.g. Deakin, 1990; Oreskes et al., 1994). Uttal (1990) goes so far as to say that "no formal model is verifiable, validatable or even testable with regard to internal mechanisms" and claims this is "generally accepted throughout other areas of science". But the widespread use of models in exactly the way so deplored suggests that most modellers think a reasonable defence of the practice can be made in terms of falsification or coincidence. If the model doesn't match the target, then we can reject the hypothesis that led to the model, or at least know we need to improve our model. If it does match the target, better than any alternatives, then the hypothesis is supported to the extent that we think it unlikely that such similar behaviour could result from completely different causes. This is sometimes more formally justified by reference to Bayes' theorem (Salmon, 1996). However, there are some limitations to this defence.
Carrying out the comparison of model and target behaviours can be a sufficiently complex process that neither of the above points applies. First, how can we be sure that the measurements on the real system are correct? If the model doesn't match, we may reject the measurements rather than the model. Second, an interpretation process is required to convert the behaviour of the model and target into comparable form. This interpretation process may be wrong or, more worryingly, may be adjusted until the match comes out right - "interpretive steps may inadvertently contain key elements of the mechanism" (Reeke & Sporns, 1993, p.598). Third, it is not uncommon for models to have their parameters tuned to improve the match. As Hopkins & Leipold (1996) demonstrate, this practice can in fact conceal substantial errors in the model equations or in the data. Finally, Bower & Koch (1992) provide a sobering view of the likelihood of a model being rejected on the basis of failure to match experiments: "experiments needed to prove or disprove a model require a multi-year dedicated effort on the part of the experimentalist...falsification of any one such model through an experimentum crucis can be easily countered by the introduction of an additional ad hoc hypothesis or by a slight modification of the original model. Thus the benefit, that is, the increase in knowledge, derived from carrying out such time- and labour-intensive experiments is slight" (p.459).

3.7 Medium

What is the simulation built from?

Hypotheses can be instantiated as models in various different forms, and hardware implementation is one of the most striking features of biorobotics compared to other biological models. Doucet & Sloep (1992) list mechanical, electric, hydraulic, scale, map, animal, game and program as different forms a model might take. A popular taxonomy is 'iconic', 'analog' and 'symbolic' models (e.g. Black, 1962; Schultz & Sullivan, 1972; Kroes, 1989; Chan & Tidwell, 1993), but the definitions of these terms do not stand up to close scrutiny. 'Iconic' originally derives from representation - something used to stand in for something else - and is used that way by some authors (Harre, 1970b; Suppe, 1977) to mean any kind of analogy-based model. However, it is now often defined specifically as using "another instance of the target type" (Chan & Tidwell, 1993), or to "represent the properties by the same properties with a change of scale" (Schultz & Sullivan, 1972, p.6). One might assume this meant identity of materials for the model and the target, as discussed below, but the most cited example is Watson & Crick's scale model of DNA, which was built of metal, not deoxyribonucleic acid. Yet 'analog' models are then distinguished from 'iconic' as models that introduce a "change of medium" (Black, 1962) to stand in for the properties. A popular example of an analog model is the use of electrical circuit models of mechanical systems. Some authors include computer models as analogs, e.g. Achinstein (1968), whereas others insist they are symbolic, e.g. Lambert & Brittan (1992). But whether the properties are shared, analogous or merely symbolically represented depends entirely on how the properties are defined: whether the 'essence' of a brain is its chemical constitution, its connectivity pattern or its ability to process symbols depends on what you are trying to explain.
All models are 'iconic', or share properties, precisely from the point of view that makes the model usefully stand in for the target for a particular purpose (Durbin (1989) calls this "the analogy level"). Hence I will abandon this distinction and consider the medium more literally, as what the model is actually built from.

A model can be constructed from the same materials as its target. Bulloch & Syed (1992) describe culture models, i.e. the reconstruction of simplified networks of real neurons in vitro, as models of networks in vivo; and Miklos (1993) argues for the use of transgenic techniques to "build novel biological machines to test our hypotheses" (p.843). Kuwana et al. (1995) use actual biological sensors - the antennae of moths - on their robot model, and note that these are 10,000 times more sensitive than available gas sensors. In these cases the representation of the target properties is by identity in the model properties.

However, most models are not constructed from the same materials. They may share some physical properties with their targets, e.g. a vision chip and an eye both process real photons. Gas sensing is substituted for pheromone sensing in Ishida et al.'s (1999) robot model of the moth, but they replicate other physical features of the sensor, for example the way that the moth's wings act as a fan to draw air over the sensors. Models may use similar physical properties. This may mean that the properties can be described by the same mathematics, e.g. the subthreshold transistor physics used in neuromorphic design is said to be equivalent to neuron membrane channel physics (Etienne-Cummings et al., 1998). Or it may be a 'looser' mapping: the robot model of chemotaxis in C. elegans (Morse et al., 1998) uses a light source as an analog for a chemical gradient in a petri dish, while preserving a similar sensor layout and sensitivity. Models may also use quite different properties to stand in for the properties specified in the target, e.g. a number in the computer processor labelled 'activity' to represent the firing rate of a neuron, or the use of different coloured blocks in a robot arena to represent 'food' and 'mates'.

In fact nearly all models use all these modes of representation to various extents in creating correspondences to the hypothesised target variables. Thus 'symbolic' computer simulations frequently use time to represent time (Schultz & Sullivan, 1972); 'iconic' scale models tend to use materials of analogous rigidity rather than the same materials; and mathematical models can be treated as a short-hand for building a physically 'analogous' system. Rather than sharply contrasting 'kinds' of models, what is of relevance are the constraints the medium imposes on the operation of the model. What makes a representation more symbolic is that the medium is more arbitrary or neutral with respect to representing the target properties. Symbols rest on "arbitrary conventions...no likeness or unlikeness it may bear to its subject matter counts as a reason why it is a symbol for, or of, a..." (Harre, 1970, p.38). More physical models are chosen because the medium has some pre-existing resemblance to the properties we wish to represent, such as the use of analog VLSI to implement neural processing (Mead, 1989).
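To make the most symbolic of these modes concrete: in a purely symbolic medium, a 'firing rate' is just a floating-point variable updated by whatever rule the modeller writes down, and nothing in the medium itself constrains its behaviour. A minimal leaky-integrator sketch, in which the time constant, step size and choice of nonlinearity are all arbitrary modelling decisions rather than properties of any medium:

    import numpy as np

    def step_rate(rate, drive, dt=0.001, tau=0.020):
        """One Euler update of a leaky-integrator unit; the variable named
        'rate' stands in for a firing rate purely by convention."""
        return rate + (dt / tau) * (np.tanh(drive) - rate)

    rate = 0.0
    for _ in range(200):
        rate = step_rate(rate, drive=1.5)    # relaxes toward tanh(1.5)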
The medium may contribute directly to the accuracy and relevance of the model, or simply make it easier to implement, run or evaluate, as described by Quinn & Espenschied (1993): "Even in the most exhaustive [computer] simulations some potentially important effects may be neglected, overlooked or improperly modelled. It is often not reasonable to attempt to account for the complexity and unpredictability of the real world. Hence implementation in hardware is often a more straightforward and accurate approach for rigorously testing models of nervous systems" (p.380). Doucet & Sloep (1992) point out that "the way physical models operate is, as it were, ruled by nature itself...rules for functioning of conceptual [symbolic] models...we make ourselves" (p.281). Symbolic models may implicitly rely on levels of precision in processing that are unlikely to be possible for real systems. Computer programs can represent a wider range of possible situations than we can physically model, but physical models cannot break the laws of physics.

4. The position of biorobotics

In section 2.4 I discussed in what sense biorobots can be considered biological models - in particular, how robots can be used as physical simulations of organisms to test hypotheses about the control of behaviour. How, then, does biorobotics differ from other modelling approaches in biology? If it is suggested that "the use of a robot ensures the realism" (Burgess et al., 1997, p.1535) of a model, does this mean making the model more relevant for biology, making it more detailed, making it more accurate, making it more specific (or general?), making it a 'low-level' model, making the performance more lifelike, or just that the model is operating with 'real' input and output? In this section, I will use the framework developed above to clarify how biorobotics differs, on various dimensions, from other kinds of biological models. I will also advance arguments for why the resulting position of biorobots in modelling 'space' is a good one for addressing some fundamental questions in explaining biological behaviour. I do not intend to suggest that it is the only correct approach - "there is no unique or correct model" (Fowler, 1997, p.8) of a system. However, "there are good and bad models" (op. cit.) relative to the purposes of the model builder. Thus this discussion will also illustrate for what purposes in understanding biology biorobotics appears to have particular strengths.

4.1 Relevance to biology

A notable feature that distinguishes recent biorobotic research from earlier biologically-inspired approaches in robotics (such as the 'behaviour-based' approach articulated by Brooks (1986) and Arkin (1998)) is the increased concern as to whether and how the robot actually resembles some specified biological target. Thus most of the robot studies listed in table 1 cite the relevant biological literature that has guided decisions on what to build, how to build it, and how to assess it; frequently a biological investigator has initiated or collaborated directly in the research. The likelihood of being able to apply the results back to biology is thus increased, even where this was not the primary aim of the initial robot construction. Biorobotics has been able to confirm, develop and refute theories in several areas of biology, as already described in a number of examples above.
A distinction was drawn in previous sections between using biorobots as biological models and using them for engineering, and it is sometimes argued that these are incompatible, or at least orthogonal, concerns (e.g. Hallam, 1998; Pfeifer, 1996). Nevertheless, many of those working in biorobotics claim to be doing both. For example Hirose et al. (1996) include as biorobotics both "build robots that emulate biological creatures" and "use development of robots to understand biological and ethological concepts" (p.95). Espenschied et al. (1996), describing their work on robot models of insect walking, claim "results that demonstrate the value of basing robot control on principles derived from biology...also...provide insight into the mechanisms of locomotion in animals and humans" (p.64). Lambrinos et al. (1997), regarding their robot model of desert ant navigation, suggest: "On the one hand, the results obtained provide support for the underlying biological models. On the other hand...computationally cheap navigation methods for mobile robots are derived" (p.?). Raibert (1986), in discussing methods for legged locomotion, points out: "In solving problems for the machine, we generate a set of plausible algorithms for the biological system. In observing the biological behaviour, we explore plausible behaviours for the machine" (p.195). Indeed, even where the explicit aim in building the robot model is said to be just engineering or just biology, the process of doing one is very likely to involve some of the other.

It is the engineering requirement of making something that actually works that creates much of the hypothesis-testing power of robotic models of biological systems. This is well described by Raibert (1986): "To the extent that an animal and a machine perform similar locomotion tasks, their control systems and mechanical structure must solve similar problems. By building machines we can get new insights into these problem[s], and we can learn about possible solutions. Of particular value is the rigor required to build physical machines that actually work" (p.3). In the other direction, building a robot 'inspired' by an animal source presupposes a certain degree of knowledge about that source. If, as Ayers et al. (1998) claim, "biologically-based reverse engineering is the most effective procedure" to design robots, then we need to understand the biology to build the robots - in Ayers' case this has involved exhaustive analysis of the underlying 'units' of action in the locomotion behaviour of the lobster. That is, our goal is as defined by Shimoyama et al. (1996), "to understand activation and sensing of biological systems so that we can build equivalents" (p.8), or by Leoni et al. (1998): "a proper comprehension of human biological structures and cognitive behaviour is fundamental to design and develop a [humanoid] robot system" (p.2274). The robot designer's motives thus overlap substantially with those of the biologist.

4.2 Level

It is sometimes argued that biorobotics should focus on lower levels, or work from the 'bottom up'. In fact Taddeucci & Dario (1998a) describe explicitly, in the context of models of eye-hand control, what most biorobotics researchers do implicitly, i.e. work both top-down and bottom-up on the problems. The possible influence of lower-level factors is kept in mind, and the interaction of levels is actively explored.
While this is perhaps true of many other modelling approaches, robotic implementation specifically supports the consideration and integration of different levels of explanation because of its emphasis on requiring a complete, behaving system as the outcome of the model. For example, Hannaford et al. (1995) primarily consider their robot arm as a "mechanism or platform with which to integrate information", particularly the interaction of morphology and neural control. Thus the context of the behaviour of the organism is always included in a robot model, counteracting the tendency in biological studies to lose sight of this context in close study of small parts of the underlying mechanisms.

The level of mechanism modelled by the robot will reflect the level of information currently available from biology. Interest in a particular level of explanation (such as single neuron properties) may bias the choice of target system, e.g. towards invertebrate systems in which identified neurons have been mapped (Franceschini, 1996). On the other hand, interest in a particular target may determine the level at which an accurate model can be attempted. For example, Etienne (1998) reviews the behavioural and physiological data on mammalian navigation and concludes that lack of information about the actual interactions of the neural systems "leaves the field wide open to speculative modelling" (p.283) at the level of networks.

In addition, biorobotic systems emphasise the 'physical' level in the performance of sensing and action. That is, the dynamics of the physical interaction of the robot/animal and its environment are seen to be as critical in explaining its behaviour as the processing or neural connectivity (Chiel & Beer, 1997). It is often found that engaging closely in modelling the periphery simplifies central or higher-level processing. For example, Mura & Shimoyama (1998) note that copying the circuitry of insect visual sensors "closely integrates sensing and early stage processing" to "ease off decision making at a higher processing level" (p.1859), and Kubow & Full (1999) discuss the extent to which running control is actually done by the mechanical characteristics of the cockroach's legs. Some of the most interesting results from biorobotic modelling are demonstrations that surprisingly simple control hypotheses can suffice to explain apparently complex behaviours when placed in appropriate interaction with the environment. Examples include the use of particular optical motion cues to achieve obstacle avoidance that slows down the robot in cluttered environments without explicit distance cues being calculated (Franceschini et al., 1992), the 'choice' between sound sources with different temporal patterns resulting from a simple four-neuron circuit in the robot cricket (Webb & Scutt, 2000), and the use of limb linkage through real-world task constraints to synchronise arm control (Williamson, 1998).

4.3 Generality

In engineering, robots built for specific tasks have to date been more successful than 'general purpose' ones. Similarly, in biorobotics the most successful results to date have been in the context of modelling specific systems - particular competencies of particular animals. There is some doubt whether modelling 'general' animal competencies (e.g. by simulating 'hypothetical' animals such as Pfeifer's (1996) 'fungus eater' or Bertin & van de Grind's (1996) 'paddler') will tell us much about any real biological system.
4.3 Generality

In engineering, robots built for specific tasks have to date been more successful than 'general purpose' ones. Similarly in biorobotics the most successful results to date have been in the context of modelling specific systems - particular competencies of particular animals. There is some doubt whether modelling 'general' animal competencies (e.g. by simulating 'hypothetical' animals such as Pfeifer's (1996) 'fungus eater' or Bertin & van de Grind's (1996) 'paddler') will tell us much about any real biological system. Without regrounding the generalisations by demonstrating the applicability of the results to some specific real examples, the problem modelled may end up being 'biological' only in the terminology used to describe it. An example of the tendency towards greater specificity is the shift in research described by Nelson & Quinn (1998) from generic six-legged walkers (Espenschied et al., 1996) to a robot that closely copies the anatomy and mechanics of the cockroach. As they explain, the desired movement capabilities for the robot - fast running and climbing abilities - depend on quite specific properties, such as the different functions of the rear, middle and front pairs of legs. Hence the specific morphology has to be built into the robot if it is to be able to exploit features such as the propulsive power of the rear legs and the additional degrees of freedom in the front legs that enable the cockroach to climb.

If important factors in understanding behaviour lie in the specific sensorimotor interface, then it is necessary to model specific systems in sufficient detail to encompass this. 'Generalising' a sensorimotor problem can result in changing the nature of the problem to be solved. What is lost are the properties described by Wehner (1987) as 'matched filters', the specific fit of sensor (or motor) mechanisms to the task. The sound localisation of crickets is a good illustration. Crickets have a unique auditory system in which the two ears are connected by a tracheal tube to form a pressure difference receiver. This gives them good directionality, but only for a specific frequency - that of the calling song they need to localise. By copying this mechanism in a robot model, it was possible to demonstrate that this factor alone can suffice to reproduce the cricket's ability to respond only to songs with the carrier frequency of conspecifics (Lund et al., 1997).
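The directionality of such a receiver, and its frequency selectivity, can be demonstrated in a few lines. The sketch below is a deliberate idealisation, not the cited robot's circuit: the quarter-period tube delay, the ear separation and the song frequency are all assumed values. Each tympanum is driven by the direct sound minus the sound arriving through the tube from the other side.

import math

def tympanum_amplitudes(freq, angle_deg, ear_sep=0.02, f_song=4700.0, c=340.0):
    # Idealised pressure-difference receiver: the tube delay is tuned to a
    # quarter period of the (assumed) calling song frequency.
    delay = 1.0 / (4.0 * f_song)
    dt = ear_sep * math.sin(math.radians(angle_deg)) / c  # external time difference
    w = 2.0 * math.pi * freq
    left = 2.0 * abs(math.sin(w * (delay + dt) / 2.0))
    right = 2.0 * abs(math.sin(w * (delay - dt) / 2.0))
    return left, right

# Sound source 30 degrees to the left: a large interaural difference at the
# song frequency, almost none an octave above it.
for freq in (4700.0, 9400.0):
    left, right = tympanum_amplitudes(freq, angle_deg=30.0)
    print(freq, round(left - right, 3))

A controller that simply turns towards the side with the larger amplitude therefore inherits the frequency selectivity of the ear itself, which is the point demonstrated by Lund et al. (1997).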
Note that while 'matched filters' are by their nature specific to particular animals, the concept is a general one. Similarly, while the neural circuitry modelled in the cricket robot is highly specific to the task (and hence very efficient), the idea it uses of exploiting timing properties of neural firing is a general one. Thus we can see general principles emerging from the modelling of specific systems. Moreover, the 'engineering' aspect of biorobotics enhances the likelihood of discovering such generalities as it attempts to transfer or apply mechanisms from biology to another field, the control of man-made devices.

4.4 Abstraction

It might be assumed that the aims discussed so far - increasing relevance by having a clearly identified target system, and increasing specificity rather than trying to invent general models - require that biorobotic models become more detailed. Beer et al. (1997) suggest as a principle for this approach "[generally to] err by including more biology than appears necessary" (p.33). However, others believe that abstraction does not limit relevance, e.g. McGeer (1990): "it seems reasonable to suppose that our relatively simple knee-jointed model has much to say about walking in nature" (p.1643). Indeed, it has been suggested that a key advantage of biorobotics is the discovery of simpler solutions to problems in biology, because it takes an abstract rather than analytic approach (Meyer, 1997). It is clear that some quite abstract robot representations have usefully tested some quite specific biological hypotheses. For example, there is minimal representation of biological details in the physical architecture of Beckers et al.'s (1996) robot 'ants', Burgess et al.'s (1997) 'rat', or indeed the motor control of the robot 'cricket' mentioned above, but nevertheless it was possible to demonstrate an interesting resemblance in the patterns of behaviour of the robots and the animals, in a manner appropriate to testing the hypotheses in question.

Rather than being less abstract, it might better be said that biorobotics has adopted different abstractions from simulations (or from standard robot control methods (Bekey, 1996; Pratt & Pratt, 1998a)). Robots are not less abstract models just because they are physically implemented - a two-wheeled robot is a simpler model of motor control than a six-legged simulation. What does distinguish abstraction in biorobotics from simulations is that it usually occurs by leaving out details, substitution, or simplifying the representation, rather than by idealising the objects or functions to be performed. Thus even two-wheeled motor control has to cope with friction, bumps, gravity and so on, whereas a six-legged computer simulation may restrict itself to representing only the kinematic problems of limb control and ignore the dynamics entirely. Different aspects of the systems are often abstracted to different degrees in biorobotics. Thus models involving quite complex sensors often use very simple two-wheeled motor control rather than equally complex actuators. Edelman et al. (1992) describe relatively complex neural models but test them in rather abstract tasks. Though some robots are tested in quite complex environments, the majority have a simplified environment constructed for them (though in some cases this is not much different from the controlled environment used to test the animals). Pfeifer (1996) and Cruse (2000) have made the point that this imbalance in abstraction may itself lead to a loss of biological relevance. What is needed is to ensure that the assumptions involved in the abstraction are clear, and justified. A good example is the description by Morse et al. (1998) of the simplifications they adopted in their robot model of chemotaxis in C. elegans, such as the biological evidence for abstracting the motor control as a constant propulsive force plus a steering mechanism provided by contraction of opposing muscles.
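That abstraction can be written down in a dozen lines. The sketch below is not the Morse et al. model; it is a toy gradient climber with invented constants, intended only to show how far a constant propulsive speed plus a single steering term driven by sensed changes in concentration can go.

import math

def simulate_chemotaxis(steps=3000, dt=0.05, speed=1.0, gain=20.0):
    # Constant forward speed; the heading changes only while the sensed
    # concentration is falling - a crude stand-in for steering by
    # differential contraction of opposing muscles.
    conc = lambda px, py: -math.hypot(px, py)      # concentration peaks at the origin
    x, y, heading = 10.0, 0.0, 0.0
    prev = conc(x, y)
    for _ in range(steps):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        c = conc(x, y)
        heading += gain * max(0.0, prev - c) * dt  # turn while going downhill
        prev = c
    return math.hypot(x, y)                        # final distance from the source

# Starting 10 units from the source, the climber ends up looping close to
# it - well inside the starting distance.
print(simulate_chemotaxis())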
4.5 Accuracy

If 'highest possible accuracy' were considered to be the aim in biorobotics, then there are many ways in which existing systems can be criticised. Most robot sensors and actuators are not directly comparable to biological ones: they differ in basic capability, precision, range, response times and so on. Binnard (1995), in the context of building a robot based on some aspects of cockroach mechanics, suggests that the "tools and materials are fundamentally different" (p.44), particularly in the realm of actuators. Ayers (1995) more optimistically opines that "Sensors, controlling circuits and actuators can readily be designed which operate on the same principles as their living analogs". The truth is probably somewhere in between these extremes. Often the necessary data from biology is absent, or not in a form that can easily be translated into implementation (Delcomyn et al., 1996). The process of making hypotheses sufficiently precise for implementation often requires a number of assumptions that go well beyond what is accurately known of the biology. As for abstraction, there is also a potential problem in having a mismatch in the relative accuracy of different parts of the system. For example, it is not clear how much is learnt by using an arbitrary control system for a highly accurate anatomical replica of an animal; or, conversely, by applying a detailed neural model to control a robot carrying out a fundamentally different sensorimotor task.

Biorobotics researchers are generally more concerned with building a complete, but possibly rough or inaccurate, model than with strict accuracy per se. That is, the aim is to build a complete system that connects action and sensing to achieve a task in an environment, even if this limits the individual accuracy of particular parts of the model because of necessary substitutions, interpolations and so on. While greater accuracy is considered worth striving for, a degree of approximation is considered a price worth paying for the benefits of gaining a more integrated understanding of the system and its context, in particular the "tight interdependency between sensory and motor processing" (Pichon et al., 1989, p.52). This is exemplified in their robot 'fly' by the use of self-movement to generate the visual input required for navigation. Projects that set out to build 'fully accurate' models tend not to get completed, and we can learn more from several somewhat inaccurate models than from one incomplete one. In several cases the accuracy has been increased iteratively: for example, the successive moves from a slower, larger robot implementation of cricket phonotaxis (Webb, 1994), to a robot capable of processing sound at cricket speed (Lund et al., 1998), and then to a controller that more closely represents the cricket's neural processing (Webb & Scutt, 2000). Indeed, Levins (1966) argues that building multiple models is a useful strategy to compensate for inevitable inaccuracies, because results common to all the models are 'robust' with respect to the individual inaccuracies of each.
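Levins' point can be illustrated directly: run variant implementations of the same hypothesis, differing in the details one is unsure about, through the same test, and see which results survive. The sketch below (a toy built for this purpose, reusing the gradient-climbing task from the example in section 4.4; all three steering rules and their constants are invented) treats arrival near the source as a candidate 'robust' result.

import math, random

def run_trial(turn_rule, steps=3000, dt=0.05):
    # A gradient climber whose steering law is passed in as a parameter,
    # so rival model variants can be pushed through an identical test.
    x, y, heading = 10.0, 0.0, 0.0
    prev = -math.hypot(x, y)
    for _ in range(steps):
        x += math.cos(heading) * dt
        y += math.sin(heading) * dt
        c = -math.hypot(x, y)
        heading += turn_rule(prev - c) * dt
        prev = c
    return math.hypot(x, y)

variants = {
    'linear':    lambda drop: 20.0 * max(0.0, drop),
    'threshold': lambda drop: 1.5 if drop > 0.01 else 0.0,
    'noisy':     lambda drop: 20.0 * max(0.0, drop) + random.gauss(0.0, 0.2),
}

# If every variant ends well inside the starting radius of 10 units, that
# outcome is 'robust' to these particular modelling choices.
for name, rule in variants.items():
    print(name, round(run_trial(rule), 2))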
4.6 Match

It should be admitted that the assessment of the behaviour relative to the target is still weak in most studies in biorobotics. It is more common to find only relatively unsupported statements that a robot "exhibited properties which are consistent with experimental results relating to biological control systems" (Leoni et al., 1998, p.2279). One encouraging trend in the direction of more carefully assessing the match is the attempt to repeat experiments with the same stimuli for the robot and the animal. For example, Touretsky & Saksida (1997) describe how they "apply our model to a task commonly used to study working memory in rats and monkeys - the delayed match to sample task" (p.219), and Sharpe & Webb (1998) draw on data in ant chemical trail-following behaviour for methods and critical experiments to assess a robot model's ability to follow similar trails under similar variations in conditions, such as changes in chemical concentration. Some behaviours lend themselves more easily than others to making comparisons; for example, the fossilised worm trails reproduced in a robot model by Prescott & Ibbotson (1997) provide a clear behavioural 'record' to attempt to copy with the robot. The accuracy of the robot model may impose its own limits on the match. Lambrinos et al. (1997) note, when testing their polarisation compass and landmark navigation robot in the Sahara environment, that despite the same experimental conditions "it is difficult to compare the homing precision of these agents, since both their size and their method of propulsion are completely different" (p.?). There is also the problem, inherent in any modelling, that reproducing the same behaviour is not proof that the same underlying mechanism is being used by the robot and the animal.

There are several ways in which the biorobotics approach can attempt to redress these limitations. By having a specific target, usually chosen because there is substantial existing data, more extensive comparisons can be made. Using a physical medium and more accurately representing environmental constraints reduces the possibility that the world model is being tuned to make the animal model work, rather than the reverse. The interpretation of the behaviour is more direct. Voegtlin & Verschure (1999) argue, in their robot implementation of models of classical conditioning, that by combining levels, and thus satisfying constraints from "anatomy, physiology and behaviour", the argument from match is strengthened. Finally, biorobotic modelling has been instrumental in driving the collection of further data from the animal. Quinn & Ritzmann (1998) describe how building a cockroach-inspired robot has "required us to make detailed neurobiological and kinematic observations of cockroaches" (p.239). Correctly matching the behaviour is perhaps less important than revealing what it is we need to know about the animal to select between possible mechanisms demonstrated in the robot.
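Where a common stimulus can be presented to both robot and animal, the match can at least be quantified. The sketch below is an illustration rather than a metric used by any of the cited studies, and the response curves are made up; it computes an average normalised mean square difference between a model output curve and a reference curve, in the spirit of the accuracy measure of Bhalla et al. (1992) quoted in note 7 (whose exact normalisation may differ).

def normalised_mse(model_curve, reference_curve):
    # Average normalised mean square difference between a model output
    # curve and a reference curve measured from the animal (cf. note 7).
    assert len(model_curve) == len(reference_curve)
    scale = max(abs(r) for r in reference_curve) or 1.0
    return sum(((m - r) / scale) ** 2
               for m, r in zip(model_curve, reference_curve)) / len(model_curve)

# Hypothetical turning responses to the same five stimulus angles:
animal = [0.0, 0.3, 0.8, 0.4, 0.1]
robot = [0.1, 0.4, 0.7, 0.5, 0.1]
print(normalised_mse(robot, animal))   # about 0.01 - a close match on this scale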
4.7 Medium

The most distinctive feature of the biorobotics approach is the use of hardware to model biological mechanisms. It is also perhaps the most often questioned - what is learnt that could not be as effectively examined by computer simulation? One justification relates to the issue of building 'complete' models discussed above - the necessity imposed by physical implementation that all parts of the system function together and produce a real output. Hannaford et al. (1995) argue that "Physical modelling as opposed to computer simulation is used to enforce self consistency among co-ordinate systems, units and kinematic constraints" in their robot arm. Another important consideration is that using identity in parts of a model can sometimes increase accuracy at relatively little cost. Using real water or air-borne plumes, or real antennae as sensors, saves effort in modelling and makes validation more straightforward. Dean (1998) proposes that by capturing the body and environmental constraints, robots provide a stronger "proof in principle" that a certain algorithm will produce the right behaviour. In engineering, demonstration of a real device is usually a more convincing argument than simulated results.

Thus one direction of current efforts in biorobotics is the attempt to find materials and processes that will support better models. Dario et al. (1997) review sensors and actuators available for humanoid robots. Kolacinski & Quinn (1998) discuss elastic storage and compliance mechanisms for more muscle-like actuators. Mojarrad & Shahinpoor (1997) describe polymeric artificial muscles that replicate undulatory motions in water, which they use to test theoretical models of animal swimming. On a similar basis, some researchers use dedicated hardware for the entire control system (i.e. not a programmed microcontroller). Franceschini et al.'s (1992) models of the fly motion detection system used to control obstacle avoidance are developed as fully parallel, analog electronic devices. Maris & Mahowald (1998) describe a complete robot controller (including a contrast sensitive retina and motor spiking neurons) implemented in analog VLSI. Cited advantages of hardware implementations include the ability to exploit true parallelism, and increased emphasis on the pre-processing done by physical factors such as sensor layout.
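The computation such circuits carry out can of course be sketched in software, with the caveat that doing so strips out exactly the physical-medium properties at issue. Below is a discrete-time correlation detector in the Hassenstein-Reichardt style, a standard abstraction of insect elementary motion detection; it is not Franceschini et al.'s circuit, and the unit delay and toy signals are invented.

def reichardt_output(left_samples, right_samples, delay=1):
    # Correlate each receptor signal with the *delayed* signal of its
    # neighbour; the difference of the two correlations is direction-selective.
    total = 0.0
    for t in range(delay, min(len(left_samples), len(right_samples))):
        total += (left_samples[t - delay] * right_samples[t]
                  - right_samples[t - delay] * left_samples[t])
    return total

# A bright edge moving left-to-right (hitting the left receptor one step
# before the right) gives a positive response; the reverse motion gives
# the mirror-image negative response.
print(reichardt_output([0, 1, 0, 0], [0, 0, 1, 0]))   # 1.0
print(reichardt_output([0, 0, 1, 0], [0, 1, 0, 0]))   # -1.0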
It is important to note, however, that simply using a more physical medium does not reduce the need for "ensuring that the relevant physical properties of the robot sufficiently match those of the animal relative to the biological question of interest" (Beer et al., 1998, p.777). Electronic hardware is not the same medium as that used in biology, and may lend itself to different implementations - a particular problem is that neural connectivity is three-dimensional, whereas electronic circuits are essentially two-dimensional. However, a more fundamental argument for using physical models is that an essential part of the problem of understanding behaviour is understanding the environmental conditions under which it must be performed - the opportunities and constraints that the environment offers. If we simulate these conditions, then we include only what we already assume to be relevant, and moreover represent it in a way that is inevitably shaped by our assumptions about how the biological mechanism works. Thus our testing of that mechanism is limited in a way that it is not if we use a real environment, and the potential for further discovery of the actual nature of the environment is lost. Thus Beckers et al. (1996) suggest "systems for the real world must be developed in the real world, because the complexity of interactions available for exploitation in the real world cannot be matched by any practical simulation environment" (p.183). Flynn & Brooks (1989) argue that "unless you design, build, experiment and test in the real world in a tight loop, you can spend a lot of time on the wrong problems" (p.15).

5. Conclusions

"It was by learning the inner workings of nature that man became a builder of machines" (Hoffer, cited by Arkin, 1998, p.31).

"We've only rarely recognised any mechanical device in an organism with which we weren't already familiar from engineering" (Vogel, 1999, p.311).

Biorobotics, as the intersection of biology and robotics, spans both views represented by the quotes above - understanding biology to build robots, and building robots to understand biology. It has been argued that robots can be 'biological models' in several different senses. They can be modelled on animals - the biology serving as a source of ideas when attempting to build a robot of some target capability. They can be models for animals - robotic technology or theory serving as a source of explanatory mechanisms in biology. Or they can be models of animals - robots serving as a simulation technology to test hypotheses in biology. Work on this last kind of 'biorobot', and the potential contribution it can make to biology, has been the main focus of discussion in this paper. To assess biorobotics in relation to other kinds of simulations in biology, a multidimensional description of approaches to modelling has been proposed. Models can be compared with respect to their relevance, the level of organisation represented, generality, abstraction, structural accuracy, behavioural match, and the physical medium used to build them. Though interrelated, these dimensions are separable: models can be relevant without being accurate, general without being abstract, match behaviour at different levels, and so on. Thus a decision with respect to one dimension does not necessarily constrain a modeller with respect to another. I agree with Levins (1993) that a dimensional description should not be primarily considered as a means of ranking models as 'better' or 'worse', but rather as an elucidation of potential strategies. The strategy of biorobotics has here been characterised as: increasing relevance and commitment to really testing biological hypotheses; combining levels; studying specific systems that might illustrate general factors; abstracting by simplification rather than idealisation; aspiring to accuracy but concentrating on building complete systems; looking for a closer behavioural match; and using real physical interaction as part of the medium.

The motivations for this strategy have been discussed in detail above, but can be compactly summarised as the view that biological behaviour needs to be studied in context, that is, in terms of the real problems faced by real animals in real environments. Thus the justification of the biorobotic approach is grounded in a particular perspective on the issues that need to be addressed. Different approaches to modelling will reflect differing views about the processes being modelled, and the nature of the explanations required. One aim of this paper is to encourage other modellers to clarify their strategies and the justification for them - even if it is only by disagreement over the dimensions included here. Indeed, different views of 'models' reflect different views of the 'nature of explanation', as has long been discussed in the philosophy of science. It has not been possible to pursue all these meta-issues, some of which seem in any case to have little relevance to everyday scientific use of simulation models. What is critical is that the conclusions that can be drawn from a model are only as good as the representation provided by that model. In this respect, by working on real problems in real environments, robots can make good models of real animals.

Notes

1. Note that Suppe (1977) distinguishes this representational use of 'model' from 'model' in the mathematical sense of a semantic interpretation of a set of axioms such that they are true. There is not space in this article to discuss this model-theoretic approach in the philosophy of science (Carnap, 1966; Nagel, 1961; Suppe, 1977), or the formal systems-theoretic approach to models developed by Zeigler (1976) and adopted in many subsequent works (e.g. variants in Halfon, 1983; Maki & Thompson, 1973; Spriet & Vansteenkiste, 1982; Widman & Loparo, 1989). These formal/logical definitions are in any case not easy to apply to real examples of models in science, where "Modelling is certainly an art, involving a number of logical gaps" (Redhead, 1980, p.162).

2. Black (1962) suggests this generic usage of 'model' is a pretentious substitute for 'theory', whereas Stogdill (1970) calls it an unpretentious name for a theory.

3. Implementation is sometimes taken to mean actually reproducing a real copy of the system (Harnad, 1989), i.e. replication; this is not intended here.

4. It should be acknowledged that there are several fairly widespread definitions of simulation more restricted than the usage I have adopted here.
First, there is the usage that contrasts iterative solutions for mathematical systems with analytical solutions (Forrester, 1972). Second, there is the emphasis on simulations being processes, i.e. dynamic vs. static models, or "an operating model - one that is itself a process" (Schultz & Sullivan, 1972, p.?). These distinctions have some validity, but I am going to ignore them for convenience, as analytical and static models stand in the same relationship to targets and hypotheses as iterative or temporal ones. Third is the usage of 'simulation' to refer to relatively detailed models of specific systems (e.g. of a particular species in a certain niche) as opposed to more general models (e.g. of species propagation), which may also be implemented on computers for iterative solutions (e.g. Levins, 1993; Maynard Smith, 1974). Fourth is the distinction of simulations as models that only attempt to match input-output behaviour (e.g. Ringle, 1979; Dreyfus, 1979), as opposed to models that are supposed to have the same internal mechanisms as their target. These latter distinctions often carry the implication that simulations are used for applications and models for science, i.e. they tend to be polemic rather than principled (Palladino, 1991), and they are certainly not clear-cut.

5. The term 'source' is taken from Harre (1970b), who discusses this notion extensively. Unfortunately the term 'source' is also occasionally used by some authors for what I have called the target.

6. 'Biologically-inspired' robots can be criticised at times for using 'biological' as an excuse for not evaluating the mechanism against other engineered solutions, while using 'inspired' as a disclaimer against being required to show that it applies to biology.

7. Some authors do use 'accuracy' in the sense of 'replicative validity', e.g. Bhalla et al. (1992): "accuracy is defined as the average normalized mean square difference between the simulator output and the reference curve" (p.453). The term 'match' is used instead in this article (see section 3.7).

Table 1: Examples of biorobot research. This is intended to be a representative sampling, not a fully comprehensive listing.

Simple sensorimotor control
  Chemical
    Moth pheromone tracking: Kuwana, Shimoyama, & Miura, 1995; Ishida, Kobayashi, Nakamoto, & Moriisumi, 1999; Kanzaki, 1996
    Ant trail following: Sharpe & Webb, 1998; Russell, 1998
    Lobster plume following: Grasso, Consi, Mountain, & Atema, 1996; Ayers et al., 1998
    C. elegans gradient climb: Morse, Ferree, & Lockery, 1998
  Auditory
    Cricket phonotaxis: Webb, 1995; Lund, Webb, & Hallam, 1998; Webb & Scutt, 2000
    Owl sound localisation: Rucci, Edelman, & Wray, 1999
    Human localisation: Horiuchi, 1997; Huang, Ohnishi, & Sugie, 1995
    Bat sonar: Kuc, 1997; Peremans, Walker, & Hallam, 1998
  Visual
    Locust looming detection: Blanchard, Verschure, & Rind, 1999; Indiveri, 1998
    Frog snapping: Arbib & Liaw, 1995
    Fly motion detection to control movement: Franceschini, Pichon, & Blanes, 1992; Hoshino, Mura, Morii, Suematsu, & Shimoyama, 1998; Huber & Bulthoff, 1998; 1997; Harrison & Koch, 1999
    Praying mantis peering: Lewis & Nelson, 1998
    Human oculomotor reflex: Horiuchi & Koch, 1999; Shibata & Schaal, 1999
    Saccade control: Clark, 1998; Schall & Hanes, 1998
  Other
    Ant polarized light compass: Lambrinos et al., 1997
    Lobster anemotaxis: Ayers et al., 1998
    Cricket wind escape: Chapman & Webb, 1999
    Trace fossils: Prescott & Ibbotson, 1997
Complex motor control
  Walking
    Stick insect: Cruse et al., 1998; Pfeiffer et al., 1995
    Cockroach: Espenschied et al., 1996; Nelson & Quinn, 1998; Binnard, 1995
    Four-legged mammal: Ilg et al., 1998; Berkemeier & Desai, 1996
  Swimming
    Tail propulsion: Triantafyllou & Triantafyllou, 1995; Kumph, 1998
    Pectoral fin: Kato & Inaba, 1998
    Undulation: Patel et al., 1998
    Flagellar motion: Mojarrad & Shahinpoor, 1997
  Flying
    Insect wings: Miki & Shimoyama, 1998; Fearing, 1999
    Bat: Pornsin-Sirirak & Tai, 1999
  Arms/hands
    Spinal circuits: Hannaford et al., 1995
    Cerebellar control: Fagg et al., 1997
    Grasping: Leoni et al., 1998
    Rhythmic movement: Schaal & Sternad, 2001
    Haptic exploration: Erkman et al., 1999
  Humanoid: Special issue, Advanced Robotics 11(6), 1997; Brooks & Stein, 1993; Hirai et al., 1998
  Other
    Running & hopping: Raibert, 1986
    Brachiation: Saito & Fukuda, 1996
    Mastication: Takanobu et al., 1998
    Snakes: Hirose, 1993; review in Worst
    Paper wasp nest construction: Honma, 1996
Navigation
  Landmarks
    Ant/bee landmark homing: Möller, 2000; Möller et al., 1998
  Maps
    Rat hippocampus: Burgess et al., 1997; Gaussier et al., 1997
  Search: review in Gelenbe et al., 1997
  Collective behaviours: Beckers et al., 1996; Melhuish et al., 1998
  Learning: Edelman et al., 1992; Sporns, forthcoming; Scutt & Damper, 1997; Saksida et al., 1997; Voegtlin & Verschure, 1999; Chang & Gaudiano, 1998

References

Achinstein, P. (1968). Concepts of Science: A philosophical analysis. Baltimore: Johns Hopkins Press. Ackoff, R. L. (1962). Scientific Method. New York: John Wiley & Sons. Amit, D. J. (1989). Modeling brain function: the world of attractor neural networks. Cambridge: Cambridge University Press. Arbib, M., & Liaw, J. (1995). Sensorimotor transformations in the worlds of frogs and robots. Artificial Intelligence, 72: 53-79. Arkin, R. C. (1998). Behaviour-based robotics. Cambridge, MA: MIT Press. Ashby, W. R. (1952). Design for a brain. London: Chapman and Hall. Ayers, J. (1995). A reactive ambulatory robot architecture for operation in current and surge. Proceedings of the Autonomous Vehicles in Mine Countermeasures Symposium, Naval Postgraduate School (pp. 15-31). http://www.dac.neu.edu/msc/nps95mcm%20manuscript.html Ayers, J., Zavracky, P., Mcgruer, N., Massa, D., Vorus, V., Mukherjee, R., & Currie, S. (1998). A modular behavioural-based architecture for biomimetic autonomous underwater robots. Autonomous Vehicles in Mine Countermeasures Symposium. http://www.dac.neu.edu/msc/biomimeticrobots98.html Barto, A. G. (1991). Learning and incremental dynamic programming. Behavioral and Brain Sciences, 14:94-95. Beckers, R., Holland, O.
E., & Deneubourg, J. L. (1996). From local actions to global tasks: stigmergy and collective robotics. Artificial Life IV, 181-189. Beer, R. D. (1990). Intelligence as Adaptive Behaviour: an experiment in computational neuroethology. London: Academic Press. Beer, R. D., Chiel, H. J., Quinn, R. D., & Ritzmann, R. E. (1998). Biorobotic approaches to the study of motor systems. Current Opinion in Neurobiology, 8:777-782. Beer, R., Quinn, R., Chiel, H., & Ritzmann, R. (1997). Biologically inspired approaches to robotics. Communications of the ACM, 40: 31-38. Bekey, G. (1996). Biologically inspired control of autonomous robots. Robotics and Autonomous Systems, 18: 21-31. Berkemeier, M., & Desai, K. (1996). Design of a robot leg with elastic energy storage, comparison to biology, and preliminary experimental results. Proceedings - IEEE International Conference on Robotics and Automation, 1996: 213-218. Bertin, R. J. V., & van de Grind, W. A. (1996). The influence of light-dark adaptation and lateral inhibition on phototaxic foraging: a hypothetical animal study. Adaptive Behaviour, 5:141-167. Bhalla, U. S., Bilitch, D. H., & Bower, J. M. (1992). Rallpacks: a set of benchmarks for neural simulators. Trends in Neurosciences, 15:453-58. Binnard, M. B. (1995). Design of a Small Pneumatic Walking Robot. ftp://ftp.ai.mit.edu/pub/users/binn/SMthesis.zip. Black, M. (1962). Models and Metaphors. Ithaca: Cornell University Press. Blanchard, M., Verschure, P. F. M. J., & Rind, F. C. (1999). Using a mobile robot to study locust collision avoidance responses. International Journal of Neural Systems, 9:405-410. Bower, J. M. (1992). Modeling the nervous system. Trends in Neurosciences, 15:411-412. Bower, J. M., & Koch, C. (1992). Experimentalists and modelers:can we all just get along? Trends in Neurosciences, 15:458-461. Braithwaite, R. B. (1960). Models in the Empirical Sciences. in E. Nagel, P. Suppes, & A. Tarski (editors), Logic methodology and philosophy of science(pp. 224-231). Stanford: Stanford University Press. Brooks, R. A. (1986). Achieving artificial intelligence through building robots. A.I. Memo 899 M.I.T. A.I. Lab. Brooks, R. A., & Stein, L. A. (1993). Building Brains for Bodies. A.I. Memo No. 1439. MIT A.I. Lab: Brooks, R. J., & Tobias, A. M. (1996). Choosing the best model: level of detail, complexity and model performance. Mathematical Computer Modelling, 24:1-14. Bulloch, A. G. M., & Syed, N. I. (1992). Reconstruction of neuronal networks in culture. Trends in Neurosciences, 15:422-427. Bullock, S. (1997). An exploration of signalling behaviour by both analytic and simulation means for both discrete and continuous models. 4th European Conference on Artificial Life. Cambridge MA: MIT Press. Bunge, M. (1973). Method, Model and Matter. Holland: D. Reidal Publishing co. Burgess, N., Donnett, J. G., Jeffery, K. J., & O'Keefe, J. (1997). Robotic and neuronal simulation of the hippocampus and rat navigation. Philosophical Transactions of the Royal Society, B, 352:1535-1543. Burgess, N., Donnett, J. G., & O'Keefe, J. (1998). Using a mobile robot to test a model of the rat hippocampus. Connection Science, 10:291-300. Burgess, N., Jackson, A., Hartley, T., & O'Keefe, J. (2000). Predictions derived from modelling the hippocampal role in navigation. Biological Cybernetics, 83:301-312. Carnap, R. (1966). Philosophical Foundations of Physics. New York: Basic Books. Cartwright, B., & Collett, T. (1983). Landmark learning in bees. Journal of Comparative Physiology A, 151: 521-543. Cartwright, N. (1983). 
How the laws of physics lie. Oxford: Clarendon Press. Caswell, H. (1988). Theories and models in ecology - a different perspective. Ecological Modelling, 43:33-44. Chan, K. H., & Tidwell, P. M. (1993). The reality of Artificial Life: can computer simulations become realizations? submission to Third International Conference on Artificial Life. Chang, C., & Gaudiano, P. (1998). Application of biological learning theories to mobile robot avoidance and approach behaviours. Journal of Complex Systems, 1:79-114. Chao, Y. R. (1960). Models in Linguistics and Models in General. in E. Nagel, P. Suppes, & A. Tarski (editors), Logic, methodology and philosophy of science(pp. 558-566). Stanford: Stanford University Press. Chapman, T., & Webb, B. (1999). A neuromorphic hair sensor model of wind-mediated escape in the cricket. International Journal of Neural Systems, 9:397-403. Chiel, H., & Beer, R. (1997). The brain has a body: adaptive behaviour emerges from interactions of nervous system, body and environment. Trends in Neurosciences, 20:553-557. Churchland, P. S., Koch, C., & Sejnowski, T. J. (1990). What is computational neuroscience? E. L. Schwartz (editor), Computational Neuroscience. Cambridge, Mass.: MIT Press. Churchland, P. S., & Sejnowski, T. J. (1988). Perspectives on Cognitive Neuroscience. Science, 242:741-745. Clark, J. J. (1998). Spatial attention and saccadic camera motion. Proceedings - IEEE International Conference on Robotics and Automation, 1998: 3247-3252. Cliff, D. (1991). Computational neuroethology: a provisional manifesto. in J.-A. Meyer, & S. W. Wilson (editors), From animals to animats(pp. 29-39). Cambridge, MA: MIT Press. Colby, K. M. (1981). Modeling a paranoid mind. Behavioural and Brain Sciences, 4:515-560. Collin, C., & Woodburn, R. (1998). Neuromorphism or Pragmatism? A formal approach. in L. Smith, & A. Hamilton (editors), Neuromorphic Systems: Engineering Silicon from Neurobiology. London: World Scientific. Conant, R. C., & Ashby, W. R. (1991). Every good regulator of a system must be a model of that system. reprinted in G. J. Klir (editor), Facets of System Science. New York: Plenum Press. Connell, J. H. (1990). Minimalist Mobile Robotics: A colony style architecture for an Artificial Creature. Boston: Academic Press. Crick, F. (1989). The recent excitement about neural networks. Nature, 337:129-132. Croon, M. A., & van de Vijver, F. J. R. (1994). Introduction. in M. A. Croon, & F. J. R. van de Vijver (editors), Viability of mathematical models in the social and behavioural science. Lisse: Swets and Zeitlinger. Cruse, H. (2000). (draft paper). AAAI 1998 Fall Symposium on Robots and Biology. Cruse, H., Kindermann, T., Schumm, M., Dean, J., & Schmitz, J. (1998). Walknet - a biologically inspired network to control six-legged walking. Neural Networks, 11: 1435-1447. Dario, P., & et al. (1997). Sensors and actuators for humanoid robots. Advanced Robotics, 11:567-584. Deakin, M. A. B. (1990). Modelling Biological Systems. in T. L. Vincent, A. I. Mees, & L. S. Jennings (editors), Dynamics of complex interconnected biological systems(pp. 2-16). Boston: Birkhauser. Dean, J. (1998). Animats and what they can tell us. Trends in Cognitive Sciences, 2:60-67. Delcomyn, F., Nelson, M., & CocatreZilgien, J. (1996). Sense organs of insect legs and the selection of sensors for agile walking robots. International Journal of Robotics Research, 15: 113-127. Dennett, D. C. (1979). Brainstorms: Philosophical Essays on Mind and Psychology. Sussex: Harvester Press. Descartes, R. (1662). 
Traité de l'homme. Translation 1911, E. S. Haldane & G. R. T. Ross. Cambridge: Cambridge University Press. Doucet, P., & Sloep, P. B. (1992). Mathematical modelling in the life sciences. Chichester: Ellis Horwood. Dreyfus, H. L. (1979). A framework for misrepresenting knowledge. in M. Ringle (editor), Philosophical Perspectives in Artificial Intelligence. Sussex: Harvester Press. Dror, I. E., & Gallogly, D. P. (1999). Computational analyses in cognitive science: in defense of biological implausibility. Psychonomic Bulletin and Review, 6:173-182. Durbin, R. (1989). On the correspondence between network models and the nervous system. in R. Durbin, C. Miall, & G. Mitchison (editors), The computing neuron. Wokingham: Addison-Wesley. Edelman, G. M., Reeke, G. N., Gall, W. E., Tononi, G., Williams, D., & Sporns, O. (1992). Synthetic neural modeling applied to a real world artifact. Proceedings of the National Academy of Sciences, 89:7267-7271. Elliott, T., Howarth, C. I., & Shadbolt, N. R. (1996). Neural competition and statistical mechanics. Proceedings of the Royal Society B, 263:601-606. Erkman, I., Erkman, A. M., Takkaya, A. E., & Pasinlioglu, T. (1999). Haptic perception of shape and hollowness of deformable objects using the Anthrobot-III robot hand. Journal of Robotic Systems, 16:9-24. Espenschied, K., Quinn, R., Beer, R., & Chiel, H. (1996). Biologically based distributed control and local reflexes improve rough terrain locomotion in a hexapod robot. Robotics and Autonomous Systems, 18: 59-64. Estes, W. K. (1975). Some targets for mathematical psychology. Journal of Mathematical Psychology, 12:263-282. Etienne, A. S. (1998). Mammalian navigation, neural models and robotics. Connection Science, 10:271-289. Etienne-Cummings, R., Van der Spiegel, P., & Mueller, P. (1998). Neuromorphic and digital hybrid systems. L. Smith, & A. Hamilton (editors), Neuromorphic Systems: Engineering Silicon from Neurobiology. London: World Scientific. Fagg, A., Sitkoff, N., Barto, A., & Houk, J. (1997). Cerebellar learning for control of a two-link arm in muscle space. Proceedings - IEEE International Conference on Robotics and Automation, 1997: 2638-2644. Fearing, R. S. (1999). Micromechanical Flying Insect [Web Page]. URL http://robotics.eecs.berkeley.edu/~ronf/mfi.html. Feibleman, J. K. (1954). Theory of integrative levels. British Journal of the Philosophy of Science, 5:59-66. Ferrell, C. (1995). Comparison of three insect-inspired locomotion controllers. Robotics and Autonomous Systems, 16: 135-159. Flynn, A. M., & Brooks, R. A. (1989). Battling Reality. A.I. Memo 1148 M.I.T. A.I. Lab. Fodor, J. A. (1968). Psychological Explanation. Random House. Fodor, J. A., & Pylyshyn, Z. (1988). Connectionism and cognitive architecture: a critical analysis. Cognition, 28:3-71. Forrester, J. W. (1972). Principles of Systems. Cambridge, Mass: Wright-Allen Press. Fowler, A. C. (1997). Mathematical Models in the Applied Sciences. Cambridge: Cambridge University Press. Franceschini, N. (1996). Engineering applications of small brains. FED Journal, 7:38-52. Franceschini, N., Pichon, J. M., & Blanes, C. (1992). From insect vision to robot vision. Philosophical Transactions of the Royal Society B, 337:283-294. Frijda, N. J. (1967). Problems of computer simulation. Behavioural Science, 12:59-. Gaussier, P., Banquet, J. P., Joulain, C., Revel, A., & Zrehen, S. (1997). Validation of a hippocampal model on a mobile robot. in Vision, Recognition, Action: Neural Models of Mind and Machine, Boston, Massachusetts.
Gelenbe, E., Schmajuk, N., Staddon, J., & Reif, J. (1997). Autonomous search by robots and animals: a survey. Robotics and Autonomous Systems, 22: 23-34. Giere, R. N. (1997). Understanding Scientific Reasoning. Orlando: Harcourt Brace. Gordon, G. (1969). System simulation. New Jersey: Prentice Hall. Grasso, F., Consi, T., Mountain, D., & Atema, J. (1996). Locating odor sources in turbulence with a lobster inspired robot. In Maes, P., Mataric, M. J., Meyer, J. A., Pollack, J., and Wilson, S. W. (editors) Sixth International Conference on Simulation of Adaptive Behaviour: From animals to animats 4 Cambridge, Mass.: MIT Press. Grasso, F., Consi, T., Mountain, D., & Atema, J. (2000). Biomimetic robot lobster performs chemo-orientation in turbulence using a pair of spatially separated sensors: Progress and challenges . Robotics and Autonomous Systems, 30:115-131. Grimm, V. (1994). Mathematical models and understanding in ecology. Ecological Modelling, 75/76:641-651. Haefner, J. W. (1996). Modeling Biological Systems. New York: Chapman and Hall. Halfon, E. (1983). Is there a best model structure? I Modeling the fate of a toxic substance in a lake. Ecological Modelling, 20:135-152. Hallam, J. (1998). Can we mix robotics and biology? Workshop on Biomorphic Robots Victoria BC Canada. Hannaford, B., Winters, J., Chou, C.-P., & Marbot, P.-H. (1995). The anthroform biorobotic arm: a system for the study of spinal circuits. Annals of Biomedical Engineering, 23:399-408. Harnad, S. (1989). Minds, machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence, 1:5-25. Harre, R. (1970a). The method of science. London: Wykeham. Harre, R. (1970b). The principles of scientific thinking. Chicago: Chicago University Press. Harrison, R. R., & Koch, C. (1999). A robust analog VLSI motion sensor based on the visual system of the fly. Autonomous Robotics, 7:211-224. Hesse, M. B. (1966). Models and analogies in science. Notre Dame University Press. Hirai, K., Hirose, M., Haikawa, Y., & Takenaka, T. (1998). The development of the Honda humanoid robot. Proceedings - IEEE International Conference on Robotics and Automation, 1998: 1321-1326. Hirose, S., Raibert, M., & Pack, R. T. (1996). General systems group report (International Workshop on Biorobotics). Robotics and Autonomous Systems, 18:95-99. Hirose, S. (1993). Biologically Inspired Robotics. Oxford: Oxford University Press. Holland, O., & Melhuish, C. (1999). Stigmergy, self-organization and sorting in collective robotics. Artificial Life, 5:173-202. Honma, A. (1996). Construction robot for three-dimensional shapes based on the nesting behavior of paper wasps. Seimitsu Kogaku Kaishi/Journal of the Japan Society for Precision Engineering, 62: 805-809. Hoos, I. R. (1981). Methodology, Methods and Models. in F. E. Emery (editor), Systems Thinking: 2. Suffolk: Penguin. Hopkins, J. C., & Leipold, R. J. (1996). On the dangers of adjusting parameter values of mechanism-based mathematical models. Journal of Theoretical Biology, 183:417-427. Horiuchi, T. (1997). An auditory localization and co-ordinate transform chip. in Advances in Neural Information Processing Systems 8. Cambridge MA: MIT Press. Horiuchi, T. K., & Koch, C. (1999). Analog VLSI-based modeling of the primate oculomotor system. Neural Computation, 11:243-264. Hoshino, H., Mura, F., Morii, H., Suematsu, K., & Shimoyama, I. (1998). A small sized panoramic scanning visual sensor inspired by the fly's compound eye. Proceedings - IEEE International Conference on Robotics and Automation (pp. 
1641-1646). Huang, J., Ohnishi, N., & Sugie, N. (1995). A Biomimetic System for Localization and Separation of Multiple Sound Sources. IEEE Trans. on Instrumentation and Measurement, 44:733-738. Huber, S. A., & Bulthoff, H. H. (1998). Simulation and robot implementation of visual orientation behaviour of flies. in Pfeifer, R., Blumberg, B., Meyer, J. A., and Wilson, S. W. (editors), From animals to animats 5 (pp. 77-85). Cambridge, Mass.: MIT Press. Hughes, R. I. G. (1997). Models and representation. Philosophy of Science, 64:S325-S336. Ilg, W., Berns, K., Jedele, H., Albiez, J., Dillmann, R., Fischer, M., Witte, H., Biltzinger, J., Lehmann, R., & Schilling, N. (1998). BISAM: from small mammals to a four-legged walking machine. in Pfeifer, R., Blumberg, B., Meyer, J. A., and Wilson, S. W. (editors), From animals to animats 5. Cambridge, Mass.: MIT Press. Indiveri, G. (1998). Analog VLSI Model of Locust DCMD Neuron Response for Computation of Object Approach. L. Smith, & A. Hamilton (editors), Neuromorphic Systems: Engineering Silicon from Neurobiology. London: World Scientific. Ishida, H., Kobayashi, A., Nakamoto, T., & Moriisumi, T. (1999). Three dimensional odor compass. IEEE Transactions on Robotics and Automation, 15: 251-257. Kacser, H. (1960). Kinetic models of development and heredity. in Symposia of the Society for Experimental Biology: Models and analogues in biology. Cambridge: Cambridge University Press. Kanzaki, R. (1996). Behavioral and neural basis of instinctive behavior in insects: odor-source searching strategies without memory and learning. Robotics and Autonomous Systems, 18: 33-43. Kaplan, A. (1964). The conduct of enquiry. San Francisco: Chandler. Kato, M., & Inaba, T. (1998). Guidance and control of fish robot with apparatus of pectoral fin motion. Proceedings - IEEE International Conference on Robotics and Automation, 1: 446-451. Klir, J., & Valach, M. (1965). Cybernetic Modelling. London: ILIFFE Books Ltd. Koch, C. (1999). Biophysics of Computation. Oxford: Oxford University Press. Kolacinski, M., & Quinn, R. D. (1998). A novel biomimetic actuator system. Robotics and Autonomous Systems, 25:1-18. Kortmann, R., & Hallam, J. (1999). Studying animals through artificial evolution: the cricket case. in Proceedings of the Fifth European Conference on Artificial Life. Berlin: Springer-Verlag. Kroes, P. (1989). Structural analogies between physical systems. British Journal for the Philosophy of Science, 40:145-154. Kubow, T. M., & Full, R. J. (1999). The role of the mechanical system in control: a hypothesis of self-stabilisation in the cockroach. Philosophical Transactions of the Royal Society of London B, 354:849-861. Kuc, R. (1997). Biomimetic sonar recognizes objects using binaural information. Journal of the Acoustical Society of America, 102: 689-696. Kumph, J. M. (1998). MIT Robot Pike Project [Web Page]. URL http://www.mit.edu/afs/athena/org/t/towtank/OldFiles/www/pike/index.html. Kuwana, Y., Shimoyama, I., & Miura, H. (1995). Steering control of a mobile robot using insect antennae. IEEE International Conference on Intelligent Robots and Systems, 2: 530-535. Lamb, J. R. (1987). Computer simulation of biological systems. Molecular and Cellular Biochemistry, 73:91-98. Lambert, K., & Brittan, G. G. (1992). An introduction to the philosophy of science. 4th ed. Atascadero, CA: Ridgeview Publishing. Lambrinos, D., Maris, M., Kobayashi, H., Labhart, T., Pfeifer, R., & Wehner, R. (1997). An autonomous agent navigating with a polarized light compass.
Adaptive Behaviour, 6: 175-206. Lambrinos, D., Moller, R., Labhart, T., Pfeifer, R., & Wehner, R. (2000). A mobile robot employing insect strategies for navigation. Robotics and Autonomous Systems, 30:39-64. Langton, C. G. (1989). Artificial Life. Addison-Wesley. Laudan, L. (1981). A confutation of convergent realism. Philosophy of Science, 48:218-249. Leatherdale, W. H. (1974). The role of analogy, model and metaphor in science. Amsterdam: North Holland Publishing Co. Leoni, F., Guerrini, M., Laschi, C., Taddeucci, D., Dario, P., & Staritao, A. (1998). Implementing robotic grasping tasks using a biological approach. Proceedings -IEEE International Conference on Robotics and Automation, 3: p 2274-2280. Levins, R. (1993). A response to Orzack and Sober: formal analysis and the fluidity of science. Quarterly Review of Biology, 68:547-555. Levins, R. (1966). The strategy of model building in population biology. American Scientist, 54:421-431. Lewis, M. A., & Nelson, M. E. (1998). Look before you leap:peering behaviour for depth perception. in Pfeifer, R., Blumberg, B., Meyer, J. A., and Wilson, S. W. (editors)From animals to animats 5 (pp. 98-103). Cambridge, Mass.: MIT Press. Lund, H. H., Webb, B., & Hallam, J. (1997). A robot attracted to the cricket species Gryllus Bimaculatus. P. Husbands, & I. Harvey (editors), Fourth European Conference on Artificial Life(pp. 246-255). Cambridge MA: MIT Press. Lund, H. H., Webb, B., & Hallam, J. (1998). Physical and temporal scaling considerations in a robot model of cricket calling song preference. Artificial Life, 4:95-107. Maki, D. P., & Thompson, M. (1973). Mathematical Models and Applications. Englewood Cliffs, NJ: Prentice-Hall. Manna, Z., & Pnueli, A. (1991). On the faithfulness of formal models. Lecture Notes in Computer Science, 520:28-42. Maris, M., & Mahowald, M. (1998). Neuromorphic sensory motor mobile robot controller with pre-attention mechanism. L. Smith, & A. Hamilton (editors), Neuromorphic Systems: Engineering Silicon from Neurobiology. London: World Scientific. Marr, D. (1982). Vision. San Francisco: W.H. Freeman. Mataric, M. J. (1998). Behavior-based robotics as a tool for synthesis of artificial behavior and analysis of natural behavior. Trends in Cognitive Sciences, 2:82-87. Maynard Smith, J. (1974). Models in Ecology. Cambridge: Cambridge University Press. Maynard Smith, J. (1988). Did Darwin get it Right? London: Penguin. McGeer, T. (1990). Passive Walking with Knees. IEEE Conference on Robotics and Automation (pp. 1640-1645). Mead, C. (1989). Analog VLSI and Neural Systems . Reading, Mass: Addison-Wesley. Melhuish, C., Holland, O., & Hoddell, S. (1998). Collective sorting and segregation in robots with minimal sensing. in Pfeifer, R., Blumberg, B., Meyer, J. A., and Wilson, S. W. (editors)From animals to animats 5 Cambridge, Mass.: MIT Press. Meyer, J. (1997). From natural to artificial life: biomimetric mechanisms in animat designs. Robotics and Autonomous Systems, 22: 3-21. Miall, C. (1989). The diversity of neuronal properties. in R. Durbin, C. Miall, & G. Mitchison (editors), The computing neuron. Wokingham: Addison-Wesley. Miklos, G. L. G. (1993). Molecules and cognition: the latterday lessons of levels, language and lac. Journal of Neurobiology, 24:842-890. Milinski, M. (1991). Models are just protheses for our brains. Behavioral and Brain Sciences, 14. Mojarrad, M., & Shahinpoor, M. (1997). Biomimetic robotic propulsion using polymeric artificial muscles. 
Proceedings - IEEE International Conference on Robotics and Automation, 3: 2152-2157. Molenaar, I. M. (1994). Why do we need statistical models in the social and behavioural sciences. in M. A. Croon, & F. J. R. van de Vijver (editors), Viability of mathematical models in the social and behavioural science. Lisse: Swets and Zeitlinger. Möller, R. (2000). Insect visual homing strategies in a robot with analog processing. Biological Cybernetics, 83:231-243. Möller, R., Lambrinos, D., Pfeifer, R., Labhart, T., & Wehner, R. (1998). Modeling Ant Navigation with an autonomous agent. in Pfeifer, R., Blumberg, B., Meyer, J. A., and Wilson, S. W. (editors), From animals to animats 5. Cambridge, Mass.: MIT Press. Morgan, M. S. (1997). The technology of analogical models: Irving Fisher's monetary worlds. Philosophy of Science, 64:S304-S314. Morse, T. M., Ferree, T. C., & Lockery, S. R. (1998). Robust spatial navigation in a robot inspired by chemotaxis in Caenorhabditis elegans. Adaptive Behaviour, 6: 393-410. Mura, F., & Shimoyama, I. (1998). Visual guidance of a small mobile robot using active, biologically-inspired, eye movements. Proceedings - IEEE International Conference on Robotics and Automation, 3: 1859-1864. Nagel, E. (1961). The structure of science. London: Routledge & Kegan Paul. Nelson, G. M., & Quinn, R. D. (1998). Posture control of a cockroach-like robot. Proceedings - IEEE International Conference on Robotics and Automation, 1: 157-162. Onstad, D. W. (1988). Population-dynamics theory - the roles of analytical, simulation, and supercomputer models. Ecological Modelling, 43:111-124. Oreskes, N., Shrader-Frechette, K., & Belitz, K. (1994). Verification, validation and confirmation of numerical models in the earth sciences. Science, 263:641-646. Orzack, S. H., & Sober, E. (1993). A critical assessment of Levins's The strategy of model building in population biology (1966). Quarterly Review of Biology, 68:533-546. Palladino, P. (1991). Defining ecology - ecological theories, mathematical-models, and applied biology in the 1960s and 1970s. Journal of the History of Biology, 24:223-243. Palsson, B. O., & Lee, I. (1993). Model complexity has a significant effect on the numerical value and interpretation of metabolic sensitivity coefficients. Journal of Theoretical Biology, 161:299-315. Patel, G. N., Holleman, J. H., & DeWeerth, S. P. (1998). Analog VLSI model of intersegmental coordination with nearest neighbour coupling. in M. I. Jordan, M. J. Kearns, & S. A. Solla (editors), Advances in Neural Information Processing Systems 10 (pp. 719-726). Cambridge MA: MIT Press. Pattee, H. H. (1989). Simulations, realizations and theories of life. in C. Langton (editor), Artificial Life. Redwood City, Cal: Addison-Wesley. Peirce, G. J., & Ollanson, J. G. (1987). Eight reasons why optimal foraging theory is a complete waste of time. Oikos, 49:111-125. Peremans, H., Walker, A., & Hallam, J. C. T. (1998). 3D object localization with a binaural sonar head - inspirations from biology. Proceedings - IEEE International Conference on Robotics and Automation, 2795-2800. Pfeifer, R. (1996). Building "fungus eaters": design principles of autonomous agents. From animals to animats 4: Proceedings of the 4th International Conference on Simulation of Adaptive Behaviour, 3-11. Cambridge MA: MIT Press. Pfeiffer, F., Etze, J., & Weidemann, H. (1995). Six-legged technical walking considering biological principles. Robotics and Autonomous Systems, 14: 223-232. Pichon, J.-M., Blanes, C., & Franceschini, N. (1989).
From checker at panix.com Mon Dec 26 02:25:07 2005 From: checker at panix.com (Premise Checker) Date: Sun, 25 Dec 2005 21:25:07 -0500 (EST) Subject: [Paleopsych] Boston Globe: Back to utopia Message-ID: Back to utopia http://www.boston.com/news/globe/ideas/articles/2005/11/20/back_to_utopia?mode=PF [Would someone please read these books and tell me what a non-capitalist society would look like?] Can the antidote to today's neoliberal triumphalism be found in the pages of far-out science fiction? By Joshua Glenn | November 20, 2005 IN 1888, when Massachusetts newspaperman Edward Bellamy published his science fiction novel ''Looking Backward," set in a Boston of the year 2000, it sold half a million copies. Never mind the futuristic inventions (electric lighting, credit cards) and visionary city planning; what readers responded to was the transformation of a Gilded Age city of labor strikes and social unrest into a socialist utopia (Bellamy called it ''nationalist") of full employment and material abundance. By 1890 there were 162 reformist Bellamy Clubs around the country, with a membership that included public figures like the influential novelist, editor, and critic William Dean Howells; and from 1891-96, the Bellamy-inspired Nationalist Party helped propel the Populist Movement. The Bellamyites fervently believed, to paraphrase the slogan of today's anti-globalization movement, that another world was possible. But during the Cold War - thanks to Stalinism and the success of such dystopian fables as Aldous Huxley's ''Brave New World" and George Orwell's ''Nineteen Eighty-Four" - all radical programs promising social transformation became suspect. Speaking for his fellow chastened liberals at a Partisan Review symposium in 1952, for example, the theologian and public intellectual Reinhold Niebuhr dismissed what he called the utopianism of the 1930s as ''an adolescent embarrassment." Niebuhr and other influential anti-utopians of mid-century - Isaiah Berlin, Hannah Arendt, Karl Popper - had a point.
From Plato's ''Republic" to Thomas More's 1516 traveler's tale ''Utopia" (the title of which became a generic term), to the idealistic communism of Rousseau and other pre- and post-French Revolution thinkers, to Bellamy's ''Looking Backward" itself, utopian narratives have often shared a naive and unseemly eagerness to force square pegs into round holes via thought control and coercion. By the end of the 20th century, most utopian projects did look proto-totalitarian. In recent years, however, certain eminent contrarians - most notably Fredric Jameson, author of the seminal ''Postmodernism, Or, the Cultural Logic of Late Capitalism" (1991), and Russell Jacoby, author most recently of ''The End of Utopia" (1999) and ''Picture Imperfect: Utopian Thought for an Anti-Utopian Age" (2005) - have lamented the wholesale abandonment of such utopian ideas of the left as the abolition of property, the triumph of solidarity, and the end of racism and sexism. The question, for thinkers like these, is how to revive the spirit of utopia - the current enfeeblement of which, Jameson claims, ''saps our political options and tends to leave us all in the helpless position of passive accomplices and impotent handwringers" - without repeating the errors of what Jacoby has dubbed ''blueprint utopianism," that is, a tendency to map out utopian society in minute detail. How to avoid, as Jameson puts it, effectively ''colonizing the future"? Is the thought of a noncapitalist utopia even possible after Stalinism, after decades of anticommunist polemic on the part of brilliant and morally engaged intellectuals? Or are we all convinced, in a politically paralyzing way, that Margaret Thatcher had it right when she crowed that ''there is no alternative" to free-market capitalism? Borrowing Sartre's slogan, coined after the Soviet invasion of Hungary, about being neither communist nor anticommunist but ''anti-anticommunist," Jameson suggests we give ''anti-anti-utopianism" a try. In his latest book, ''Archaeologies of the Future," just published by Verso, he invites us to explore an overlooked canon of anti-anti-utopian narratives that some, to echo Niebuhr, might find embarrassingly adolescent: offbeat science fiction novels of the 1960s and '70s. Jameson, a professor of comparative literature at Duke, isn't talking about ''Star Trek" novelizations. Because of the Cold War emphasis on dystopias, Cold War writers like Philip K. Dick, Ursula K. Le Guin, and Samuel R. Delany had to find radical new ways to express their inexpressible hopes about the future, claims Jameson. At this moment of neoliberal triumphalism, he suggests, we should take these writers seriously - even if their ideas are packaged inside lurid paperbacks. In Dick's uncanny novels, the author demands of us that we decide for ourselves what's real and what isn't. ''Martian Time-Slip" (1964), for example, is partly told from the perspective of a 10-year-old schizophrenic colonist on Mars, where civilization is devolving into ''gubbish." And ''The Three Stigmata of Palmer Eldritch" (1965) is a psychedelic odyssey of hallucinations-within-hallucinations from which no reader emerges unscathed. Delany, meanwhile, is best known for ''Trouble on Triton" (1976), a self-consciously post-structuralist novel that depicts a future where neither heterosexuality nor homosexuality is the norm.
Le Guin, author of a fantasy series for children, ''The Earthsea Trilogy," explores Taoist, anarchist, and feminist themes in novels like ''The Left Hand of Darkness" (1969) and ''The Dispossessed" (1974). Fans of Dick, Delany, and their ilk warn neophytes not to read too many of their books too quickly: Doing so, as this reader can attest, tends to result in pronounced feelings of irreality, paranoia, and angst. In ''Archaeologies," Jameson characterizes utopian narratives (which he classifies as a subgenre of science fiction) as being, at the level of content, less a vision of a truly different world than a situation-specific response to a concrete historical dilemma: the immiseration of the working class during the later 19th century, in Bellamy's case. Such content is ''vacuous," he sniffs, and of interest primarily to antiquarians. The ability of utopian narratives in particular, and science fiction in general, to break the paralyzing spell of the quotidian has less to do with its content than with its form, he argues persuasively. (Buck Rogers-type science fiction in the mode of ''extrapolation and mere anticipation of all kinds of technological marvels," as Jameson puts it, is far less effective at doing so.) It requires a tremendous effort to imagine a daily life that is politically, economically, socially, and psychologically truly different from our own. And this effort, Jameson writes, warps the structure of science fiction. As a result, he claims, even Dick's amphetamine-fuelled potboilers are as productively alienating as the plays of Brecht and Beckett. But isn't it perverse to describe novels quite so alienating as utopian? The title character of Dick's ''Palmer Eldritch," for example, is an industrialist-turned-evil demiurge who brings to mankind a ''negative trinity" of ''alienation, blurred reality, and despair" in the form of Chew-Z, a drug that inducts users into a hallucinatory semireality from which they can never finally escape. Le Guin's ''The Dispossessed," meanwhile, was written as a pointed critique of typical utopian narratives: It's set on Anarres, a planet whose hippie-like inhabitants value voluntary cooperation, local control, and mutual tolerance - but who have preserved their grooviness through dogmatic conformism and an entrenched bureaucracy that stifles innovation. Le Guin's protagonist abandons Anarres for a nearby world, one that is superior in important respects because its inhabitants value the free market; later editions of the book are subtitled ''An Ambiguous Utopia." Delany, finally, gave ''Triton" (set on a Neptunian colony where no one goes hungry and everyone is sexually confused) the subtitle ''An Ambiguous Heterotopia," to signal his own critique not only of utopian narratives but of Le Guin's vestigial nostalgia for pastoral communes. Asked in a recent interview why the science fiction novels that he calls utopian portray future societies not even remotely like the cloud-cuckoo-land the term suggests, Jameson explained that the problem confronting Cold War science fiction writers was how to describe utopia ''negatively," in terms of what it won't be like. ''There is, in effect, a ban on graven images, meaning you can't represent the future in a realistic way," he said. Anti-anti-utopian writing ''has to be about freeing the imagination from the present," Jameson continued, ''rather than trying to offer impoverished pictures of what life in the future's going to be."
Dystopias aren't the only example of ''negative" utopianism, Jameson points out in ''Archaeologies." The rise to popularity in the mid-1960s and early '70s of disaster novels - about atomic warfare, meteors hitting the Earth, environmental collapse, and so forth - ought to be interpreted as evidence of a collective desire to start over from scratch, he writes. He points to books like Dick's ''Dr. Bloodmoney, or How We Got Along After the Bomb" (1965), a pastoral set in a post-apocalyptic Berkeley; Le Guin's ''The Lathe of Heaven" (1971), about an overpopulated Portland, Ore., made livable by a plague; and John Brunner's ''The Sheep Look Up" (1972), about an Earth whose air is unbreathable. These books are more utopian, in a way, than Bellamy-style idylls, Jameson claims, because the latter offer false hope that ameliorative reforms might transform society. ''What utopian thought wants to make us aware of is the need for complete systemic change, change in the totality of social relations, and not just an improvement in bourgeois culture," he said. ''If we want a [bourgeois idyll], we can go to Celebration, Fla." If discussing a future society that can't be represented realistically is complicated and off-putting, that's because ''it's a new form of thinking," Jameson insisted. ''It's a new dimension of the exercise of the imagination." Jameson, who's been writing about Dick, Le Guin, Delany, Brunner, and others in the pages of scholarly journals like Science Fiction Studies for 30 years, is reticent when it comes to the question of what makes a great anti-anti-utopian narrative. ''The talent or the greatness of science fiction writers," he said, ''lies in what individual solutions they have for a formal problem - the ban on graven images - that cannot be resolved. There's no universal recipe." But when it comes to the power of science fiction to spring us from what he claims is our current state of political paralysis, Jameson is enthusiastic. ''It's only when people come to realize that there is no alternative," he said, ''that they react against it, at least in their imaginations, and try to think of alternatives." Can reading science fiction, I asked, help us decide between various utopian alternatives - urban vs. pastoral, statist vs. anarchistic? No, replied Jameson, insisting there are ''utopian elements" in each of these. What science fiction does afford us, he said, ''is not a synthesis of these elements but a process where the imagination begins to question itself, to move back and forth among the possibilities." What contemporary science fiction author most inspires this ideal process? In ''Archaeologies," Jameson suggests it might be a former doctoral student of his, Kim Stanley Robinson, who wrote his dissertation on Philip K. Dick and whose popular trilogy, ''Red Mars" (1992), ''Green Mars" (1993), and ''Blue Mars" (1995), explores the political, economic, and ecological crises that ensue when 21st-century colonists from Earth begin terraforming Mars. Instead of asking the reader to decide on any one of the colonists' competing utopian ideologies, Jameson said, Robinson ''goes back and forth between these various visions, [allowing us to see] it's not a matter of choosing between them but of using them to destabilize our own existence, our own social life at present." 
In the final analysis, Jameson writes in ''Archaeologies," the demanding exercise of holding incompatible visions in mind is what ''gives utopia its savor and its bitter freshness, when the thought of utopias is still possible." Joshua Glenn writes the Examined Life column for Ideas. E-mail jglenn at globe.com. From checker at panix.com Mon Dec 26 02:25:23 2005 From: checker at panix.com (Premise Checker) Date: Sun, 25 Dec 2005 21:25:23 -0500 (EST) Subject: [Paleopsych] Slate: Theory of Anything? - Physicist Lawrence Krauss turns on his own. Message-ID: Theory of Anything? - Physicist Lawrence Krauss turns on his own. By Paul Boutin http://www.slate.com/id/2131014/?nav=fo [I am quite patient and will be glad to wait for string theorists to come up with something testable.] Theory of Anything? Physicist Lawrence Krauss turns on his own. By Paul Boutin Posted Wednesday, Nov. 23, 2005, at 11:44 AM ET Lawrence Krauss, a professor of physics and astronomy at Case Western Reserve University, has a reputation for shooting down pseudoscience. He opposed the teaching of [23]intelligent design on The NewsHour With Jim Lehrer. He penned an essay for the New York Times that [24]dissed President Bush's proposal for a manned Mars mission. Yet in his latest book, [25]Hiding in the Mirror, Krauss turns on his own--by taking on string theory, the leading edge of theoretical physics. Krauss is probably right that string theory is a threat to science, but his book proves he's too late to stop it. String theory, which stretches back to the late 1960s, has become in the last 20 years the field of choice for up-and-coming physics researchers. Many of them hope it will deliver a "Theory of Everything"--the key to a few elegant equations that explain the workings of the entire universe, from quarks to galaxies. Elegance is a term theorists apply to formulas, like E=mc^2, which are simple and symmetrical yet have great scope and power. The concept has become so [26]associated with string theory that Nova's three-hour 2003 series on the topic was titled [27]The Elegant Universe (you can watch the whole thing online for free [28]here). Yet a demonstration of string theory's mathematical elegance was conspicuously absent from Nova's special effects and on-location shoots. No one explained any of the math onscreen. That's because compared to E=mc^2, string theory equations look like spaghetti. And unfortunately for the aspirations of its proponents, the ideas are just as hard to explain in words. Let's give it a shot anyway, by retracing the 20th century's three big breakthroughs in understanding the universe. Step 1: Relativity (1905-1915). Einstein's [31]Special Theory of Relativity says matter and energy (E and m in the famous equation) are equivalent. His [32]General Theory of Relativity says gravity is caused by the warping of space due to the presence of matter. In 1905, this seemed like opium-smoking nonsense. But Einstein's complex math (E=mc^2 is the easy part) accurately predicted oddball behaviors in stars and galaxies that were later observed and confirmed by astronomers. Step 2: Quantum mechanics (1900-1927). Relativistic math works wonderfully for predicting events at the galactic scale, but physicists found that subatomic particles don't obey the rules. Their behavior follows complex probability formulas rather than graceful high-school geometry. The results of particle physics experiments can't be determined exactly--you can only calculate the likeliness of each possible outcome.
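[Aside: the relation the next paragraph describes in words can be stated compactly. In standard textbook notation - not quoted from the article - it reads:

$$ \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}, \qquad \hbar \equiv \frac{h}{2\pi} $$

where $\Delta x$ and $\Delta p$ are the statistical spreads in a particle's position and momentum, and $h$ is Planck's constant. Squeeze one spread and the other must grow.]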
Quantum's elegant equation is the [33]Heisenberg uncertainty principle. It says the position (x) and momentum (p) of any one particle are never completely knowable at the same time. The closest you can get is a function related to Planck's constant (h), the theoretical minimum unit to which the universe can be quantized. Einstein dismissed this probabilistic model of the universe with his famous quip, "God does not play dice." But just as Einstein's own theories were vindicated by real-world tests, he had to adjust his worldview when experimental results matched quantum's crazy predictions over and over again. These two breakthroughs left scientists with one major problem. If relativity and quantum mechanics are both correct, they should work in agreement to model the Big Bang, the point 14 billion years ago at which the universe was at the same time supermassive (where relativity works) and supersmall (where quantum math holds). Instead, the math breaks down. Einstein spent his last three decades unsuccessfully seeking a formula to reconcile it all--a Theory of Everything. Step 3: String theory (1969-present). String theory proposes a solution that reconciles relativity and quantum mechanics. To get there, it requires two radical changes in our view of the universe. The first is easy: What we've presumed are subatomic particles are actually tiny vibrating strings of energy, each 100 billion billion times smaller than the protons at the nucleus of an atom. That's easy to accept. But for the math to work, there also must be more physical dimensions to reality than the three of space and one of time that we can perceive. The most popular string models require 10 or 11 dimensions. What we perceive as solid matter is mathematically explainable as the three-dimensional manifestation of "strings" of elementary particles vibrating and dancing through multiple dimensions of reality, like shadows on a wall. In theory, these extra dimensions surround us and contain myriad parallel universes. Nova's "The Elegant Universe" used Matrix-like computer animation to convincingly visualize these hidden dimensions. Sounds neat, huh--almost too neat? Krauss' book is subtitled The Mysterious Allure of Extra Dimensions as a polite way of saying String Theory Is for Suckers. String theory, he explains, has a catch: Unlike relativity and quantum mechanics, it can't be tested. That is, no one has been able to devise a feasible experiment for which string theory predicts measurable results any different from what the current wisdom already says would happen. Scientific Method 101 says that if you can't run a test that might disprove your theory, you can't claim it as fact. When I asked physicists like Nobel Prize-winner [34]Frank Wilczek and string theory superstar [35]Edward Witten for ideas about how to prove string theory, they typically began with scenarios like, "Let's say we had a particle accelerator the size of the Milky Way ..." Wilczek said strings aren't a theory, but rather a search for a theory. Witten bluntly added, "We don't yet understand the core idea." If stringers admit that they're only theorizing about a theory, why is Krauss going after them? He dances around the topic until the final page of his book, when he finally admits, "Perhaps I am oversensitive on this subject ... " Then he slips into passive-voice scientist-speak. But here's what he's trying to say: No matter how elegant a theory is, it's a baloney sandwich until it survives real-world testing. Krauss should know. 
He spent the 1980s proposing formulas that worked on a chalkboard but not in the lab. He finally made his name in the '90s when astronomers' observations confirmed his seemingly outlandish theory that most of the energy in the universe resides in empty space. Now Krauss' field of theoretical physics is overrun with theorists freed from the shackles of experimental proof. The string theorists blithely create mathematical models positing that the universe we observe is just one of an infinite number of possible universes that coexist in dimensions we can't perceive. And there's no way to prove them wrong in our lifetime. That's not a Theory of Everything, it's a Theory of Anything, sold with whizzy PBS special effects. It's not just scientists like Krauss who stand to lose from this; it's all of us. Einstein's theories paved the way for nuclear power. Quantum mechanics spawned the transistor and the computer chip. What if 21st-century physicists refuse to deliver anything solid without a galaxy-sized accelerator? "String theory is textbook post-modernism fueled by irresponsible expenditures of money," Nobel Prize-winner Robert Laughlin [36]griped to the San Francisco Chronicle earlier this year. Krauss' book won't turn that tide. Hiding in the Mirror does a much better job of explaining string theory than discrediting it. Krauss knows he's right, but every time he comes close to the kill he stops to make nice with his colleagues. Last year, Krauss told a New York Times reporter that string theory was "a colossal failure." Now he writes that the Times quoted him "out of context." In spite of himself, he has internalized the postmodern jargon. Goodbye, Department of Physics. Hello, String Studies. Related in Slate _________________________________________________________________ Superstring theory is "currently the only plausible candidate for a Theory of Everything," according to [37]this 1996 article by Jim Holt. In 2004, Amanda Schaffer [38]labeled Elegant Universe author and string theory aficionado Brian Greene "the closest thing physics has to a pop star." David Greenberg [39]debunks the myth that Einstein's theory begat moral relativism and artistic modernism. Holt writes about the end of the universe [40]here. Learn about "quantum weirdness" in [41]this dialogue. [42]Paul Boutin is a Silicon Valley writer who spent 15 years as a software engineer and manager. References 21. http://www.slate.com/?id=3944 23. http://www.pbs.org/newshour/bb/religion/july-dec05/evolution_8-05.html 24. http://genesis1.phys.cwru.edu/~krauss/nytimesjan6.html 25. http://www.amazon.com/exec/obidos/tg/detail/-/0670033952 26. http://www.google.com/search?q=%2522string+theory%2522+elegant 27. http://www.pbs.org/wgbh/nova/elegant/ 28. http://www.pbs.org/wgbh/nova/elegant/program.html 31. http://archive.ncsa.uiuc.edu/Cyberia/NumRel/SpecialRel.html 32. http://archive.ncsa.uiuc.edu/Cyberia/NumRel/GenRelativity.html 33. http://en.wikipedia.org/wiki/Uncertainty_principle 34. http://en.wikipedia.org/wiki/Frank_Wilczek 35. http://en.wikipedia.org/wiki/Edward_Witten 36. http://sfgate.com/cgi-bin/article.cgi?f=/c/a/2005/03/14/MNGRMBOURE1.DTL 37. http://www.slate.com/id/3119/ 38. http://www.slate.com/id/2103335/ 39. http://www.slate.com/id/74164/ 40. http://www.slate.com/id/2096491/entry/2096506/ 41. http://www.slate.com/id/2082874/entry/2082873/ 42.
http://paulboutin.weblogger.com/ From checker at panix.com Mon Dec 26 02:25:34 2005 From: checker at panix.com (Premise Checker) Date: Sun, 25 Dec 2005 21:25:34 -0500 (EST) Subject: [Paleopsych] Promethea: Doublethink Message-ID: Doublethink http://www.promethea.org/Misc_Compositions/Doublethink.html [This all assumes that nearly everyone dislikes the System but just can't admit it, even to themselves. It would be more correct to say that the masses are too distracted by bread and circuses to worry.] One of the more profound realizations on the part of Eric Blair, who wrote under the pen name George Orwell, is first illustrated in this passage of his landmark dystopian novel Nineteen Eighty-Four: The Party said that Oceania had never been in alliance with Eurasia. He, Winston Smith, knew that Oceania had been in alliance with Eurasia as short a time as four years ago. But where did that knowledge exist? Only in his own consciousness, which in any case must soon be annihilated. And if all others accepted the lie which the Party imposed -- if all records told the same tale -- then the lie passed into history and became truth. "Who controls the past," ran the Party slogan, "controls the future: who controls the present controls the past." And yet the past, though of its nature alterable, never had been altered. Whatever was true now was true from everlasting to everlasting. It was quite simple. All that was needed was an unending series of victories over your own memory. "Reality control," they called it: in Newspeak, "doublethink." ... His mind slid away into the labyrinthine world of doublethink. To know and not to know, to be conscious of complete truthfulness while telling carefully constructed lies, to hold simultaneously two opinions which cancelled out, knowing them to be contradictory and believing in both of them, to use logic against logic, to repudiate morality while laying claim to it, to believe that democracy was impossible and that the Party was the guardian of democracy, to forget, whatever it was necessary to forget, then to draw it back into memory again at the moment when it was needed, and then promptly to forget it again, and above all, to apply the same process to the process itself -- that was the ultimate subtlety: consciously to induce unconsciousness, and then, once again, to become unconscious of the act of hypnosis you had just performed. Even to understand the word "doublethink" involved the use of doublethink. The profoundest relevance of doublethink likely eludes most readers of Nineteen Eighty-Four. After all, it is such a plentiful, dense and multifaceted book. A wealth of original concepts, perspicacious analysis, and chiefly, effortless prose immerse the unsuspecting reader in the senses and thoughts of Orwell's protagonist Winston Smith. Arrested by the almost painful clarity and harsh reality feel of the storytelling and imagery, the reader may very easily be led astray from applying adequate attention to some of Orwell's most brilliant and important insights. (My own favorite piece of writing in the book which momentarily distracted me from its own implications is the marvelous passage which concludes: "But you could not have pure love or pure lust nowadays. No emotion was pure, because everything was mixed up with fear and hatred. Their embrace had been a battle, the climax a victory. It was a blow struck against the Party. 
It was a political act.") Orwell's artistic mastery distracts from his content unless his readers read well - carefully, thoughtfully, repeatedly. Ironically this problem might have been avoided in a mediocre novel, although we cannot seriously wish that Orwell had sacrificed his own goals simply to compensate for others' failures of attention, and not only because that would deprive us of the masterpiece. Evidence that in particular doublethink has received lax and insufficient attention from millions of Orwell's readers comes in the form of the unwitting neologism, "doublespeak." Somehow, perhaps because of the expression "double talk," readers have conflated Orwell's Newspeak with Orwell's doublethink to make doublespeak, a word which Orwell never used in Nineteen Eighty-Four, so doublethink comes down into common colloquy and oratory as the supposedly Orwellian doublespeak. But doublespeak is a mere offshoot in meaning from Newspeak, a mere subset of the abuse of language - disingenuous, manipulative, often internally contradictory meanings in politicized words and phrases. (Doublethink produces instances of doublespeak.) An understanding of doublespeak is useful, but the idea is not nearly as profound as doublethink, missing most of Orwell's subtle point. Doublethink refers to resolving contradictions which (otherwise) cannot be resolved, by keeping at least two alternate versions of something in mind at once, remembering only the approved one in any circumstance. One does not experience cognitive dissonance unless one fails at proper doublethink, in which case raw discomfort, almost physical pain, may be experienced. Other psychologically important Orwellian Newspeak neologisms, such as crimestop, blackwhite, and goodthink, are contained within doublethink. From crimestop to blackwhite to goodthink, Orwell describes the process as more and more instinctive. Orwell explains doublethink most explicitly here: Doublethink means the power of holding two contradictory beliefs in one's mind simultaneously, and accepting both of them. The Party intellectual knows in which direction his memories must be altered; he therefore knows that he is playing tricks with reality; but by the exercise of doublethink he also satisfies himself that reality is not violated. The process has to be conscious, or it would not be carried out with sufficient precision, but it also has to be unconscious, or it would bring with it a feeling of falsity and hence of guilt. In his extreme circumstance of secret rebellion against doublethink, Winston finds that it helps him retain a sense of his own sanity - given the exigency of resisting nearly the whole of apparent human belief - to use a notion of accessing objectivity, or keep safe a recognition of an objective reality: To tell deliberate lies while genuinely believing in them, to forget any fact that has become inconvenient, and then, when it becomes necessary again, to draw it back from oblivion for just so long as it is needed, to deny the existence of objective reality and all the while to take account of the reality which one denies -- all this is indispensably necessary. Even in using the word doublethink it is necessary to exercise doublethink. For by using the word one admits that one is tampering with reality; by a fresh act of doublethink one erases this knowledge; and so on indefinitely, with the lie always one leap ahead of the truth. 
Here we see that an objective "true" perspective can serve as a useful model within a context, proving very useful indeed to Winston in going against the Party's "collective solipsism" - though we do not see evidence supporting Ayn Rand's assertion that objectivity exists as a universal, perfectly accessible ideal entirely independent of context and perspective. Indeed, doublethink describes an internal conflict within the mind; it is even possible to follow doublethink with two things which do not really conflict practically, merely because one consciously and unconsciously believes it proper. The most important thing we can now do with Orwell's concept of doublethink is to apply it to our own subtle daily discomfort which leads to forgetting. It is difficult to contemplate the full extent of what goes on that we know should be changed, so we ignore even what we know. It is difficult to think of the extent of misery which is experienced in this world if not in our own lives; of the injustice, and of the misinformation and lies. It is especially difficult to even learn about most of it, for its sheer breadth and depth. The worst is the depth; to know that unbounded monstrosities are committed in the name of the established order and are nonetheless not even common topics of conversation, much less grounds for immediate rebellion against that order - that is painfully unbelievable. If a shocking number of these and a great many lesser but similar acts are the work of those in power over us directly and indirectly, it is difficult to continue to see the situation as evidence shows it: that much of the worst is done by those who are supposed the best, and that in value "the high" is really often the low, and that the things on which people expend so much energy and attention are really unimportant. Thousands of affirmations that one is wrong in this unpopular opinion are given every day, ranging from outright declarations that all is well, that "the best" among us are really deserving, that the powerful are as they should be, and that whatever gains mass attention must therefore be important, to more numerous slight hints which nonetheless nudge us toward the same conclusions. It is difficult to know and always remember. Therefore it is easier to ignore the contradictions between what one has learned by experience and what one is meant to think, rationalizing as necessary and improvising justification as it seems required. Instead of the effort of trying to remember and its consequence of discomfort, perhaps a painful somberness, even a feeling of being jarred to half-craziness by knowing terrible facts which should not ever have happened to become facts to be knowable - oh, it is so much easier to forget what one does know, at least enough of the time to render our responses occasional, symbolic, and ineffectual. Yes, it is easier to receive the comfort of more content company, those who have also reconciled, or those who have always remained ignorant of seeing our world as unacceptable. That is the reward for doublethink. We - the rememberers of forbidden things - must remember with all the other forbidden things this secret piece of knowledge: we slink into doublethink far more unconsciously than consciously. We do not necessarily get a clear choice. There may not be a point when we even become aware of the "choice" which an outsider to ourselves might be able to see we have "made." 
It is more likely simply unconscious forgetting, our relatively more unaware nervous processes adjusting to seek pleasure or comfort instead of pain or discomfort. With practice, this forgetting becomes second nature, and if it cannot offer us the really aware peaks and valleys of happiness possible in this modern world of ours only for those who become dissidents and rebels of some sort, then at least we have secured a foggy contentment, making livable the seemingly unbearable. No one is immune to this. No one is above this. Everyone appears to have the basic capacity for doublethink; as Orwell pointed out, even the ability to really understand doublethink requires its use. Nor are the most dominant themselves simply above doublethink because they take substantial advantage of it in others; as Orwell relates here regarding war, they are the most possessed by doublethink: It is precisely in the Inner Party that war hysteria and hatred of the enemy are strongest. In his capacity as an administrator, it is often necessary for a member of the Inner Party to know that this or that item of war news is untruthful, and he may often be aware that the entire war is spurious and is either not happening or is being waged for purposes quite other than the declared ones: but such knowledge is easily neutralized by the technique of doublethink. Even the most carefully resistive probably succumb to doublethink, to some slight extent, and in some ways. What can we do about it? Our only weapons against our own weaker instincts are these: one, to harden ourselves to become stronger in the face of mental discomfort, two, to find compatriots and companions who support us instead of those crowds who encourage conformity, and three, to actively try to remain aware of things we might forget - to fight for conscious appraisal instead of quasi-consciousness, shocking ourselves into looking from a different perspective when necessary. These are our three weapons against doublethink. From shovland at mindspring.com Mon Dec 26 17:21:31 2005 From: shovland at mindspring.com (Steve Hovland) Date: Mon, 26 Dec 2005 09:21:31 -0800 Subject: [Paleopsych] Epigenetics (We are software, not hardware :-)) Message-ID: The conventional wisdom on genes goes something like this: DNA is transcribed into RNA, which forms proteins, which are responsible for just about every process in the body, from eye color to the ability to fight off illness. But even as the finishing touches were being applied to the sequencing of the human genome (completed in April 2003), unaccountable anomalies kept creeping in, strangely reminiscent of the quarks and dark matter and sundry weird forces that keep muddying the waters of theoretical physics. Enter the science of epigenetics, which attempts to explain the mysterious inner layers of the genetic onion that may account for why identical twins aren't exactly identical and other conundrums, including why some people are predisposed to mental illness while others are not. Scientific American devotes a two-part article to the topic in its November and December 2003 issues. To summarize: Only two percent of our DNA - via RNA - codes for proteins. Until very recently, the rest was considered "junk," the byproduct of millions of years of evolution. Now scientists are discovering that some of this junk DNA switches on RNA that may do the work of proteins and interact with other genetic material. "Malfunctions in RNA-only genes," explains Scientific American, "can inflict serious damage."
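[A toy sketch, in Python, of the textbook first step described above - DNA transcribed into RNA. The sequence and names are invented for illustration; real molecular biology, as the rest of the post argues, is far messier.]

# Transcription, toy version: each base on the DNA template strand
# pairs with its RNA complement (T -> A, A -> U, G -> C, C -> G).
PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(dna_template):
    """Return the mRNA complementary to a DNA template strand."""
    return "".join(PAIRING[base] for base in dna_template)

print(transcribe("TACGGATCC"))  # prints AUGCCUAGG

[Epigenetics is about the layer this sketch leaves out: chemical marks, such as the methylation discussed below, that help decide whether a given stretch of DNA gets transcribed at all.]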
Epigenetics delves deeper into the onion, involving "information stored in the proteins and chemicals that surround and stick to DNA." Methylation is a chemical process that, among other things, aids in the transcription of DNA to RNA and is believed to defend the genome against parasitic genetic elements called transposons. A 2003 MIT study created mice with an inborn deficiency of a methylating enzyme. Eighty percent of these mice died of cancer within nine months. A five-year Human Epigenome Project to map all the DNA methyl sites was launched in October 2003 in the UK. A MedLine search of epigenetics and bipolar disorder revealed but two articles, such is the topic's novelty. Arturas Petronis MD, PhD, Head of the Krembil Family Epigenetics Laboratory at the University of Toronto, in an article in the Nov 2003 American Journal of Medical Genetics, fills in some of the blanks: We know that there is a high concordance of identical twins with bipolar disorder, but epigenetics, he explains, may account for the 30 to 70 percent of cases where only one twin has the illness. Identical twins share the same DNA, but their epigenetic material may be different. Moreover, whereas DNA variations are permanent, epigenetic changes are in a process of flux and generally accumulate over time. This may explain, Dr Petronis theorizes, why bipolar disorder tends to manifest at ages 20-30 and 45-50, which coincides with major hormonal changes, which may "substantially affect regulation of genes ... via their epigenetic modifications." The dynamics of epigenetic changes may also account for the fluctuating course of bipolar disorder, Dr Petronis speculates, perhaps more so than static DNA variations. Finally, the fact that epigenetic anomalies can be reversed makes them inviting targets for a new generation of meds, Scientific American points out. In a 2003 pilot study, Dr Petronis and his colleagues investigated the epigenetic gene modification in a section of the dopamine 2 receptor genes in two pairs of identical twins, one pair with both partners having schizophrenia and the other having only one partner with the illness. What they discovered was that the partner with schizophrenia from the mixed pair had more in common, epigenetically, with the other set of twins than with his own unaffected twin. This may be the first time you have heard of epigenetics. Clearly, it won't be the last. From checker at panix.com Tue Dec 27 01:28:08 2005 From: checker at panix.com (Premise Checker) Date: Mon, 26 Dec 2005 20:28:08 -0500 (EST) Subject: [Paleopsych] Economist: The proper study of mankind Message-ID: The proper study of mankind http://www.economist.com/surveys/PrinterFriendly.cfm?story_id=5299220 5.12.20 [All articles in this series on human evolution below.] New theories and techniques have revolutionised our understanding of humanity's past and present, says Geoffrey Carr SEVEN hundred and forty centuries ago, give or take a few, the skies darkened and the Earth caught a cold. Toba, a volcano in Sumatra, had exploded with the sort of eruptive force that convulses the planet only once every few million years. The skies stayed dark for six years, so much dust did the eruption throw into the atmosphere. It was a dismal time to be alive and, if Stanley Ambrose of the University of Illinois is right, the chances were you would be dead soon. In particular, the population of one species, known to modern science as Homo sapiens, plummeted to perhaps 2,000 individuals.
The proverbial Martian, looking at that darkened Earth, would probably have given long odds against these peculiar apes making much impact on the future. True, they had mastered the art of tool-making, but so had several of their contemporaries. True, too, their curious grunts allowed them to collaborate in surprisingly sophisticated ways. But those advantages came at a huge price, for their brains were voracious consumers of energy--a mere 2% of the body's tissue absorbing 20% of its food intake. An interesting evolutionary experiment, then, but surely a blind alley. This survey will attempt to explain why that mythical Martian would have been wrong. It will ask how these apes not only survived but prospered, until the time came when one of them could weave together strands of evidence from fields as disparate as geology and genetics, and conclude that his ancestors had gone through a genetic bottleneck caused by a geological catastrophe. Not all of his contemporaries agree with Dr Ambrose about Toba's effect on humanity. The eruption certainly happened, but there is less consensus about his suggestion that it helped form the basis for what are now known as humanity's racial divisions, by breaking Homo sapiens into small groups whose random physical quirks were preserved in different places. The idea is not, however, absurd. It is based on a piece of evolutionary theory called the founder effect, which shows how the isolation of small populations from larger ones can accelerate evolutionary change, because a small population's average characteristics are likely to differ from those of the larger group from which it is drawn. Like much evolutionary theory, this is just applied common sense. But only recently has such common sense been applied systematically to areas of anthropology that have traditionally ignored it and sometimes resisted it. The result, when combined with new techniques of genetic analysis, has been a revolution in the understanding of humanity's past. And anthropology is not the only human science to have been infused with evolutionary theory. Psychology, too, is undergoing a makeover and the result is a second revolution, this time in the understanding of humanity's present. Such understanding has been of two types, which often get confused. One is the realisation that many human activities, not all of them savoury, happen for exactly the same reasons as in other species. For example, altruistic behaviour towards relatives, infidelity, rape and murder are all widespread in the animal kingdom. All have their own evolutionary logic. No one argues that they are anything other than evolutionarily driven in species other than man. Yet it would be extraordinary if they were not so driven in man, because it would mean that natural selection had somehow contrived to wipe out their genetic underpinnings, only for them to re-emerge as culturally determined phenomena. Understanding this shared evolutionary history with other species is important; much foolishness has flowed from its denial. But what is far more intriguing is the progress made in understanding what makes humanity different from other species: friendship with non-relatives; the ability to conceive of what others are thinking, and act accordingly; the creation of an almost unimaginably diverse range of artefacts, some useful, some merely decorative; and perhaps most importantly, the use of language, which allows collaboration on a scale denied to other creatures. 
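[The founder effect described above is easy to watch in a toy simulation. A minimal sketch in Python, assuming a made-up gene variant carried by 5% of a large parent population and founder groups of 50 people; all numbers are invented for illustration.]

import random

random.seed(42)  # fixed seed so the illustration is repeatable

def founder_frequency(pop_freq, founders, trials=5):
    """Draw small founder groups from a large population and report
    the variant's frequency in each group."""
    return [sum(random.random() < pop_freq for _ in range(founders)) / founders
            for _ in range(trials)]

# Five founder groups of 50, all drawn from the same 5%-frequency population:
print(founder_frequency(0.05, 50))  # e.g. [0.02, 0.08, 0.04, 0.0, 0.06]

[With only 50 founders, the variant's frequency scatters widely from one group to the next - sometimes vanishing, sometimes nearly doubling - which is how an isolated group's random quirks can come to differ from the parent population's average.]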
There are, of course, gaps in both sets of explanations. And this field of research being a self-examination, there are also many controversies, not all driven by strictly scientific motives. But the outlines of a science of human evolution that can explain humanity's success, and also its continuing failings, are now in place. It is just a question of filling in the canvas--or the cave wall. http://www.economist.com/surveys/PrinterFriendly.cfm?story_id=5299209 The long march of everyman It all started in Africa "OUT of Africa, always something new", wrote Caius Plinius Secundus, a Roman polymath who helped to invent the field of natural history. Pliny wrote more truly than he could possibly have realised. For one fine day, somewhere between 85,000 and 60,000 years before he penned those words, something did put its foot over the line that modern geographers draw to separate Africa from Asia. And that something--or, rather, somebody--did indeed start something new, namely the peopling of the world. Writing the story of the spread of humanity is one of the triumphs of modern science, not least because the ink used to do it was so unexpected. Like students of other past life forms, researchers into humanity's prehistoric past started by looking in the rocks. The first fossilised human to be recognised as such was unearthed in 1856 in the Neander Valley near Düsseldorf in Germany. Neanderthal man, as this skeleton and its kin became known, is now seen as a cousin of modern humans rather than an ancestor, and subsequent digging has revealed a branching tree of humanity whose root can be traced back more than 4m years (see article). Searching for human fossils, though, is a frustrating exercise. For most of their existence, people were marginal creatures. Bones from periods prior to the invention of agriculture are therefore exceedingly rare. The resulting data vacuum was filled by speculation scarcely worthy of the name of theory, which seemed to change with every new discovery. Then, in the 1980s, a geneticist called Allan Wilson decided to redefine the meaning of the word "fossil". He did so in a way that instantly revealed another 6 billion specimens, for Wilson's method made a fossil out of every human alive. Living fossils In retrospect, Wilson's insight, like many of the best, is blindingly obvious. He knew, as any biologist would, that an organism's DNA carries a record of its evolutionary past. In principle, looking at similarities and differences in the DNA sequences of living organisms should allow a researcher to reconstruct the family tree linking those organisms. In practice, the sexual mixing that happens with each generation makes this tedious even with today's DNA-analysis techniques. With those available in the 1980s it would have been impossible. Wilson, however, realised he could cut through the problem by concentrating on an unusual type of DNA called mitochondrial DNA. Mitochondria are the parts of a cell that convert energy stored in sugar into a form that the rest of the cell can use. Most of a cell's genes are in its nucleus, but mitochondria, which are the descendants of bacteria that linked up with one of humanity's unicellular ancestors some 2 billion years ago, retain a few genes of their own. Mitochondrial genomes are easy to study for three reasons. First, they are small, which makes them simple to analyse. Second, mitochondria reproduce asexually, so any changes between the generations are caused by mutation rather than sexual mixing.
Third, in humans at least, mitochondria are inherited only from the mother. In 1987, Rebecca Cann, one of Wilson's students, applied his insight to a series of specimens taken from people whose ancestors came from different parts of the world. By analysing the mutational differences that had accumulated since their mitochondria shared a common ancestor, she was able to construct a matriline (or, perhaps more accurately, a matritree) connecting them. The result was a revelation. Whichever way you drew the tree (statistics not being an exact science, there was more than one solution), its root was in Africa. Homo sapiens was thus unveiled as an African species. But Dr Cann went further. Using estimates of how often mutations appear in mitochondrial DNA (the so-called molecular clock), she and Wilson did some matridendrochronology. The result suggests that all the lines converge on the ovaries of a single woman who lived some 150,000 years ago. There was much excited reporting at the time about the discovery and dating of this African "Eve". She was not, to be clear, the first female Homo sapiens. Fossil evidence suggests the species is at least 200,000 years old, and may be older than that. And you can now do a similar trick for the patriline using part of the male (Y) chromosome in the cell nucleus, because this passes only from father to son. Unfortunately for romantics, the most recent common ancestor of the Y-chromosome is a lot more recent than its mitochondrial equivalent. African Adam was born 60,000-90,000 years ago, and so could not have met African Eve. Nevertheless, these two pieces of DNA, as they have woven their way down the generations, have filled in, in surprising detail, the highways and byways of human migration across the face of the planet. Sons of Adam, daughters of Eve Detail, however, is not the same as consensus, and there are two schools of thought about how people left Africa in the first place. Appropriately, some of their main protagonists are at the rival English universities of Oxford and Cambridge. The Oxford school, championed by Stephen Oppenheimer, believes that the descendants of a single emigration some 85,000 years ago, across the strait of Bab el Mandeb at the southern end of the Red Sea, are responsible for populating the rest of the world. The Cambridge school, championed by Robert Foley and Marta Mirazón Lahr, agrees that there was, indeed, a migration across this strait, though probably nearer to 60,000 years ago. However, it argues that many non-Africans are the descendants of at least one subsequent exodus. Both schools agree that the Bab el Mandebites spread rapidly along the coast of southern Arabia and thence along the south coast of Asia to Australia, though Dr Oppenheimer has them turning inland, too, once they crossed the strait of Hormuz. But it is in describing what happened next that the two versions really part company, for it is here that the descendants of the Oxford migration run into the eruption of Toba. That Toba devastated South and South-East Asia is not in doubt. Thick layers of ash from the eruption have been found as far afield as northern Pakistan. The question is whether there were people in Asia at the time. One of the most important pieces of evidence for Dr Oppenheimer's version of events is some stone tools in the ash layer in Malaysia, which he thinks were made by Homo sapiens. Molecular clocks have a regrettable margin of error, but radioactive dating is a lot more accurate.
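[The clock arithmetic behind dates like these is simple, which is also why its error bars are wide. A back-of-envelope sketch in Python; every number below is invented for illustration, not taken from Cann and Wilson's data.]

# Molecular-clock dating, toy version: two lineages separated for t years
# each accumulate mutations independently, so differences ~ 2 * rate * sites * t.
differences = 30   # observed differences between two mitochondrial sequences
sites = 1000       # length of the compared stretch, in bases
rate = 1e-7        # assumed mutations per site per year along one lineage
t = differences / (2 * rate * sites)
print(t)  # 150000.0 years back to the common ancestor

[Halve or double the assumed mutation rate and the date doubles or halves with it - hence the "regrettable margin of error".]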
If he is right, modern humans must have left Africa before the eruption. The tools might, however, have been crafted by an earlier species of human that lived there before Homo sapiens. For Dr Oppenheimer, the eruption was a crucial event, dividing the nascent human population of Asia into two disconnected parts, which then recolonised the intermediate ground. In the Cambridge version, Homo sapiens was still confined to Africa 74,000 years ago, and would merely have suffered the equivalent of a nuclear winter, not an ash-fall of up to five metres--though Dr Ambrose and his colleagues think even that would have done the population no good. The Cambridge version is far more gentle. The descendants of its subsequent exodus expanded north-eastwards into central Asia, and thence scattered north, south, east and west--though in a spirit of open-mindedness, Sacha Jones, a research student in Dr Foley's department, is looking in the ash layer in India to see what she can find there. Which version is correct should eventually be determined by the Genographic Project, a huge DNA-sampling study organised by Spencer Wells, a geneticist, at the behest of America's National Geographic Society and IBM. But both already have a lot in common. Both, for example, agree that the Americas seem to have been colonised by at least two groups. The Cambridge school, though, argues that one of these is derived ultimately from the first Bab el Mandeb crossing while the other is descended from the later migrants. Both also agree that Europe received two waves of migration. The ancestors of the bulk of modern Europeans came via central Asia about 35,000 years ago, though some people in the Balkans and other parts of southern Europe trace their lines back to an earlier migration from the Middle East. But the spread of agriculture from its Middle Eastern cradle into the farthest reaches of Europe does not, as some researchers once thought, seem to have been accompanied by a mass movement of Middle Eastern farmers. The coming together of two groups of humans can be seen in modern India, too. In the south of the subcontinent, people have Y-chromosomes derived almost exclusively from what the Cambridge school would interpret as being northern folk (and the Oxford school as the western survivors of Toba). However, more than 20% of their mitochondria arrived in Asia with the first migration from Africa (or, according to taste, clung on along the south-eastern fringes of the ash plume). That discovery speaks volumes about what happened when the two groups met. It suggests that many modern south Indians are descended from southern-fringe women, but few from southern-fringe men--implying a comprehensive conquest of the southerners by the northerners, who won extra southern wives. This observation, in turn, helps explain why Y-chromosome Adam lived so much more recently than mitochondrial Eve. Displacement by conquest is one example of a more general phenomenon--that the number of offspring sired by individual males is more variable than the number born by individual females. This means that more males than females end up with no offspring at all. Male gene lines therefore die out faster than female ones, so those remaining are more likely, statistically, to converge in the recent past. Successful male gene lines, though, can be very successful indeed. Students of animal behaviour refer to the top male in a group as the "alpha". 
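[The statistical argument above - same average number of offspring, higher variance for males, hence faster extinction of male gene lines - can be checked with a toy branching-process simulation. A minimal sketch in Python; the two offspring distributions are invented, chosen only to share a mean of one child while differing in spread.]

import random

random.seed(7)  # fixed seed so the illustration is repeatable

def female_like():
    return random.choice([0, 1, 2])            # mean 1 child, low variance

def male_like():
    return random.choice([0, 0, 0, 1, 2, 3])   # mean 1 child, higher variance

def surviving_lines(offspring, lines=2000, generations=30):
    """Count how many independent gene lines still have living
    carriers after a number of generations."""
    survivors = 0
    for _ in range(lines):
        carriers = 1
        for _ in range(generations):
            carriers = min(sum(offspring() for _ in range(carriers)), 50)  # cap for speed
            if carriers == 0:
                break
        survivors += carriers > 0
    return survivors

print("low-variance lines left: ", surviving_lines(female_like))
print("high-variance lines left:", surviving_lines(male_like))

[In a run like this the higher-variance distribution ends up with roughly half as many surviving lines, so the lines that do survive trace back to fewer, more recent ancestors - the statistical reason Y-chromosome Adam post-dates mitochondrial Eve.]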
Such dominant animals keep the others under control and father a large proportion, if not all, of the group's offspring. One of the curiosities of modern life is that voters tend to elect alpha males to high office, and then affect surprise when they behave like alphas outside politics too. But in the days when alphas had to fight rather than scheme their way to the top, they tended to enjoy the sexual spoils more openly. And there were few males more alpha in their behaviour than Genghis Khan, a man reported to have had about 500 wives and concubines, not to mention the sexual opportunities that come with conquest. It is probably no coincidence, therefore, that one man in every 12 of those who live within the frontiers of what was once the Mongol empire (and, indeed, one in 200 of all men alive today) has a stretch of DNA on his Y-chromosome that dates back to the time and birthplace of the great Khan.

http://www.economist.com/surveys/PrinterFriendly.cfm?story_id=5299196

Meet the relatives

A large and diverse family

WHEN Homo sapiens emerged as a species, he was not alone. The world he entered was already peopled by giants and dwarfs, elves, trolls and pixies--in other words, creatures that looked humanlike, but were not the genuine article. Or, at least, not as genuine as Homo sapiens has come to believe himself to be. Like the story of Homo sapiens himself, the story of the whole human family begins in Africa. About 4.5m years ago, probably in response to a drying of the climate that caused forest cover in that continent to shrink, one species of great ape found itself pushed out into the savannah, an ecological niche not normally occupied by apes. Over the next 300,000 years these apes evolved an upright stance. No one knows for sure why, but one plausible explanation, advanced by Peter Wheeler of John Moores University in Liverpool, is that standing upright reduces exposure to sunlight. To an animal adapted to the forest's shade, the remorseless noonday sun of the savannah would have been a threat. Dr Wheeler's calculations suggest that walking upright decreases exposure at noon by a third compared with going on all fours, since less of the body's surface faces the overhead sun. Humanity, in the form of Australopithecus anamensis, had arrived. Australopithecines of various species lasted for over 3m years. But half-way through that period something interesting happened. One of them begat a species known to science variously as Homo rudolfensis and Homo habilis. All modern great apes make tools out of sticks and leaves to help them earn their living, and there is no reason to believe that this was not true of australopithecines. But, aided by hands that no longer needed to double as part-time feet, Homo habilis began to exploit a new and potent material that needs both precision and strength to work--stone. This provided its immediate descendants with a powerful technology, but also gave its distant descendants in human palaeontology laboratories an additional way of tracing their ancestry, for stone tools often survive where bones do not. Homo habilis's successor species, Homo erectus, did not bestride the globe in the way that his eventual descendant Homo sapiens did, but he certainly stuck his nose out of Africa. Indeed, the first fossil erectus discovered was in Java, in 1891, and the second one, several decades later, turned up in China, near Beijing. It was not until 1960 that erectus bones were found in Africa. Homo erectus is a frustrating species.
His tools are found all over the southern half of Eurasia, as well as in Africa. But China and Java aside, his bones are scarce outside Africa. There are two skullcaps from Georgia and half of one from India. He did, however, leave lots of descendants. Naming fossils is a game that beautifully illustrates Henry Kissinger's witticism about academic disputes being so bitter because the stakes are so low. The best definition of a species that biologists have been able to come up with is "a group of creatures capable of fertile interbreeding, given the chance", which clearly makes it hard to determine what species a particular fossil belongs to. Researchers therefore have to fall back on the physical characteristics of the bones they find. That allows endless scope for argument between so-called splitters, who seem to want to give a new name to every skull discovered, and lumpers, who like to be as inclusive as possible. Some splitters, for example, argue that the African version of Homo erectus should be called Homo ergaster. Whatever the niceties, it is clear that by 500,000 years ago, if not before, Homo erectus was breaking up into anatomically different populations. Splitters would like to turn the Georgia fossils, an early twig of the erectus tree, into Homo georgicus. There is also Homo rhodesiensis, found in southern Africa, Homo heidelbergensis from Europe, and a whole drawer's-worth of specimens known to some as Homo helmei and to others as archaic Homo sapiens. How little is really known, though, was thrown into sharp relief by the announcement just over a year ago that yet another species, Homo floresiensis, had been found. It was discovered on Java's nearish neighbour island, Flores. Finding a new species of human is always exciting, but what is particularly intriguing about Homo floresiensis is how small it was--barely a metre tall when fully grown. Perhaps inevitably, though to the disgust of its discoverers, Homo floresiensis became known to journalists as the hobbit, after J.R.R. Tolkien's fictional humanoid. Homo neanderthalensis, the descendant of Homo heidelbergensis, by contrast, was if not a giant then at least a troll. Though he stood five or ten centimetres shorter than a modern European Homo sapiens, the thickness of his bones suggests he was a lot heavier. Both Homo neanderthalensis and Homo floresiensis were certainly around when Homo sapiens left Africa--whichever version of that story turns out to be the correct one. There may also have been some lingering populations of other hominid species. That raises the intriguing question of what happened when these residents met the sapiens wave. Some researchers believe there was interbreeding, echoing the ideas of an older school of palaeoanthropology called multiregionalism. The multiregionalists thought either that pre-sapiens hominids were all a vast, interbreeding species that gradually evolved into sapiens everywhere, or, against all Darwinian logic, that Homo sapiens arose independently in several places by some unknown process of parallel evolution. As recently as 2002, Alan Templeton, then at Washington University in St Louis, claimed to have found a number of genetic trees whose roots were 400,000-800,000 years old, and yet which included non-Africans. That, if confirmed, would support multiregionalism. Meanwhile, John Relethford, of the State University of New York's campus at Oneonta, has criticised the conclusions of studies on mitochondrial DNA extracted from the bones of Neanderthals.
This does not resemble DNA from any known modern humans, which led the authors of the work to conclude there was no interbreeding. Dr Relethford points out that Neanderthal DNA brought into the sapiens population by interbreeding could subsequently have been lost by chance in the lottery of who does and who does not reproduce. Similar losses are known to have happened in Australia, where mitochondrial DNA from human fossils is absent from modern Australians. Most students of the field, though, think there was no interbreeding, full stop. Either Homo sapiens persecuted his cousins into extinction or, with his superior technology, he outhunted, outgathered and outbred them. The next question is where that technology--or, rather, the brainpower to invent and make it--came from.

http://www.economist.com/surveys/PrinterFriendly.cfm?story_id=5299185

If this is a man

Why it pays to be brainy

THANKS to Dr Cann and her successors, the story of how Homo sapiens spread throughout the world is getting clearer by the day. But why did it happen? What was it that gave the species its edge, and where did it come from? Here, the picture blurs. Until recently, it was common to speak of an Upper Palaeolithic revolution in human affairs--what Jared Diamond, of the University of California at Los Angeles, called the Great Leap Forward. Around 40,000 years ago, so the argument ran, humanity underwent a mental step-change. The main evidence for this was the luxuriant cave art that appeared in Europe shortly after this time. Palaeopsychologists see this art as evidence that the artists could manipulate abstract mental symbols--and so they surely could. But it is a false conclusion (though it was widely drawn before Dr Cann's work) that this mental power actually evolved in Europe. Since all humans can paint (some, admittedly, better than others), the mental ability to do so, if not the actual technique, must have emerged in Africa before the first emigrants left. Indeed, evidence of early artistic leanings in that continent has now turned up in the form of drilled beads made of shells and coral, and--more controversially--of stones that have abstract patterns scratched on to them and bear traces of pigment. That certainly pushes the revolution back a few tens of millennia. The oldest beads seem to date from 75,000 years ago, and an inspired piece of lateral thinking suggests that clothing appeared at about the same time. Mark Stoneking and his colleagues at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, applied the molecular-clock technique to human lice. They showed that head lice and body lice diverged 75,000 years ago. Since body lice live in clothing, and most other species of mammal support only one species of louse, the inference is that body lice evolved at the same time as clothes. That is an interesting coincidence, and some think it doubly interesting that it coincides with the eruption of Toba. It may be evidence of a shift of thought patterns of the sort that the Upper Palaeolithic revolutionaries propose. On the other hand, there are also signs of intellectual shifts predating this period. Sally McBrearty, of the University of Connecticut, and Alison Brooks, of George Washington University, have identified 14 traits, from making stone blades to painting images, which they think represent important conceptual advances.
Ten of them, including fishing, mining, engaging in long-distance trade and making bone tools, as well as painting and making beads, seem to be unique to modern Homo sapiens. However, four, including grinding pigments (for what purpose remains unknown, but probably body painting), stretch back into the debatable past of Homo helmei. Given the fragmentary nature of the evidence from Africa, which has not been explored with the same sort of archaeological fine-tooth comb as Europe, the speed of the emergence of modern behaviour is still debatable. One thing, however, that clearly played no part in distinguishing Homo sapiens from his hominid contemporaries was a bigger brain. Modern people do, indeed, have exceedingly large brains, measuring about 1,300 cm³. Other mammals that weigh roughly the same as human beings--sheep, for example--have brains with an average volume of 180 cm³. In general, there is a well-established relationship between body size and brain size that people very much do not fit. But as Dr Oppenheimer shows (see chart 2), most of this brain expansion happened early in human evolutionary history, in Homo habilis and Homo erectus. The brains of modern people are only about 6% larger than those of their immediate African predecessors. Perhaps more surprisingly, they are smaller than those of Neanderthals. There is no doubt that this early brain growth set the scene for what subsequently happened to Homo sapiens, but it does not explain the whole story, otherwise Homo erectus would have built cities and flown to the moon. Flying to the moon may, in fact, be an apt analogy. Just as a space rocket needs several stages to lift it into orbit, so the growth of human intelligence was probably a multi-stage process, with each booster having its own cause or causes. What those causes were, and when they operated, remains a matter of vehement academic dispute. But there are several plausible hypotheses. The most obvious idea--that being clever helps people to survive by learning about their surroundings and being able to solve practical problems--is actually the least favoured explanation, at least as the cause of the Great Leap Forward. But it was probably how intelligence got going in the pre-human primate past, and thus represented the first stage of the rocket. Many primates, monkeys in particular, are fruit-eaters. Eating fruit is mentally taxing in two ways. The first is that fruiting trees are patchily distributed in both space and time (though in the tropics, where almost all monkeys live, there are always trees in fruit somewhere). An individual tree will provide a bonanza, but you have to find it at the right moment. Animals with a good memory for which trees are where, and when they last came into fruit, are likely to do better than those who rely on chance. Also, fruit (which are a rare example of something that actually wants to be eaten, so that the seeds inside will be scattered) signal to their consumers when they are ready to munch by changing colour. It is probably no coincidence, therefore, that primates have better colour vision than most other mammals. But that, too, is heavy on the brain. The size of the visual cortex in a monkey brain helps to explain why monkeys have larger brains than their weight seems to warrant. The intelligence rocket's second stage was almost certainly a way of dealing with the groups that fruit-eating brought into existence.
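Before following the rocket into its second stage, it is worth putting rough numbers on that brain-to-body relationship. One standard formulation is Jerison's encephalization quotient, in which expected brain mass scales as roughly the two-thirds power of body mass; the 0.12 coefficient and the sample masses below are textbook approximations rather than figures from this article.

# Encephalization quotient: actual brain mass divided by the mass
# expected for a typical mammal of the same body mass.

def expected_brain_g(body_g):
    return 0.12 * body_g ** (2 / 3)      # Jerison's mammalian baseline

def eq(brain_g, body_g):
    return brain_g / expected_brain_g(body_g)

for species, brain_g, body_g in [("human", 1300, 65000), ("sheep", 180, 65000)]:
    print(f"{species}: EQ = {eq(brain_g, body_g):.1f}")
# human: EQ = 6.7 -- several times the mammalian expectation
# sheep: EQ = 0.9 -- almost exactly what its body size predicts

On that yardstick people are spectacular outliers and sheep are not--and, as the text notes, most of the human excess was already in place in Homo habilis and Homo erectus. Now, back to the groups that fruit-eating created.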
Because trees in the tropics come into fruit at random, an animal needs a lot of fruit trees in its range if it is to avoid starving. Such a large range is difficult for a lone animal to defend. On the other hand, a tree in fruit can feed a whole troop. For both these reasons, fruit-eating primates tend to live in groups. But if you have to live in a group, you might as well make the most of it. That means avoiding conflict with your rivals and collaborating with your friends--which, in turn, means keeping track of your fellow critters to know who is your enemy and who your ally. That, in turn, demands a lot of brain power. One of the leading proponents of this sort of explanation for intelligent minds is Robin Dunbar, of Liverpool University in England. A few years ago, he showed that the size of a primate's brain, adjusted for the size of its body, is directly related to the size of group it lives in. (Subsequent work has shown that the same relationship holds true for other social mammals, such as wolves and their kin.) Humans, with the biggest brain/body ratio of all, tend to live in groups of about 150. That is the size of a clan of hunter-gatherers. Although the members of such a clan meet only from time to time, since individual families forage separately, they all agree on who they are. Indeed, as Dr Dunbar and several other researchers have noticed, many organisations in the modern world, such as villages and infantry companies, are about this size. Living in collaborative groups certainly brings advantages, and those may well offset the expense of growing and maintaining a large brain. But even more advantage can be gained if an animal can manipulate the behaviour of others, a phenomenon dubbed Machiavellian intelligence by Andrew Whiten and Richard Byrne, of the University of St Andrews in Scotland.

Size isn't everything

Monkeys and apes manage this to a certain extent. They seem to have a limited "theory of mind"--the ability to work out what others are thinking--which is an obvious prerequisite for the would-be simian politician. They also engage in behaviour which, to the cynical human zoologist, looks suspiciously like lying. But it is those two words, "cynical" and "suspiciously", that give the game away. For it is humans themselves, with their ability to ponder not only what others are thinking, but also what those others are thinking about them, who are the past masters of such manipulation. And it is here that the question of language enters the equation. Truly Machiavellian manipulation is impossible without it. And despite claims for talking chimpanzees, parrots and dolphins, real language--the sort with complex grammar and syntax--is unique to Homo sapiens. Dr Dunbar's hypothesis is that language arose as a substitute for the physical grooming that other group-living primates use to maintain bonds of friendship. Conversation--or gossip, as he refers to it--certainly does seem to have the same bond-forming role as grooming. And, crucially for the theory, groups rather than just pairs can "groom" each other this way. Dr Dunbar sees the 150-strong group size of Homo sapiens as both a consequence and a cause of verbal grooming, with large groups stimulating the emergence of language, and language then permitting the emergence of larger groups still. Language, therefore, is the result of a process of positive feedback. Once established, it can be deployed for secondary purposes.
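Dr Dunbar's brain/group-size relationship is a regression, and it can be sketched in a few lines. The coefficients below are the ones commonly quoted from his 1992 paper, and the human neocortex ratio of about 4.1 is likewise a standard figure; treat both as assumptions rather than numbers given in this article.

import math

# Dunbar's regression: mean group size predicted from the neocortex
# ratio (neocortex volume divided by the volume of the rest of the brain).

def predicted_group_size(neocortex_ratio):
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

print(round(predicted_group_size(4.1)))   # ~148: the famous figure of about 150

Feeding in the human neocortex ratio returns a group of roughly 150, the clan size mentioned above. So much for how language may have arisen; on, then, to its secondary purposes.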
Furthering the Machiavellian ends outlined by Dr Whiten and Dr Byrne would be one such purpose, and this would drive other feedback loops as people evolve more and more elaborate theories of mind in order to manipulate and avoid manipulation. But language would also promote collaborative activities such as trade and the construction of sophisticated artefacts by allowing specialisation and division of labour. Not everyone agrees with the details of this thesis, but the idea that the evolution of mental powers such as language has been driven by two-way feedback loops rather than one-way responses to the environment is a powerful one. Terrence Deacon, a researcher at the University of California at Berkeley, for instance, thinks that language evolved in a feedback loop with the complex culture that it allowed humans to create. Changes in culture alter and complicate the environment. Natural selection causes evolutionary changes that give people the means to exploit their new, more complex circumstances. That makes the cultural environment still more complicated. And so on. Dr Deacon believes this process has driven the capacity for abstract thought that accounts for much of what is referred to as intelligence. He sees it building up gradually in early hominids, and then taking off spectacularly in Homo sapiens. The peacock mind Perhaps the most intriguing hypothesis about the last stage of the mental-evolution rocket, though, is an idea dreamed up by Geoffrey Miller, of the University of New Mexico. He thinks that the human mind is like a peacock's tail, a luxuriant demonstration of its owner's genetic fitness. At first sight this idea seems extraordinary, but closer examination suggests it is disturbingly plausible. Lots of features displayed by animals are there to show off to the opposite sex. Again, this involves a feedback loop. As the feature becomes more pronounced, the judge becomes more demanding until the cost to the displayer balances the average reproductive benefit. Frequently, only one sex (usually the male) does the showing off. That makes the sexually selected feature obvious, because it is absent in the other sex. Dr Miller, though, argues that biologists have underplayed the extent to which females show off to males, particularly in species such as songbirds where the male plays a big part in raising the young, and so needs to be choosy about whom he sets up home with. Like male birds, male humans are heavily involved in childrearing, so if the mind is an organ for showing off, both sexes would be expected to possess it--and be attracted by it--in more or less equal measure. Dr Miller suggests that many human mental attributes evolved this way--rather too many, according to some of his critics, who think that he has taken an interesting idea to implausible extremes. But sexual selection does provide a satisfying explanation for such otherwise perplexing activities as painting, carving, singing and dancing. On the surface, all of these things look like useless dissipations of energy. All, however, serve to demonstrate physical and mental prowess in ways that are easy to see and hard to fake--precisely the properties, in fact, that are characteristic of sexually selected features. Indeed, a little introspection may suggest to the reader that he or she has, from time to time, done some of these things to show off to a desirable sexual partner. Crucially, language, too, may have been driven by sexual selection. 
No doubt Machiavelli played his part: rhetoric is a powerful political skill. But seduction relies on language as well, and encourages some of the most florid speech of all. Nor, in Dr Miller's view of the world, is the ability to make useful things exempt from sexual selection. Well-made artefacts as much as artful decorations indicate good hand-eye co-ordination and imagination. Whether Dr Miller's mental peacock tails have an underlying unity is unclear. It could be the ability to process symbols; or it could be that several different abilities have evolved independently under a single evolutionary pressure--the scrutiny of the opposite sex. Or it could be that sexual selection is not the reason after all, or at least not the main part of it. But it provides a plausible explanation for modern humanity's failure to interbreed with its Neanderthal contemporaries, whether or not such unions would have been fertile: they just didn't fancy them.

http://www.economist.com/surveys/PrinterFriendly.cfm?story_id=5299176

The concrete savannah

Evolution and the modern world

THE eruption of Toba marked the beginning rather than the end of hostilities between Homo sapiens and the climate. Views differ about whether the eruption was the trigger, but it is clear that an ice age started shortly afterwards. Though the species spread throughout Asia, Australia and Europe (the populating of the Americas is believed by most researchers to have happened after the ice began to retreat, although not everybody agrees), it was constrained by ecological circumstances in much the same way as any other animal. The world's population 10,000 years ago was probably about 5m--a long way from the imperial 6-billion-strong species that bestrides the globe today. The killer application that led to humanity's rise is easy to identify. It is agriculture. When the glaciers began to melt and the climate to improve, several groups learned how to grow crops and domesticate animals. Once they had done that, there was no going back. Agriculture enabled man to shape his environment in a way no species had done before. In truth, agriculture turned out to be a Faustian bargain. Both modern and fossil evidence suggests that hunter-gatherers led longer, healthier and more leisured lives than did farmers until less than a century ago. But farmers have numbers on their side. And numbers beget numbers, which in turn beget cities. The path from Çatalhöyük in Anatolia, the oldest known town, to the streets of Manhattan is but a short one, and the lives of people today, no matter how urbane and civilised, are shaped in large measure by the necessities of their evolutionary past. That fact has, however, only recently begun to be widely recognised. For many years, psychology, like anthropology, operated in a strange intellectual vacuum. Psychologists did not deny man's evolutionary past, but they did not truly acknowledge it, either. Many in the field seemed to feel that humanity had somehow transcended evolution. Indeed, those of a Marxist inclination more or less required that to be true. How else could people be perfectible? Dissenters were usually treated with disdain. But, at about the time that Dr Cann was publishing the work that would expose the fallacy of multiregionalism, a group who dubbed themselves "evolutionary psychologists" began to stick their heads above the academic parapets.

Eve's psyche

Studying the behaviour of humans is more difficult than studying that of other animals, for two reasons.
One is that the students come from the same species as the studied, which both reduces their objectivity and causes them to take certain things for granted, or even fail to notice them altogether. The other is that human culture is, indeed, far more complex than the cultures of other species. There is nothing wrong with studying that culture, of course. It is endlessly fascinating. But it is wrong to assume that it is the cause of human nature, rather than a consequence; that is akin to mistaking the decorative finishes of a building for the underlying civil engineering. The aim of evolutionary psychology is to try to detect the Darwinian fabric through the cultural decoration, by asking basic questions. Many of those questions, naturally, address sensitive issues of sex and violence--another reason evolutionary psychologists are not universally popular. David Buss, of the University of Texas, demonstrated experimentally what most people know intuitively--that women value high status in a mate in a way that men do not. Helen Fisher, of Rutgers University, has dissected the evolutionary factors that cause marriages to succeed or fail. She thinks, for example, that the tendency of females to prefer high-status mates is at odds with the increasing economic independence of women in the modern world. Laura Betzig, of the University of Michigan, put an explicitly Darwinian spin on the tendency of powerful men to accumulate harems. Randy Thornhill, of the University of New Mexico, has shown that physical beauty is far from being in the eye of the beholder. In fact, those features rated beautiful, most notably bodily symmetry, are good predictors of healthy, desirable attributes such as strong immune systems--in other words, aesthetic sensibilities have evolutionary roots. Karl Grammer, of the Ludwig Boltzmann Institute of Urban Ethology, in Vienna, has shown that body odour, too, is correlated with symmetry and linked to immunological strength. Dr Thornhill, meanwhile, has raised quite a few hackles by arguing that a propensity to rape is an evolved characteristic of men rather than a pathology. Even murder has not escaped the attention of the evolutionary psychologists. Martin Daly and Margo Wilson, of McMaster University in Hamilton, Ontario, showed that adults are far more likely to kill their stepchildren than their biological children--a fact that had escaped both police forces and sociologists around the world. They then dared to propose a Darwinian explanation for this, namely that step-parents have no direct interest (in the evolutionary sense) in the welfare of stepchildren. However, something similar to this list of human behaviours that have yielded to evolutionary psychology could be found in many species. Indeed, it was often comparisons with other species that sparked the investigations in the first place. The males of many other species gather harems, but females rarely do so; female swallows prefer their mates to have symmetrical tails and they are also more faithful to high-status males; both male lions and male baboons kill the infants of females in groups they have just taken over; and so on. Where evolutionary explanations of behaviour become really interesting is when they home in on what is unique to humanity. Playing games with the truth One uniquely human characteristic is the playing of games with formal rules. Evolutionary psychology has not yet sought to explain this, but it has exploited it extensively to develop and test its ideas. 
In their different ways, the games devised by Leda Cosmides and John Tooby, of the University of California at Santa Barbara, and Robert Axelrod, of the University of Michigan, have underpinned that part of evolutionary psychology devoted to uniquely human behaviour. For not all games are about competition. Many also require trust, a sense of justice and sometimes self-denial. Cases of animals apparently making sacrifices, occasionally of their own lives, to help others are not rare in nature, but at first sight they are surprising. What is in it for the sacrificer? The usual answer, worked out in the 1960s by William Hamilton, is that the beneficiary is a relative whose reproductive output serves to carry genes found in the sacrificer into the next generation, albeit at one remove. Translated into human terms, this is good old-fashioned nepotism. In a few species, though--mankind being the most obvious--people will make sacrifices for non-relatives, or "friends". The assumption is that the favour will be paid back at some time in the future. The question is, how can the sacrificer be sure that will happen? Dr Axelrod used a branch of maths called game theory to come up with at least part of the answer. He showed mathematically that as long as you can recognise and remember your fellow creatures, it makes sense to follow the proverb "fool me once, shame on you; fool me twice, shame on me" and trust them provided they don't cheat you. (Sometimes in science it is necessary to prove the obvious before you go on to the less obvious.) Dr Cosmides and Dr Tooby used a different sort of game to show that humans are thus, as Dr Axelrod's model suggests they should be, acutely sensitive to unfair treatment. They did this by presenting some problems of formal logic to their experimental subjects as a card game. When the problems were presented using cards with letters and numbers on opposite faces, and the subjects had to work out which cards needed to be turned over to yield the required answers, people found them hard to do and more often than not got them wrong. However, when the problems were presented in a form that required the subjects to decide whether people were being treated fairly or not, they found them really easy. The researchers' conclusion is that humans are hard-wired not for logic but for detecting injustice. Trust, and the detection and punishment of injustice, lie at the heart of human society. They are so important that people will actually harm their own short-term interests to punish those they regard as behaving unfairly. Another game, for example, involves two people dividing a sum of money ($100, say). One makes the division and the other accepts or rejects it. If it is rejected, neither player gets any money. On the face of it, even a 99:1 division should be accepted, since the second player will be one dollar better off. In practice, though, few people will accept less than a 70:30 split. They will prefer to punish the divider's greed rather than take a small benefit themselves. This makes no sense in a one-off transaction, but makes every sense if the two participants are likely to deal with each other repeatedly. And that, before the agricultural population boom (and also, for the most part, after it) was the normal state of affairs. The people an individual dealt with routinely would have been the members of his circle of 150. Strangers would have been admitted to this circle only after prolonged vetting. 
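Dr Axelrod's result is easy to reproduce in miniature. The sketch below is illustrative, not his actual tournament code: the payoffs are the standard values from his iterated prisoner's dilemma tournaments, the strategies are the textbook ones, and the round count is arbitrary.

# A minimal iterated prisoner's dilemma. "C" is cooperate, "D" defect;
# payoffs are (my score, your score) for each pair of moves.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first; then copy the partner's last move --
    'fool me twice, shame on me'."""
    return history[-1] if history else "C"

def always_defect(history):
    return "D"

def play(a, b, rounds=10):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(hb), b(ha)     # each strategy sees the other's past moves
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb); sa += pa; sb += pb
    return sa, sb

print(play(tit_for_tat, tit_for_tat))     # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))   # (9, 14): cheated once, then never again

Against a fellow reciprocator, tit-for-tat sustains cooperation indefinitely; against a cheat, it loses once and then withholds trust thereafter. Repeated dealings within a circle of familiar partners are exactly the conditions under which such trust can grow.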
Such bonds of trust, described by Matt Ridley, a science writer, as "the origins of virtue" in his book of that name, underlie the exchanges of goods and services that are the basis of economics. They may also, though, underlie another sensitive subject that social scientists do not like biologists treading on: race. Robert Kurzban, a colleague of Dr Cosmides and Dr Tooby, took the racial bull by the horns by reversing the old saw about beauty. Dr Thornhill's work overturned the folk wisdom that beauty is in the beholder's eye by showing that universal standards of beauty have evolved, and there are good reasons for them. Dr Kurzban, by contrast, thinks he has shown that race really does exist only in the eye--or, rather, the mind--of the beholder, not the biology of the person being beheld, and does so for good Darwinian reasons. First impressions count Dr Kurzban observes that the three criteria on which people routinely, and often prejudicially, assess each other are sex, age and race. Judgments based on sex and age make Darwinian sense, because people have evolved in a context where these things matter. But until long-distance transport was invented, few people would have come across members of other races. Dr Kurzban believes that perceptions of racial difference are caused by the overstimulation of what might be called an "otherness detector" in the human mind. This is there to sort genuine strangers, who will need to work hard to prove they are trustworthy, from those who are merely unfamiliar members of the clan. It will latch on to anything unusual and obvious--and there is little that is more obvious than skin colour. But other things, such as an odd accent, will do equally well. Indeed, Dr Dunbar thinks that the speed with which accents evolve demonstrates that they are used in precisely this sort of way. If Dr Kurzban is right (and experiments he has done suggest that assessments of allegiance are easily "rebadged" away from skin colour by recognisable tokens such as coloured T-shirts, as any sports fan could probably have told him), it explains why race-perception is such a powerful social force, even though geneticists have failed to find anything in humans that would pass muster as geographical races in any other species. In fact, one of the striking things about Homo sapiens compared with, say, the chimpanzee is the genetic uniformity of the species. The only "racial" difference that has a well-established function is skin colour. This balances the need to protect the skin from damage by ultraviolet light (which requires melanin, the pigment that makes skin dark) and the need to make vitamin D (which results from the action of sunlight on a chemical in the skin). This explains dark, opaque skins in the tropics and light, transparent ones nearer the poles. The test is that dark-skinned arctic dwellers, such as the Inuit of North America, have diets rich in vitamin D, and so do not need to make it internally. As to other physical differences, they may be the result of founder effects, as described by Dr Ambrose, or possibly of sexual selection, which can sometimes pick up and amplify arbitrary features. Darwinian thinking can lead in other unexpected directions, too. Pursue Dr Buss's observation about women preferring high-status males to its logical conclusion, and you have a plausible explanation for the open-endedness of economic growth. 
Psychologists of a non-evolutionary bent sometimes profess themselves puzzled by the fact that once societies leave penury behind (the cited income level varies, but $10,000 per person per year seems about the mark), they do not seem to get happier as they get richer. That may be because incomes above a certain level are as much about status as about material well-being. Particularly if you are a man, status buys the best mates, and frequently more of them. But status is always relative. It does not matter how much you earn if the rest of your clan earn more. People (and men, in particular) are always looking for ways to enhance their status--and a good income is an excellent way of doing so. Aristotle Onassis, a man who knew a thing or two about both wealth and women, once said: "If women didn't exist, all the money in the world would have no meaning." Perhaps the founding father of economics is not really Adam Smith, who merely explained how to get rich, but Charles Darwin, who helped to explain why.

http://www.economist.com/surveys/PrinterFriendly.cfm?story_id=5299160

Starchild

Evolution is still continuing

WHAT, then, of the future? Sitting in the comfort of the concrete savannah, has humanity stopped evolving? To help answer that question, it is instructive to look at a paper published earlier this year by Gregory Cochran. Dr Cochran, a scientist who, in the tradition of Darwin himself, works independently of an academic institution, looked at the unusual neurological illnesses commonly suffered by Ashkenazi Jews. Traditional wisdom has it that these diseases, which are caused by faulty genes, are a consequence of inbreeding in a small, closed population. The fact that they persist is said to show that human evolution has stopped in our ever more mollycoddled and medicalised world. Dr Cochran begged not only to differ, but to draw precisely the opposite conclusion. He sees these diseases as evidence of very recent evolution. Until a century or two ago, the Ashkenazim--the Jews of Europe--were often restricted by local laws to professions such as banking, which happened to require high intelligence. This is the sort of culturally created pressure that might drive one of Dr Deacon's feedback loops for mental abilities (though it must be said that Dr Deacon himself is sceptical about this example). Dr Cochran, however, suspects that this is exactly what happened. He thinks the changes in the brain brought about by the genes in question will be shown to enhance intelligence when only one copy of a given disease gene is present (you generally need two copies, one from each parent, to suffer the adverse symptoms). Indeed, in the case of Gaucher's disease, which is not necessarily lethal, there is evidence that sufferers are more intelligent than average. If Ashkenazi Jews need to be more intelligent than others, such genes will spread, even if they sometimes cause disease. The fact is, you can't stop evolution. Those who argue the opposite, pointing to the survival thanks to modern medicine of people who would previously have died, are guilty of more than just gross insensitivity. They have tumbled into an intellectual pitfall that has claimed many victims since Darwin first published his theory. Evolution is not about progress. It is about adaptation. Sometimes adaptation and progress are the same. Sometimes they are the opposite. (Ask a tapeworm, which has "degenerated" into a mere egg-laying machine by a very successful process of adaptation.)
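The logic of Dr Cochran's suggestion is the textbook population-genetics mechanism of heterozygote advantage, under which a gene that harms those carrying two copies can persist indefinitely if one copy confers a benefit. The sketch below illustrates the standard model; its fitness values are invented for the example and are not Dr Cochran's estimates.

# Balancing selection: allele frequency under heterozygote advantage.
# Assumed genotype fitnesses: AA = 0.98, Aa = 1.00, aa = 0.50 --
# one copy of allele a helps, two copies cause disease.

def next_q(q, w_AA=0.98, w_Aa=1.00, w_aa=0.50):
    """One generation of selection on the frequency q of allele a,
    assuming random mating (Hardy-Weinberg genotype proportions)."""
    p = 1 - q
    w_mean = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa
    return (p*q*w_Aa + q*q*w_aa) / w_mean

q = 0.001                        # the allele starts out rare
for _ in range(2000):
    q = next_q(q)

s_AA, s_aa = 1 - 0.98, 1 - 0.50  # selection against each homozygote
print(f"simulated equilibrium: {q:.3f}")
print(f"analytic s_AA/(s_AA + s_aa): {s_AA/(s_AA + s_aa):.3f}")

The allele climbs from rarity and settles at a few per cent of the gene pool: common enough to produce a steady trickle of double-copy disease, yet maintained by selection rather than eliminated by it.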
If a mutation provides a better adaptation, as Dr Cochran thinks these disease genes did in financiers, it will spread. Given the changes that humanity has created in its own habitat, it seems unlikely that natural selection has come to a halt. If Dr Deacon is right, it may even be accelerating as cultural change speeds up, although the current rapid growth in the human population will disguise that for a while, because selection works best in a static population.

The next big thing

Evolution, then, has not stopped. Indeed, it might be about to get an artificial helping hand in the form of genetic engineering. For the fallacy of evolutionary progress has deep psychological roots, and those roots lie in Dr Miller's peacock-tail version of events. The ultimate driver of sexual selection is the need to produce offspring who will be better than the competition, and will thus be selected by desirable sexual partners. Parents know what traits are required. They include high intelligence and a handful of physical characteristics, some of which are universal and some of which vary according to race. That is why, once the idea of eliminating disease genes has been aired, every popular discussion on genetic engineering and cloning seems to get bogged down in intelligence, height and (in the West) fair hair and blue eyes. This search for genetic perfection has an old and dishonourable history, of course, starting with the eugenic movement of the 19th century and ending in the Nazi concentration camps of the 20th, where millions of the confrères of Dr Cochran's subjects were sent to their deaths. With luck, the self-knowledge that understanding humanity's evolution brings will help avert such perversions in the future. And if genetic engineering can be done in a way that does not harm the recipient, it would not make sense to ban it in a liberal society. But the impulse behind it will not go away because, progressive or not, it is certainly adaptive. Theodosius Dobzhansky, one of the founders of genetics, once said that "nothing in biology makes sense except in the light of evolution". And that is true even of humanity's desire to take control of the process itself.

http://www.economist.com/surveys/PrinterFriendly.cfm?story_id=5299169

Acknowledgment and sources

The author would like to acknowledge the help of the numerous researchers named in the text. The following books and website may be of interest to readers who wish to learn more about the subject.

"Out of Eden", by Stephen Oppenheimer (Constable and Robinson, paperback)
"The Journey of Man", by Spencer Wells (Penguin, paperback)
"From Lucy to Language", by Donald Johanson and Blake Edgar (Weidenfeld and Nicolson)
"Grooming, Gossip and the Evolution of Language", by Robin Dunbar (Faber and Faber, paperback)
"The Symbolic Species", by Terrence Deacon (W.W. Norton)
"The Mating Mind", by Geoffrey Miller (William Heinemann)
"The Origins of Virtue", by Matt Ridley (Penguin, paperback)

http://www.nationalgeographic.com/genographic

From shovland at mindspring.com Mon Dec 26 18:55:14 2005
From: shovland at mindspring.com (Steve Hovland)
Date: Mon, 26 Dec 2005 10:55:14 -0800
Subject: [Paleopsych] Bush's biggest blunder in 2005
Message-ID:

Waking up.

From shovland at mindspring.com Mon Dec 26 23:54:29 2005
From: shovland at mindspring.com (Steve Hovland)
Date: Mon, 26 Dec 2005 15:54:29 -0800
Subject: [Paleopsych] Japanese Baby
Message-ID:

A commercial worth watching
From checker at panix.com Tue Dec 27 23:01:52 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 27 Dec 2005 18:01:52 -0500 (EST)
Subject: [Paleopsych] BBS: Boden, Margaret A. (1994). Precis of The creative mind
Message-ID:

BBS: Boden, Margaret A. (1994). Precis of The creative mind
http://www.bbsonline.org/documents/a/00/00/04/34/bbs00000434-00/bbs.boden.html

Below is the unedited preprint (not a quotable final draft) of: Boden, Margaret A. (1994). Precis of The creative mind: Myths and mechanisms. Behavioral and Brain Sciences 17 (3): 519-570. The final published draft of the target article, commentaries and Author's Response are currently available only in paper.

_________________________________________________________________

Precis of "THE CREATIVE MIND: MYTHS AND MECHANISMS"
London: Weidenfeld & Nicolson 1990 (Expanded edn., London: Abacus, 1991.)

Margaret A. Boden
School of Cognitive and Computing Sciences
University of Sussex
England
FAX: 0273-671320
maggieb at syma.susx.ac.uk

Keywords: creativity, intuition, discovery, association, induction, representation, unpredictability, artificial intelligence, computer music, story-writing, computer art, Turing test

Abstract

What is creativity? One new idea may be creative, while another is merely new: what's the difference? And how is creativity possible? -- These questions about human creativity can be answered, at least in outline, using computational concepts. There are two broad types of creativity, improbabilist and impossibilist. Improbabilist creativity involves (positively valued) novel combinations of familiar ideas. A deeper type involves METCS: the mapping, exploration, and transformation of conceptual spaces. It is impossibilist, in that ideas may be generated which -- with respect to the particular conceptual space concerned -- could not have been generated before. (They are made possible by some transformation of the space.) The more clearly conceptual spaces can be defined, the better we can identify creative ideas. Defining conceptual spaces is done by musicologists, literary critics, and historians of art and science. Humanist studies, rich in intuitive subtleties, can be complemented by the comparative rigour of a computational approach. Computational modelling can help to define a space, and to show how it may be mapped, explored, and transformed. Impossibilist creativity can be thought of in "classical" AI-terms, whereas connectionism illuminates improbabilist creativity. Most AI-models of creativity can only explore spaces, not transform them, because they have no self-reflexive maps enabling them to change their own rules. A few, however, can do so. A scientific understanding of creativity does not destroy our wonder at it, nor make creative ideas predictable. Demystification does not imply dehumanization.
_________________________________________________________________

Chapter 1: The Mystery of Creativity

Creativity surrounds us on all sides: from composers to chemists, cartoonists to choreographers. But creativity is a puzzle, a paradox, some say a mystery. Inventors, scientists, and artists rarely know how their original ideas arise. They mention intuition, but cannot say how it works. Most psychologists cannot tell us much about it, either. What's more, many people assume that there will never be a scientific theory of creativity -- for how could science possibly explain fundamental novelties? As if all this were not daunting enough, the apparent unpredictability of creativity seems (to many people) to outlaw any systematic explanation, whether scientific or historical. Why does creativity seem so mysterious? Artists and scientists typically have their creative ideas unexpectedly, with little if any conscious awareness of how they arose. But the same applies to much of our vision, language, and common-sense reasoning. Psychology includes many theories about unconscious processes. Creativity is mysterious for another reason: the very concept is seemingly paradoxical. If we take seriously the dictionary-definition of creation, "to bring into being or form out of nothing", creativity seems to be not only beyond any scientific understanding, but even impossible. It is hardly surprising, then, that some people have "explained" it in terms of divine inspiration, and many others in terms of some romantic intuition, or insight. From the psychologist's point of view, however, "intuition" is the name not of an answer, but of a question. How does intuition work? In this book, I argue that these matters can be better understood, and some of these questions answered, with the help of computational concepts. This claim in itself may strike some readers as absurd, since computers are usually assumed to have nothing to do with creativity. Ada Lovelace is often quoted in this regard: "The Analytical Engine has no pretensions whatever to originate anything. It can do [only] whatever we know how to order it to perform." If this is taken to mean that a computer can do only what its program enables it to do, it is of course correct. But it does not follow that there can be no interesting relations between creativity and computers. We must distinguish four different questions, which are often confused with each other. I call them Lovelace questions, and state them as follows:

(1) Can computational concepts help us to understand human creativity?

(2) Could a computer, now or in the future, appear to be creative?

(3) Could a computer, now or in the future, appear to recognize creativity?

(4) Could a computer, however impressive its performance, really be creative?

The first three of these are empirical, scientific, questions. In Chapters 3-10, I argue that the answer to each of them is "Yes". (The first Lovelace question is discussed in each of those chapters; in chapters 7-8, the second and third are considered also.) The fourth Lovelace question is not a scientific enquiry, but a philosophical one. (More accurately, it is a mix of three complex, and highly controversial, philosophical problems.) I discuss it in Chapter 11. However, one may answer "Yes" to the first three Lovelace questions without necessarily doing so for the fourth. Consequently, the fourth Lovelace question is ignored in the main body of the book, which is concerned rather with the first three Lovelace questions.
Chapter 2: The Story so Far

This chapter draws on some of the previous literature on creativity. But it is not a survey. Its aim is to introduce the main psychological questions, and some of the historical examples, addressed in detail later in the book. The main writers mentioned are Poincare (1982), Hadamard (1954), Koestler (1975), and Perkins (1981). Among the points of interest in Poincare's work are his views on associative memory. He described our ideas as "something like the hooked atoms of Epicurus," flashing in every direction like "a swarm of gnats, or the molecules of gas in the kinematic theory of gases". He was well aware that how the relevant ideas are aroused, and how they are joined together, are questions which he could not answer in detail. Another interesting aspect of Poincare's approach is his distinction between four "phases" of creativity, some conscious, some unconscious. These four phases were later named (by Hadamard) as preparation, incubation, inspiration and verification (evaluation). Hadamard, besides taking up Poincare's fourfold distinction, spoke of finding problem-solutions "quite different" from any he had previously tried. If (as Poincare had claimed) the gnat-like ideas were only "those from which we might reasonably expect the desired solution", then how could such a thing happen? Perkins has studied the four phases, and criticizes some of the assumptions made by Poincare and Hadamard. In addition, he criticizes the romantic notion that creativity is due to some special gift. Instead, he argues that "insight" involves everyday psychological capacities, such as noticing and remembering. (The "everyday" nature of creativity is discussed in Chapter 10.) Koestler's view that creativity involves "the bisociation of matrices" comes closest to my own approach. However, his notion is very vague. The body of my book is devoted to giving a more precise account of the structure of "matrices" (of various kinds), and of just how they can be "bisociated" so as to result in a novel idea -- sometimes (as in Hadamard's experience) one quite different from previous ideas. (Matrices appear in my terminology as conceptual spaces, and different forms of bisociation as association, analogy, exploration, or transformation.) Among the examples introduced here are Kekule's discovery of the cyclical structure of the benzene molecule, Kepler's (and Copernicus') thoughts on elliptical orbits, and Coleridge's poetic imagery in Kubla Khan. Others mentioned in passing include Coleridge's announced intention to write a poem about an ancient mariner, Bach's harmonically systematic set of preludes and fugues, the jazz-musician's skill in improvising a melody to fit a chord sequence, and our everyday ability to recognize that two different apples fall into the same class. All these examples, and many others, are mentioned in later chapters.

Chapter 3: Thinking the Impossible

Given the seeming paradoxicality of the concept of creativity (noted in Chapter 1), we need to define it carefully before going further. This is not straightforward (over 60 definitions appear in the psychological literature (Taylor, 1988)). Part of the reason for this is that creativity is not a natural kind, such that a single scientific theory could explain every case. We need to distinguish "improbabilist" and "impossibilist" creativity, and also "psychological" and "historical" creativity.
People of a scientific cast of mind, anxious to avoid romanticism and obscurantism, generally define creativity in terms of novel combinations of familiar ideas. Accordingly, the surprise caused by a creative idea is said to be due to the improbability of the combination. Many psychometric tests designed to measure creativity work on this principle. The novel combinations must be valuable in some way, because to call an idea creative is to say that it is not only new, but interesting. However, combination-theorists often omit value from their definition of creativity (although psychometricians may make implicit value-judgements when scoring the novel combinations produced by their experimental subjects). A psychological explanation of creativity focusses primarily on how creative ideas are generated, and only secondarily on how they are recognized as being valuable. As for what counts as valuable, and why, these are not purely psychological questions. They also involve history, sociology, and philosophy, because value-judgments are largely culture-relative (Brannigan, 1981; Schaffer, in press.) Even so, positive evaluation should be explicitly mentioned in definitions of creativity. Combination-theorists may think they are not only defining creativity, but explaining it, too. However, they typically fail to explain how it was possible for the novel combination to come about. They take it for granted, for instance, that we can associate similar ideas and recognize more distant analogies, without asking just how such feats are possible. A psychological theory of creativity needs to explain how associative and analogical thinking works (matters discussed in Chapters 6 and 7, respectively). These two cavils aside, what is wrong with the combination-theory? Many ideas which we regard as creative are indeed based on unusual combinations. For instance, the appeal of Heath-Robinson machines lies in the unexpected uses of everyday objects; and poets often delight us by juxtaposing seemingly unrelated concepts. For creative ideas such as these, a combination-theory, supplemented by psychological explanations of association and analogy, might suffice. Many creative ideas, however, are surprising in a deeper way. They concern novel ideas that not only did not happen before, but which -- we intuitively feel -- could not have happened before. Before considering just what this "could not" means, we must distinguish two further senses of creativity. One is psychological, or personal: I call it P-creativity. The other is historical: H-creativity. The distinction between P-creativity and H-creativity is independent of the improbabilist/impossibilist distinction made above: all four combinations occur. However, I use the P/H distinction primarily to compare cases of impossibilist creativity. Applied to impossibilist examples, a valuable idea is P-creative if the person in whose mind it arises could not (in the relevant sense of "could not") have had it before. It does not matter how many times other people have already had the same idea. By contrast, a valuable idea is H-creative if it is P-creative and no-one else, in all human history, has ever had it before. H-creativity is something about which we are often mistaken. Historians of science and art are constantly discovering cases where other people have had an idea popularly attributed to some national or international hero. 
Even assuming that the idea was valued at the time by the individual concerned, and by some relevant social group, our knowledge of it is largely accidental. Whether an idea survives, and whether historians at a given point in time happen to have evidence of it, depend on a wide variety of unrelated factors. These include flood, fashion, rivalries, illness, trade-patterns, and wars. It follows that there can be no systematic explanation of H-creativity, no theory that explains all and only H-creative ideas. For sure, there can be no psychological explanation of this historical category. But all H-creative ideas, by definition, are P-creative too. So a psychological explanation of P-creativity would include H-creative ideas as well. What does it mean to say that an idea "could not" have arisen before? Unless we know that, we cannot make sense of P-creativity (or H-creativity either), for we cannot distinguish radical novelties from mere "first-time" newness. An example of a novelty that clearly could have happened before is a newly-generated sentence, such as "The deckchairs are on the top of the mountain, three miles from the artificial flowers". I have never thought of that sentence before, and probably no-one else has, either. Chomsky remarked on this capacity of language-speakers to generate first-time novelties endlessly, and called language "creative" accordingly. But the word "creative" was ill-chosen. Novel though the sentence about deckchairs is, there is a clear sense in which it could have occurred before. For it can be generated by any competent speaker of English, following the same rules that can generate other English sentences. To come up with a new sentence, in general, is not to do something P-creative. The "coulds" in the previous paragraph are computational "coulds". In other words, they concern the set of structures (in this case, English sentences) described and/or produced by one and the same set of generative rules (in this case, English grammar). There are many sorts of generative system: English grammar is like a mathematical equation, a rhyming-schema for sonnets, the rules of chess or tonal harmony, or a computer program. Each of these can (timelessly) describe a certain set of possible structures. And each might be used, at one time or another, in actually producing those structures. Sometimes, we want to know whether a particular structure could, in principle, be described by a specific schema, or set of abstract rules. -- Is "49" a square number? Is 3,591,471 a prime? Is this a sonnet, and is that a sonata? Is that painting in the Impressionist style? Could that geometrical theorem be proved by Euclid's methods? Is that word-string a sentence? Is a benzene-ring a molecular structure describable by early nineteenth-century chemistry (before Kekule had his famous vision in 1865)? -- To ask whether an idea is creative or not (as opposed to how it came about) is to ask this sort of question. But whenever a structure is produced in practice, we can also ask what generative processes actually went on in its production. -- Did a particular geometer prove a particular theorem in this way, or in that? Was the sonata composed by following a textbook on sonata-form? Did Kekule rely on the then-familiar principles of chemistry to generate his seminal idea of the benzene-ring, and if not how did he come up with it? -- To ask how an idea (creative or otherwise) actually arose, is to ask this type of question. 
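Boden's computational "could" can be made vivid with a toy generative system. The fragment below is a minimal sketch, not from the precis itself; its grammar and vocabulary are invented. Every sentence it prints is new in the first-time sense, yet none is creative in Boden's sense, because all of them lie within the space defined by one fixed rule set.

import random

# A tiny phrase-structure grammar. Anything it can generate "could
# have occurred before", however novel it happens to be in fact.

RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["deckchair"], ["mountain"], ["flower"]],
    "V":  [["overlooks"], ["hides"]],
}

def generate(symbol="S"):
    if symbol not in RULES:
        return [symbol]                      # a terminal word
    expansion = random.choice(RULES[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))   # e.g. "the deckchair hides the flower"

Impossibilist creativity, on this account, would require a change to RULES itself -- a transformation of the space, not another sample from it.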
We can now distinguish first-time novelty from impossibilist originality. A merely novel idea is one which can be described and/or produced by the same set of generative rules as are other, familiar, ideas. A genuinely original, or radically creative, idea is one which cannot. It follows that the ascription of (impossibilist) creativity always involves tacit or explicit reference to some specific generative system. It follows, too, that constraints -- far from being opposed to creativity -- make creativity possible. To throw away all constraints would be to destroy the capacity for creative thinking. Random processes alone, if they happen to produce anything interesting at all, can result only in first-time curiosities, not radical surprises. (As explained in Chapter 9, randomness can sometimes contribute to creativity -- but only in the context of background constraints.)

Chapter 4: Maps of the Mind

The definition of (impossibilist) creativity given in Chapter 3 implies that, with respect to the usual mental processing in the relevant domain (chemistry, poetry, music ...), a creative idea may be not just improbable, but impossible. How could it arise, then, if not by magic? And how can one impossible idea be more surprising, more creative, than another? If an act of creation is not mere combination, what is it? How can such creativity possibly happen?

To understand this, we need to think of creativity in terms of the mapping, exploration, and transformation of conceptual spaces. (The notion of a conceptual space is used informally in this chapter; later, we see how conceptual spaces can be described more rigorously.) A conceptual space is a style of thinking. Its dimensions are the organizing principles which unify, and give structure to, the relevant domain. In other words, it is the generative system which underlies that domain and which defines a certain range of possibilities: chess-moves, or molecular structures, or jazz-melodies. The limits, contours, pathways, and structure of a conceptual space can be mapped by mental representations of it. Such mental maps can be used (not necessarily consciously) to explore -- and to change -- the spaces concerned.

Evidence from developmental psychology supports this view. Children's skills are at first utterly inflexible. Later, imaginative flexibility results from "representational redescriptions" (RRs) of (fluent) lower-level skills (Clark & Karmiloff-Smith, in press; Karmiloff-Smith, 1993). These RRs provide many-levelled maps of the mind, which are used by the subject to do things he or she could not do before. For example, children need RRs of their lower-level drawing-skills in order to draw non-existent, or "funny", objects: a one-armed man, or a seven-legged dog. Lacking such cognitive resources, a 4-year-old simply cannot spontaneously draw a one-armed man, and finds it very difficult even to copy a drawing of a two-headed man. But 10-year-olds can explore their own man-drawing skill, by using strategies such as distorting, repeating, omitting, or mixing parts. These imaginative strategies develop in a fixed order: children can change the size or shape of an arm before they can insert an extra one, and long before they can give the drawn man wings in place of arms. The development of RRs is a mapping-exercise, whereby people develop explicit mental representations of knowledge already possessed implicitly. Few AI-models of creativity contain reflexive descriptions of their own procedures, and/or ways of varying them.
Accordingly, most AI-models are limited to exploring their conceptual spaces, rather than transforming them (see Chapters 7 & 8).

Conceptual spaces can be explored in various ways. Some exploration merely shows us something about the nature of the relevant conceptual space which we had not explicitly noticed before. When Dickens described Scrooge as "a squeezing, wrenching, grasping, scraping, clutching, covetous old sinner", he was exploring the space of English grammar. He was reminding the reader (and himself) that the rules of grammar allow us to use seven adjectives before a noun. That possibility already existed, although its existence may not have been realized by the reader. Some exploration, by contrast, shows us the limits of the space, and identifies specific points at which changes could be made in one dimension or another. To overcome a limitation in a conceptual space, one must change it in some way. One may also change it, of course, without yet having come up against its limits. A small change (a "tweak") in a relatively superficial dimension of a conceptual space is like opening a door to an unvisited room in an existing house. A large change (a "transformation"), especially in a relatively fundamental dimension, is more like the instantaneous construction of a new house, of a kind fundamentally different from (albeit related to) the first.

A complex example of structural exploration and change can be found in the development of post-Renaissance Western music, based on the generative system known as tonal harmony. From its origins to the end of the nineteenth century, the harmonic dimensions of this space were continually tweaked to open up the possibilities (the rooms) implicit in it from the start. Finally, a major transformation generated the deeply unfamiliar (yet closely related) space of atonality. Each piece of tonal music has a "home-key", from which it starts, from which (at first) it did not stray, and in which it must finish. Reminders of the home-key were constantly provided, as fragments of scales, chords, or arpeggios. As time passed, the range of possible home-keys became increasingly well-defined (Bach's "Forty-Eight" was designed to explore, and clarify, the tonal range of the well-tempered keys). Soon, travelling along the path of the home-key alone became insufficiently challenging. Modulations between keys were then allowed, within the body of the composition. At first, only a small number of modulations (perhaps only one, followed by its "cancellation") were tolerated, between strictly limited pairs of harmonically-related keys. Over the years, the modulations became more daring, and more frequent -- until in the late nineteenth century there might be many modulations within a single bar, not one of which would have appeared in early tonal music. The range of harmonic relations implicit in the system of tonality gradually became apparent. Harmonies that would have been unacceptable to the early musicians, who focussed on the most central or obvious dimensions of the conceptual space, became commonplace. Moreover, the notion of the home-key was undermined. With so many, and so daring, modulations within the piece, a "home-key" could be identified not from the body of the piece, but only from its beginning and end. Inevitably, someone (it happened to be Schoenberg) eventually suggested that the convention of the home-key be dropped altogether, since it no longer constrained the composition as a whole.
(Significantly, Schoenberg suggested new musical constraints: using every note in the chromatic scale, for instance.)

However, exploring a conceptual space is one thing: transforming it is another. What is it to transform such a space? One example has just been mentioned: Schoenberg's dropping the home-key constraint to create the space of atonal music. Dropping a constraint is a general heuristic, or method, for transforming conceptual spaces. The deeper the generative role of the constraint in the system concerned, the greater the transformation of the space. Non-Euclidean geometry, for instance, resulted from dropping Euclid's fifth axiom.

Another very general way of transforming conceptual spaces is to "consider the negative": that is, to negate a constraint. One well-known instance concerns Kekule's discovery of the benzene-ring. He described it like this: "I turned my chair to the fire and dozed. Again the atoms were gambolling before my eyes.... [My mental eye] could distinguish larger structures, of manifold conformation; long rows, sometimes more closely fitted together; all twining and twisting in snakelike motion. But look! What was that? One of the snakes had seized hold of its own tail, and the form whirled mockingly before my eyes. As if by a flash of lightning I awoke." This vision was the origin of his hunch that the benzene-molecule might be a ring, a hunch that turned out to be correct.

Prior to this experience, Kekule had assumed that all organic molecules are based on strings of carbon atoms. But for benzene, the valencies of the constituent atoms did not fit. We can understand how it was possible for him to pass from strings to rings, as plausible chemical structures, if we assume three things (for each of which there is independent psychological evidence). First, that snakes and molecules were already associated in his thinking. Second, that the topological distinction between open and closed curves was present in his mind. And third, that the "consider the negative" heuristic was present also. Taken together, these three factors could transform "string" into "ring". A string-molecule is an open curve: one having at least one end-point (with a neighbour on only one side). If one considers the negative of an open curve, one gets a closed curve. Moreover, a snake biting its tail is a closed curve which one had expected to be an open one. For that reason, it is surprising, even arresting ("But look! What was that?"). Kekule might have had a similar reaction if he had been out on a country walk and happened to see a snake with its tail in its mouth. But there is no reason to think that he would have been stopped in his tracks by seeing a Victorian child's hoop. A hoop is a hoop, is a hoop: no topological surprises there. (No topological surprises in a snaky sine-wave, either: so two intertwined snakes would not have interested Kekule, though they might have stopped Francis Crick dead in his tracks, a century later.)

Finally, the change from open curves to closed ones is a topological change, which by definition will alter neighbour-relations. And Kekule was an expert chemist, who knew very well that the behaviour of a molecule depends partly on how the constituent atoms are juxtaposed. A change in atomic neighbour-relations is very likely to have some chemical significance. So it is understandable that he had a hunch that this tail-biting snake-molecule might contain the answer to his problem.
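A minimal sketch of the string-to-ring transformation (my own rendering, not Boden's formalism): a carbon skeleton as an adjacency list, and a "consider the negative" heuristic that negates the open-curve constraint by joining the chain's two end-points.

    def chain(n):
        """Open curve: n atoms in a row; the two end atoms have one neighbour each."""
        return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

    def consider_the_negative(molecule):
        """Negate 'open': connect the two end-points, yielding a closed curve."""
        ends = [atom for atom, nbrs in molecule.items() if len(nbrs) == 1]
        if len(ends) == 2:                      # only an open curve can be closed
            a, b = ends
            molecule[a].append(b)
            molecule[b].append(a)
        return molecule

    ring = consider_the_negative(chain(6))
    # Every atom now has exactly two neighbours: the change in
    # neighbour-relations whose chemical significance Kekule's expertise
    # allowed him to recognize.
    print(all(len(nbrs) == 2 for nbrs in ring.values()))   # True

Nothing in the sketch is chemistry, of course; the point is only that the heuristic operates on the representation's topology, not on its content.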
Plausible though this talk of conceptual spaces may be, it is -- thus far -- largely metaphorical. I have claimed that in calling an idea creative one should specify the particular set of generative principles with respect to which it is impossible. But I have not said how the (largely tacit) knowledge of literary critics, musicologists, and historians of art and science might be explicitly expressed within a psychological theory of creativity. Nor have I said how we can be sure that the mental processes specified by the psychologist really are powerful enough to generate such-and-such ideas from such-and-such structures. This is where computational psychology can help us.

I noted above, for example, that representational redescription develops explicit mental representations of knowledge already possessed implicitly. In computational terms, one could -- and Karmiloff-Smith does -- put this by saying that knowledge embedded in procedures becomes available, after redescription, as part of the system's data-structures. Terms like procedures and data-structures are well understood, and help us to think clearly about the mapping and negotiation of conceptual spaces. In general, whatever computational psychology enables us to say, it enables us to say relatively clearly. Moreover, computational questions can be supplemented by computational models. A functioning computer program, in effect, enables the system to use its maps not just to contemplate the relevant conceptual territory, but to explore it actively. So as well as saying what a conceptual space is like (by mapping it), we can get some clear ideas about how it is possible to move around within it. In addition, those (currently, few) AI-models of creativity which contain reflexive descriptions of their own procedures, and ways of varying them, can transform their own conceptual spaces, as well as exploring them. The following chapters, therefore, employ a computational approach in discussing the account of creativity introduced in Chapters 1-4.

Chapter 5: Concepts of Computation

Computational concepts drawn from "classical" (as well as connectionist) AI can help us to think about the nature, form, and negotiation of conceptual spaces. Examples of such concepts, most of which were inspired by pre-existing psychological notions in the first place, include the following: generative system, heuristic (both introduced in previous chapters), effective procedure, search-space, search-tree, knowledge representation, semantic net, scripts, frames, what-ifs, and analogical representation. Each of these concepts is briefly explained in Chapter 5, for people who (unlike BBS-readers) may know nothing about AI or computational psychology. And they are related to a wide range of everyday and historical examples -- some of which will be mentioned again in later chapters. My main aim, here, is to encourage the reader to use these concepts in considering specific cases of human thought. A secondary aim is to blur the received distinction between "the two cultures". The differences between creativity in art and science lie less in how new ideas are generated than in how they are evaluated, once they have arisen. The uses of computational concepts in this chapter are informal, even largely metaphorical. But in bringing a computational vocabulary to bear on a variety of examples, the scene is set for more detailed consideration (in Chapters 6-8) of some computer models of creativity.
In Chapter 5, I refer very briefly to a few AI-programs (such as chess-machines and Schankian question-answering programs). Only two are discussed at any length: Longuet-Higgins' (1987) work on the perception of tonal harmony, and Gelernter's (1963) geometry (theorem-proving) machine.

Longuet-Higgins' work is not intended as a model of musical creativity. Rather, it provides (in my terminology) a map of a certain sort of musical space: the system of tonal harmony introduced in Chapter 4. In addition, it suggests some ways of negotiating that space, for it identifies musical heuristics that enable the listener to appreciate the structure of the composition. Just as speech perception is not the same as speech production, so appreciating music is different from composing it. Nevertheless, some of the musical constraints that face composers working in this particular genre have been identified in this work.

I also mention Longuet-Higgins' recent work on musical expressiveness, but do not describe it here. In Boden (in press), I say a little more about it. Without expression, music sounds "dead", even absurd. In playing the notes in a piano-score, for instance, pianists add such features as legato, staccato, piano, forte, sforzando, crescendo, diminuendo, rallentando, accelerando, ritenuto, and rubato. But how? Can we express this musical sensibility precisely? That is, can we specify the relevant conceptual space? Longuet-Higgins (in preparation), using a computational method, has tried to specify the musical skills involved in playing expressively. Working with two of Chopin's piano-compositions, he has discovered some counterintuitive facts. For example, a crescendo is not uniform, but exponential (a uniform crescendo does not sound like a crescendo at all, but like someone turning up the volume-knob on a wireless); similarly, a rallentando must be exponentially graded (in relation to the number of bars in the relevant section) if it is to sound "right". Where sforzandi are concerned, the mind is highly sensitive: as little as a centisecond makes a difference between acceptable and clumsy performance.

This work is not a study of creativity. It does not model the exploration of a conceptual space, never mind its transformation. But it is relevant because creativity can be ascribed to an idea (including a musical performance) only by reference to a particular conceptual space. The more clearly we can map this space, the more confidently we can identify and ask questions about the creativity involved in negotiating it. A pianist whose playing-style sounds "original", or even "idiosyncratic", is exploring and transforming the space of expressive skills which Longuet-Higgins has studied.

Gelernter's program, likewise, was not focussed on creativity as such. (It was not even intended as a model of human psychology.) Rather, it was an early exercise in automatic problem-solving, in the domain of Euclidean geometry. However, it is well known that the program was capable of generating a highly elegant proof (that the base-angles of an isosceles triangle are equal), whose H-creator was the fourth-century mathematician Pappus. Or rather, it is widely believed that Gelernter's program could do this. The ambiguity, not to say the mistake, arises because the program's proof is indeed the same as Pappus' proof, when both are written down on paper in the style of a geometry text-book.
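To make the expressiveness finding concrete: a minimal sketch (mine, not Longuet-Higgins'; the loudness scale and note-count are invented) of the difference between a linearly and an exponentially graded crescendo.

    def linear_crescendo(start, end, n):
        step = (end - start) / (n - 1)
        return [start + i * step for i in range(n)]

    def exponential_crescendo(start, end, n):
        ratio = (end / start) ** (1 / (n - 1))  # equal *ratios* between successive notes
        return [start * ratio ** i for i in range(n)]

    # Loudness for 8 successive notes, on an arbitrary 20-100 scale:
    print([round(v) for v in linear_crescendo(20, 100, 8)])       # equal steps
    print([round(v) for v in exponential_crescendo(20, 100, 8)])  # equal ratios

On the account reported above, the second grading is the one that sounds like a crescendo; the first sounds like someone turning up the volume-knob.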
To return to the geometry machine: the (creative) mental processes by which Pappus produced this proof, and by which the modern geometer is able to appreciate it, were very different from those in Gelernter's program -- which were not creative at all.

Consider (or draw) an isosceles triangle ABC, with A at the apex. You are required to prove that the base-angles are equal. The usual method of proving this, which the program was expected to employ, is to construct a line bisecting angle BAC, running from A to D (a point on the baseline, BC). Then, the proof goes as follows:

Consider triangles ABD and ACD.
AB = AC (given)
AD = DA (common)
Angle BAD = angle DAC (by construction)
Therefore the two triangles are congruent (two sides and included angle equal).
Therefore angle ABD = angle ACD. Q.E.D.

By contrast, the Gelernter proof involved no construction, and went as follows:

Consider triangles ABC and ACB.
Angle BAC = angle CAB (common)
AB = AC (given)
AC = AB (given)
Therefore the two triangles are congruent (two sides and included angle equal).
Therefore angle ABC = angle ACB. Q.E.D.

And, written down on paper, this is the outward form of Pappus' proof, too. The point, here, is that Pappus' own notes (as well as the reader's geometrical intuitions) show that in order to produce or understand this proof, a human being considers one and the same triangle rotated (as Pappus put it, lifted up and replaced in the trace left behind by itself).

There were thus two creative aspects of this proof. First, when "congruence" is in question, the geometer normally thinks of two entirely separate triangles (or, sometimes, two distinct triangles having one side in common). Second, Euclidean geometry deals only with points, lines, and planes -- so one would expect any proof to be restricted to two spatial dimensions. But Pappus (and you, when you thought about this proof) imagined lifting and rotating the triangle in the third dimension. He was, if you like, cheating. However, to transform a rule (an aspect of some conceptual space) is to change it: in effect, to cheat. In that sense, transformational creativity always involves cheating.

Gelernter's geometry-program did not cheat -- not merely because it was too rigid to cheat in any way, but also because it could not have cheated in this way. It knew nothing of the third dimension. Indeed, it had no visual, analogical, representation of triangles at all. It represented a triangle not as a two-dimensional spatial form, but as a list of three letters (e.g. ABC) naming points in an abstract coordinate space. Similarly, it represented an angle as a list of three letters naming the vertex and one of the points on each of the two rays. Being unable to inspect triangles visually, it even had to prove that every different letter-name for what we can see to be the same angle was equivalent. So it had to prove (for instance) that angle XYZ is the same as angle ZYX, and angle BAC the same as angle CAB. Consequently, this program was incapable not only of coming up with Pappus' proof in the way he did, but even of representing such a proof -- or of appreciating its elegance and originality. Its mental maps simply did not allow for the lifting and replacement of triangles in space (and it had no heuristics enabling it to transform those maps). How did it come up with its pseudo-Pappus proof, then? Treating the "ABC's" as (spatially uninterpreted) abstract vectors, it did a massive brute-force search to find the proof. Since this brute-force search succeeded, it did not bother to construct any extra lines.
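The flavour of that letter-list representation can be given in a short sketch (all details invented; this is not Gelernter's code). Points are bare letters, a side is an unordered letter-pair, and the side-angle-side test is applied to "two triangles" that are in fact one triangle under two vertex-orderings.

    def side(p, q):
        # an (unordered) pair of point-names: all the program "knows" of a side
        return frozenset((p, q))

    def equal_sides(s1, s2, given):
        return s1 == s2 or (s1, s2) in given or (s2, s1) in given

    GIVEN = {(side("A", "B"), side("A", "C"))}   # AB = AC: the isosceles premise

    def sas_congruent(t1, t2, given):
        """Two sides and the included angle at the first-listed vertex."""
        (a, b, c), (d, e, f) = t1, t2
        return (a == d and                       # included angle BAC vs CAB
                equal_sides(side(a, b), side(d, e), given) and
                equal_sides(side(a, c), side(d, f), given))

    # The "two triangles" are one triangle listed in two vertex-orders:
    print(sas_congruent(("A", "B", "C"), ("A", "C", "B"), GIVEN))   # True

The vertex-check on the first letter suffices only for this one-triangle trick, which is exactly the point: nothing here is a two-dimensional shape, let alone one that could be lifted into a third dimension.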
The Gelernter example shows how careful one must be in ascribing creativity to a person, and in answering the second Lovelace question about a program. We have to consider not only the resulting idea, but also the mental processes which gave rise to it. Brute-force search is even less creative than associative (improbabilist) thinking, and problem-dimensions which can be mapped by some systems may not be representable by others. (Analogously, a three-year-old who draws an odd-looking man is not showing flexible imagination in drawing a "funny" man: rather, she is showing incompetence in drawing an ordinary man.)

It should not be assumed from the example of Pappus (or Kekule) that visual imagery is always useful in mapping and transforming one's ideas. An example is given of a problem for which a visual representation is almost always constructed, but which hinders solution. Where mental maps are concerned, visual maps are not always best.

Chapter 6: Creative Connections

This chapter deals with associative creativity: the spontaneous generation of new ideas, and/or novel combinations of familiar ideas, by means of unconscious processes of association. Examples include not only "mere associations" but also analogies, which may then be consciously developed for purposes of rhetorical exposition or problem-solving. In Chapter 6, I discuss the initial association of ideas. (The evaluation and use of analogy are addressed in Chapter 7.)

One of the richest veins of associative creativity is poetic imagery. I consider some specific examples taken from Coleridge's poem The Ancient Mariner. For this poem (and also for his Kubla Khan), we have unusually detailed information about the literary sources of the imagery concerned. The literary scholar John Livingston Lowes (1951) studied Coleridge's Notebooks, written while preparing for and writing the poem, and followed up every source mentioned there -- and every footnote given in each source. Despite the enormous quantity and range of Coleridge's reading, Lowes makes a subtle, and intuitively compelling, case in identifying specific sources for the many images in the poem. However, an intuitively compelling case is one thing, and an explicit justification or detailed explanation is another. Lowes took for granted that association can happen (he used Coleridge's term: the hooks and eyes of memory), without being able to say just how these hooks and eyes can come together.

I argue that connectionism, and specifically PDP (parallel distributed processing), can help us to understand how such unexpected associations are possible. Among the relevant questions to which PDP-models offer preliminary answers are the following: How can ideas from very different sources (such as Captain Cook's diaries and Priestley's writings on optics) be spontaneously thought of together? How can two ideas be merged to produce a new structure, which shows the influence of both ancestor-ideas without being a mere "cut-and-paste" combination? How can the mind be "primed" (for instance, by the decision to write a poem about a seaman), so that one will more easily notice serendipitous ideas? Why may someone notice -- and remember -- something fairly uninteresting (such as a word in a literary text), if it occurs in an interesting context? How can a brief phrase conjure up from memory an entire line or stanza, from this or some other poem? And how can we accept two ideas as similar (the words "love" and "prove" as rhyming, for instance) in respect of a feature not identical in both?
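As a concrete taste of the kind of PDP process at issue, here is a minimal Hopfield-style associative memory (my illustration, not a model from the book): a stored pattern is recovered from a degraded cue -- one mechanical rendering of the hooks and eyes.

    import numpy as np

    # Store two 8-unit patterns with a Hebbian outer-product rule.
    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)                  # no self-connections

    cue = patterns[0].copy()
    cue[4] = -cue[4]                        # degrade the cue: flip one unit

    state = cue
    for _ in range(5):                      # relax to a best-fit equilibrium
        state = np.where(W @ state >= 0, 1, -1)

    print(bool((state == patterns[0]).all()))   # True: the memory completes the cue

The properties named in the next paragraph -- pattern-completion, graceful degradation, "best-fit" equilibration -- are all present here in miniature.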
The features of connectionist models which suggest answers to these questions are their powers of pattern-completion, graceful degradation, sensitization, multiple constraint-satisfaction, and "best-fit" equilibration. The computational processes underlying these features are described informally in Chapter 6 (I assume that it is not necessary to do so for BBS-readers). The message of this chapter is that the unconscious, "insightful", associative aspects of creativity can be explained -- in outline, at least -- in computational terms. Connectionism offers some specific suggestions about what sorts of processes may underlie the hooks and eyes of memory.

This is not to say, however, that all aspects of poetry -- or even all poetic imagery -- can be explained in this way. Quite apart from the hierarchical structure of natural language itself, some features of a poem may require thinking of a type more suited (at present) to symbolic models. For example, Coleridge's use of "The Sun came up upon the left" and "The Sun now rose upon the right" as the opening-lines of two closely-situated stanzas enabled him to indicate to the reader that the ship was circumnavigating the globe, without having to detail all the uneventful miles of the voyage. (Compare Kubrick's use of the spinning thigh-bone turning into a space-ship, as a highly compressed history of technology, in his film 2001: A Space Odyssey.) But these expressions, too, were drawn from his reading -- in this case, of the diaries of the very early mariners, who recorded their amazement at first experiencing the sunrise in the "wrong" part of the sky. Associative memory was thus involved in this poetic conceit, but it is not the entire explanation.

Chapter 7: Unromantic Artists

This chapter and the next describe and criticize some existing computer models of creativity. The separation into "artists" (Chapter 7) and "scientists" (Chapter 8) is to some extent an arbitrary rhetorical device. For example, analogy (discussed in Chapter 7) and induction and genetic algorithms (both outlined in Chapter 8) are all relevant to creativity in arts and sciences alike. In these two chapters, the second and third Lovelace questions -- about apparent computer-creativity -- are addressed at length. However, the first Lovelace question, relating to human creativity, is still the over-riding concern.

The computer models of creativity discussed in Chapter 7 include: a series of programs which produce line-drawings (McCorduck, 1991); a jazz-improviser (Johnson-Laird, 1991); a haiku-writer (Masterman & McKinnon Wood, 1968); two programs for writing stories (Klein et al., 1973; Meehan, 1981); and two analogy-programs (Chalmers, French, & Hofstadter, 1991; Holyoak & Thagard, 1989a, 1989b; Mitchell, 1993). In each case, the programmer has to try to define the dimensions of the relevant conceptual space, and to specify ways of exploring the space, so as to generate novel structures within it. Some evaluation, too, must be allowed for. In the systems described in this chapter, the evaluation is built into the generative procedures, rather than being done post hoc. (This is not entirely unrealistic: although humans can evaluate -- and modify -- their own ideas once they have produced them, they can also develop domain-expertise such that most of their ideas are acceptable without modification.) Sometimes, the results are comparable with non-trivial human achievements.
Thus some of the computer's line-drawings are spontaneously admired, by people who are amazed when told their provenance. The haiku-program can produce acceptable poems, sometimes indistinguishable from human-generated examples (largely because the minimalist haiku-style demands considerable projective interpretation by the reader). And the jazz-program can play -- composing its own chord-sequences, as well as improvising on them -- at about the level of a moderately competent human beginner. (Another jazz-improviser, not mentioned in the book, plays at the level of a mediocre professional musician; unlike the former example, it starts out with significant musical structure provided to it "for free" by the human user (Hodgson, 1990).)

At other times, the results are clumsy and unconvincing, involving infelicities and absurdities of various kinds. This often happens when stories are computer-generated. Here, many rich conceptual spaces have to be negotiated simultaneously. Quite apart from the challenge of natural language generation, the model must produce sensible plots, taking account both of the motivation and action of the characters and of their common-sense knowledge. Where very simple plot-spaces, and very limited world-knowledge, are concerned, a program may be able (sometimes) to generate plausible stories. One, for example, produces Aesop-like tales, including a version of "The Fox and the Crow" (Meehan, 1981). A recent modification of this program (Turner, 1992), not covered in the book, is more subtle. It uses case-based reasoning and case-transforming heuristics to generate novel stories based on familiar ones; and because it distinguishes the author's goals from those of the characters, it can solve meta-problems about the story as well as problems posed within it. But even this model's story-telling powers are strictly limited, compared with ours.

Models dealing with the interpretation of stories, and of concepts (such as betrayal) used in stories, are also relevant here. Computational definitions of interpersonal themes and scripts (Abelson, 1973), programs that can answer questions about (simple) stories, and models which can -- up to a point -- interpret motivational and emotional structures within a story (Dyer, 1983) are all discussed. So, too, is a program that generates English text describing games of noughts-and-crosses (Davey, 1978). The complex syntax of the sentences is nicely appropriate to the structure of the particular game being described. Human writers, too, often use subtleties of syntax to convey certain aspects of their story-lines.

The analogy programs described in Chapter 7 are ACME and ARCS (Holyoak & Thagard, 1989a, 1989b), and in the Preface to the paperback edition I add a discussion of Copycat (Chalmers et al., 1991; Mitchell, 1993), which I had originally intended to highlight in the main text. ACME and ARCS are an analogy-interpreter and an analogy-finder, respectively. Calling on a semantic net of over 30,000 items, to which items can be added by the user, these programs use structural, semantic, and pragmatic criteria to evaluate analogies between concepts (whose structure is pre-given by the programmers). Other analogy programs (e.g. Falkenhainer, Forbus, & Gentner, 1989) use structural and semantic similarity as criteria. But ARCS/ACME takes account also of the pragmatic context, the purpose for which the analogy is being sought. So a conceptual feature may be highlighted in one context, and downplayed in another.
The context may be one of rhetoric or poetic imagery, or one of scientific problem-solving (ARCS/ACME forms part of an inductive program that compares the "explanatory coherence" of rival scientific theories (Thagard, 1992)). Examples of both types are discussed.

The point of interest about Copycat is that it is a model of analogy in which the structure of the analogues is neither pre-assigned nor inflexible. The description of something can change as the system searches for an analogy to it, and its "perception" of an analogue may be permanently influenced by having seen it in a particular analogical relation to something else. Many analogies in the arts and sciences can be cited, to show that the same is true of the human mind.

Among the points of general interest raised in this chapter is the inability of these programs (Copycat excepted) to reflect on what they have done, or to change their way of doing it. For instance, the line-drawing program that draws human acrobats in broadly realistic poses is unable to draw one-armed acrobats. It can generate acrobats with only one arm visible, if one arm is occluded by another acrobat in front. But that there might be a one-armed (or a six-armed) acrobat is strictly inconceivable. The reason is that the program's knowledge of human anatomy does not represent the fact that humans have two arms in a form which is separable from its drawing-procedures or modifiable by "imaginative" heuristics. It does not, for instance, contain anything of the form "Number of arms: 2", which might then be transformed by a "vary the variable" heuristic into "Number of arms: 1". Much as the four-year-old child cannot draw a "funny" one-armed man because she has not yet developed the necessary RR of her own man-drawing skill, so this program cannot vary what it does because -- in a clear sense -- it does not know what it is that it is doing.

This failing is not shared by all current programs: some featured in the next chapter can evaluate their own ideas, and transform their own procedures, to some extent. Moreover, this failure is "bad news" only to those seeking a positive answer to the second and third Lovelace questions. It is useful to anyone asking the first Lovelace question, for it underlines the importance of the factors introduced in Chapter 4: reflexive mapping of thought, evaluation of ideas, and transformation of conceptual spaces.

Chapter 8: Computer-Scientists

Like analogy, inductive thinking occurs across both arts and science. Chapter 8 begins with a discussion of the ID3 algorithm. This is used in many learning programs, including a world-beater -- better than the human expert who "taught" it -- at diagnosing soybean diseases (Michalski & Chilausky, 1980). ID3 learns from examples. It looks for the logical regularities which underlie the classification of the input examples, and uses them to classify new, unexamined, examples. Sometimes, it finds regularities of which the human experts were unaware, such as unknown strategies for chess endgames (Michie & Johnston, 1984). In short, ID3 can not only define familiar concepts in H-creative ways, but can also define H-creative concepts. However, all the domain-properties it considers have to be specifically mentioned in the input. (It does not have to be told just which input properties are relevant: in the chess end-game example, the chess-masters "instructing" the program did not know this.) That is, ID3-programs can restructure their conceptual space in P-creative -- and even H-creative -- ways.
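The core of ID3 fits in a few lines (the data and attribute names below are invented): choose the attribute whose split most reduces entropy, and read the resulting regularity back as a classification rule.

    from math import log2
    from collections import Counter

    # Toy diagnosis data: attribute-values -> diseased?
    examples = [({"spots": "yes", "wilting": "no"},  True),
                ({"spots": "yes", "wilting": "yes"}, True),
                ({"spots": "no",  "wilting": "yes"}, False),
                ({"spots": "no",  "wilting": "no"},  False)]

    def entropy(rows):
        counts = Counter(label for _, label in rows)
        return -sum(c / len(rows) * log2(c / len(rows)) for c in counts.values())

    def gain(rows, attr):
        """Entropy reduction achieved by splitting on attr."""
        split = {}
        for r, label in rows:
            split.setdefault(r[attr], []).append((r, label))
        rest = sum(len(sub) / len(rows) * entropy(sub) for sub in split.values())
        return entropy(rows) - rest

    best = max(["spots", "wilting"], key=lambda a: gain(examples, a))
    print(best)   # 'spots': the regularity underlying the classification

The real algorithm recurses on the sub-groups to build a decision tree, but the choice of the most informative attribute, shown here, is the heart of it.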
But ID3-programs cannot change the dimensions of the space, so as to alter its fundamental nature.

Another program capable of H-discovery is meta-DENDRAL, an early expert system devoted to the spectroscopic analysis of a certain group of organic molecules. The original program, DENDRAL, uses exhaustive search to describe all possible molecules made up of a given set of atoms, and heuristics to suggest which of these might be chemically interesting. DENDRAL uses only the chemical rules supplied to it, but meta-DENDRAL can find new rules about how these compounds decompose. It does this by identifying unfamiliar patterns in the spectra of familiar compounds, and suggesting plausible explanations for them. For instance, if it discovers a smaller structure located near the point at which a molecule breaks, it may suggest that other molecules containing that sub-structure may break at these points too. This program is H-creative, up to a point. It not only explores its conceptual space (using evaluative heuristics and exhaustive search) but enlarges it too, by adding new rules. It generates hunches, which have led to the synthesis of novel, chemically interesting, compounds. And it has discovered some previously unsuspected rules for analysing several families of organic molecules. However, it relies on sophisticated theories built into it by expert chemists (which is why its novel hypotheses, though sometimes false, are always plausible). It casts no light on how those theories might have arisen in the first place.

Some computational models of induction were developed with an eye to the history of science (and to psychology), rather than for practical scientific puzzle-solving. Their aim was not to come up with H-creative ideas, but to P-create in the same way as human H-creators. Examples include BACON, GLAUBER, STAHL, and DALTON (Langley, Simon, Bradshaw, & Zytkow, 1987), whose P-creative activities are modelled on H-creative episodes recorded in the notebooks of human scientists.

BACON induces quantitative laws from empirical data. Its data are measurements of various properties at different times. It looks for simple mathematical functions defining invariant relations between numerical data-sets. For instance, it seeks direct or inverse proportionalities between measurements, or between their products or ratios. It can define higher-level theoretical terms, construct new units of measurement, and use mathematical symmetry to help find invariant patterns in the data. It can cope with noisy data, finding a best-fit function (within predefined limits). BACON has P-created many physical laws, including Archimedes' principle, Kepler's third law, Boyle's law, Ohm's law, and Black's law. GLAUBER discovers qualitative laws, summarizing the data by classifying things according to (non-measurable) observable properties. Thus it discovers relations between acids, alkalis, and salts (all identified in qualitative terms). STAHL analyses chemical compounds into their elements. Relying on the data-categories presented to it, it has modelled aspects of the historical progression from phlogiston-theory to oxygen-theory. DALTON reasons about atoms and molecular structure. Using early atomic theory, it generates plausible molecular structures for a given set of components (it could be extended to cover other componential theories, such as particle physics or Mendelian genetics).

These four programs have rediscovered many scientific laws. However, their P-creativity is shallow.
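BACON's core strategy -- searching among products and ratios of powers of the measured variables for an invariant -- can be sketched as follows (my minimal version, not the original program; the planetary data are rounded illustrative values, distance in astronomical units and period in years).

    # Try d**i * p**j for small powers, and keep any combination that is
    # (nearly) constant across all the observations.
    data = [(0.39, 0.24), (0.72, 0.62), (1.00, 1.00), (1.52, 1.88)]

    def invariant(values, tol=0.05):
        mean = sum(values) / len(values)
        return all(abs(v - mean) / mean < tol for v in values)

    for i in range(1, 4):
        for j in range(-3, 0):
            values = [d ** i * p ** j for d, p in data]
            if invariant(values):
                print(f"d^{i} * p^{j} is (nearly) constant")

    # Prints only d^3 * p^-2: Kepler's third law, rediscovered BACON-style.

The sketch also shows why such discovery is "close to the evidence": the program can only find invariants expressible in the combinations it was primed to try.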
These programs are highly data-driven, their discoveries lying close to the evidence. They cannot identify relevance for themselves, but are "primed" with appropriate expectations. (BACON expects to find linear relationships, and rediscovered Archimedes' principle only after being told that things can be immersed in known volumes of liquid and the resulting volume measured.) They cannot model spontaneous associations or analogies, only deliberate reasoning. Some can suggest experiments, to test hypotheses they have P-created, but they have no sense of the practices involved. They can learn, constructing P-novel concepts used to make further P-discoveries. But their discoveries are exploratory rather than transformational: they cannot fundamentally alter their own conceptual spaces.

Some AI-models of creativity can do this, to some extent. For instance, the Automated Mathematician (AM) explores and transforms mathematical ideas (Lenat, 1983). It does not prove theorems, or do sums, but generates "interesting" mathematical ideas (including expressions that might be provable theorems). It starts with 100 primitive concepts of set-theory (such as set, list, equality, and ordered pair), and 300 heuristics that can examine, combine, transform, and evaluate its concepts. One generates the inverse of a function (compare "consider the negative"). Others can compare, generalize, specialize, or find examples of concepts. Newly-constructed concepts are fed back into the pool. In effect, AM has hunches: its evaluation heuristics suggest which new structures it should concentrate on. For example, AM finds it interesting whenever the union of two sets has a simply expressible property which is not possessed by either of them (a set-theoretic version of the notion that emergent properties are interesting). Its value-judgments are often wrong. Nevertheless, it has constructed some powerful mathematical notions, including prime numbers, Goldbach's conjecture, and an H-novel theorem concerning maximally-divisible numbers (which the programmer had never heard of). In short, AM appears to be significantly P-creative, and slightly H-creative too.

However, AM has been criticised (Haase, 1986; Lenat & Seely-Brown, 1984; Ritchie & Hanna, 1984; Rowe & Partridge, 1993). Critics have argued that some heuristics were included to make certain discoveries, such as prime numbers, possible; that the use of LISP provided AM with mathematical relevance "for free", since any syntactic change in a LISP expression is likely to result in a mathematically-meaningful string; that the program's exploration was too often guided by the human user; and that AM had fixed criteria of interest, being unable to adapt its values. The precise extent of AM's creativity, then, is unclear.

AM's successor, EURISKO, has heuristics for changing heuristics, so it can transform not only its stock of concepts but also its own processing-style. For example, one heuristic asks whether a rule has ever led to any interesting result. If it has not (but has been used several times), it will be less often used in future. If it has occasionally been helpful, though usually worthless, it may be specialized in one of several different ways. (Because it is sometimes useful and sometimes not, the specializing-heuristic can be applied to itself.) Other heuristics generalize rules, or create new rules by analogy with old ones.
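A sketch of that meta-heuristic (all structure and thresholds invented; this is not Lenat's code): a rule inspects other rules' track-records, down-weighting those that have never helped and specializing those that help only rarely.

    class Rule:
        def __init__(self, name, weight=1.0):
            self.name, self.weight = name, weight
            self.uses, self.successes = 0, 0

        def specialize(self):
            # a placeholder for EURISKO's several specialization methods
            return Rule(self.name + " (specialized)", self.weight)

    def review(rule):
        """Meta-heuristic: adjust a rule according to its track-record."""
        if rule.uses < 5:
            return [rule]                      # too little evidence yet
        rate = rule.successes / rule.uses
        if rate == 0:
            rule.weight *= 0.5                 # often used, never interesting
        elif rate < 0.2:
            return [rule, rule.specialize()]   # occasionally helpful: split it
        return [rule]

    r = Rule("swap operands")
    r.uses, r.successes = 10, 1
    print([x.name for x in review(r)])
    # ['swap operands', 'swap operands (specialized)']

Since review is itself sometimes useful and sometimes not, it too could be passed through review -- the self-application noted above.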
Using domain-specific heuristics to complement these general ones, EURISKO has generated H-novel ideas in genetic engineering and VLSI-design (one has been patented, so was not "obvious to a person skilled in the art").

Other self-transforming systems described in this chapter are problem-solving programs based on genetic algorithms (GAs). GA-systems have two main features. They all use rule-changing algorithms (mutation and crossover) modelled on biological genetics. Mutation makes a random change in a single rule. Crossover mixes two rules, so that (for instance) the lefthand portion of one is combined with the righthand portion of the other; the break-points may be chosen randomly, or may reflect the system's sense of which rule-parts are the most useful. Most GA-systems also include algorithms for identifying the relatively successful rules, and rule-parts, and for increasing the probability that they will be selected for "breeding" future generations. Together, these algorithms generate a new system, better adapted to the task.

An example cited in the book is an early GA-program which developed a set of rules to regulate the transmission of gas through a pipeline (Holland, Holyoak, Nisbett, & Thagard, 1986). Its data were hourly measurements of inflow, outflow, inlet-pressure, outlet-pressure, rate of pressure-change, season, time, date, and temperature. It altered the inlet-pressure to allow for variations in demand, and inferred the existence of accidental leaks in the pipeline (adjusting the inflow accordingly). Although the pipeline-program discovered the rules for itself, the potentially relevant data-types were given in its original list of concepts. How far that compromises its creativity is a matter of judgment. No system can work from a tabula rasa. Likewise, the selectional criteria were defined by the programmer, and do not alter. Humans may be taught evaluative criteria, too. But they can sometimes learn -- and adapt -- them for themselves.

GAs, and randomized thinking generally, are potentially relevant to art as well as to science -- especially if the evaluation is done interactively, not automatically. That is, at each generation the selection of items from which to breed for the next generation is done by a human being. This methodology is well-suited to art, where the evaluative criteria are not only controversial but also imprecise -- or even unknown. Two recent examples (not mentioned in the book, but described in Boden, in press) concern graphics (Sims, 1991; Todd & Latham, 1993). Sims' aim is to provide an interactive environment for graphic artists, enabling them to generate otherwise unimaginable images. Latham's is to produce his own art-works, but he too uses the computer to generate images he could not have developed unaided.

In a run of Sims' GA-system, the first image is generated at random. Then the program makes various independent random mutations in the image-generating rule, and displays the resulting images. The human now chooses one image to be mutated, or two to be "mated", and the process is repeated. The program can transform its image-generating code (simple LISP-functions) in many ways. It can alter parameters in pre-existing functions, combine or separate functions, or nest one function inside another (so many-levelled hierarchies can arise). Many of Sims' computer-generated images are highly attractive, even beautiful. Moreover, they often cause a deep surprise. The change(s) between parent and offspring are sometimes amazing.
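The two rule-changing operators, together with fitness-proportional selection, fit in a dozen lines (a toy sketch of my own; the bit-string "rules" and the all-ones target stand in for real rule-encodings and real fitness criteria).

    import random

    def mutate(rule):
        i = random.randrange(len(rule))
        return rule[:i] + [1 - rule[i]] + rule[i + 1:]   # flip one randomly chosen bit

    def crossover(r1, r2):
        cut = random.randrange(1, len(r1))               # random break-point
        return r1[:cut] + r2[cut:]                       # left of one, right of the other

    def evolve(pop, fitness, generations=50):
        for _ in range(generations):
            weights = [fitness(r) + 1 for r in pop]      # +1 avoids an all-zero draw
            pop = [mutate(crossover(*random.choices(pop, weights=weights, k=2)))
                   for _ in pop]
        return max(pop, key=fitness)

    pop = [[random.randint(0, 1) for _ in range(12)] for _ in range(30)]
    print(evolve(pop, fitness=sum))   # converges towards the all-ones rule

In the interactive variant described above, the fitness function is replaced by a human being choosing which offspring to breed from at each generation.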
The offspring sometimes appears to be a radical transformation of its parent -- or even something entirely different. In short, we seem to have an example of impossibilist creativity. Latham's interactive GA-program is much more predictable. Its mutation operators can change only the parameters within the image-generating code, not the body of the function. Consequently, it never comes up with radical novelties. All the offspring in a given generation are obviously siblings, and obviously related to their parents. So the results of Latham's system are less exciting than Sims'. But it is arguably even more relevant to artistic creativity.

The interesting comparison is not between the aesthetic appeal of a typical Latham-image and Sims-image, but between the discipline -- or lack of it -- which guides the exploration and transformation of the relevant visual space. Sims is not aiming for particular types of result, so his images can be fundamentally transformed in random ways at every generation. But Latham (a professional artist) has a sense of what forms he hopes to achieve, and specific aesthetic criteria for evaluating intermediate steps. Random changes at the margins are exploratory, and may provide some useful ideas. But fundamental transformations -- especially, random ones -- would be counterproductive. (If they were allowed, Latham would want to pick one and then explore its possibilities in a disciplined way.)

This fits the account of (impossibilist) creativity given in Chapters 3 and 4. Creativity works within constraints, which define the conceptual spaces with respect to which it is identified. Maps or RRs (or LISP-functions) which describe the parameters and/or the major dimensions of the space can be altered in specific ways, to generate new, but related, spaces. Random changes are sometimes helpful, but only if they are integrated into the relevant style. Art, like science, involves discipline. Only after a space has been fairly thoroughly explored will the artist want to transform it in deeply surprising ways. A convincing computer-artist would therefore need not only randomizing operators, but also heuristics for constraining its transformations and selections in an aesthetically acceptable fashion. In addition, it would need to make its aesthetic selections (and perhaps guiding recommendations) for itself. And, to be true to human creativity, the evaluative rules should evolve also (Elton, 1993).

Chapter 9: Chance, Chaos, Randomness, Unpredictability

Unpredictability is often said to be the essence of creativity. And creativity is, by definition, surprising. But unpredictability is not enough. At the heart of creativity, as previous chapters have shown, lie constraints: the very opposite of unpredictability. Constraints and unpredictability, familiarity and surprise, are somehow combined in original thinking.

In this chapter, I distinguish various senses of "chance", "chaos", "randomness", and "unpredictability". I also argue that a scientific explanation need not imply either determinism or predictability, and that even deterministic systems may be unpredictable. Below, it will suffice to mention a number of different ways in which unpredictability can enter into creativity.

The first follows from the fact that creative constraints do not determine everything about the newly-generated idea. A style of thinking typically allows for many points at which two or more alternatives are possible.
Several notes may be both melodious and harmonious; many words rhyme with "moon"; and perhaps there could be a ring-molecule with three, or five, atoms in the ring? At these points, some specific choice must be made. Likewise, many exploratory and transformational heuristics may be potentially available at a certain time, in dealing with a given conceptual space. But one or other must be chosen. Even if several heuristics can be applied at once (like parallel mutations in a GA-system), not all possibilities can be simultaneously explored. The choice has to be made, somehow.

Occasionally, the choice is random, or as near to random as one can get. So it may be made by throwing dice (as in playing Mozart's aleatory music); or by consulting a table of random numbers (as in the jazz-program); or even, possibly, as a result of some sudden quantum-jump inside the brain. There may even be psychological processes akin to GA-mechanisms, producing novel ideas in human minds. More often, the choice is fully determined, by something which bears no systematic relation to the conceptual space concerned. (Some examples are given below.) Relative to that style of thinking, the choice is made randomly. Certainly, nothing within the style itself could enable us to predict its occurrence.

In either case, the choice must somehow be skilfully integrated into the relevant mental structure. Without such disciplined integration, it cannot lead to a positively valued, interesting, idea. With the help of this mental discipline, even flaws and accidents may be put to creative use. For instance, a jazz-drummer suffering from Tourette's syndrome is subject to sudden, uncontrollable, muscular tics, even when he is drumming. As a result, his drumsticks sometimes make unexpected sounds. But his musical skill is so great that he can work these supererogatory sounds into his music as he goes along. At worst, he "covers up" for them. At best, he makes them the seeds of unusual improvisations which he could not otherwise have thought of.

One might even call the drummer's tics serendipitous. Serendipity is the unexpected finding of something one was not specifically looking for. But the "something" has to be something which was wanted, or at least which can now be used. Fleming's discovery of the dirty petri-dish, infected by Penicillium spores, excited him because he already knew how useful a bactericidal agent would be. Proust's madeleine did not answer any currently pressing question, but it aroused a flood of memories which he was able to use as the trigger of a life-long project.

Events such as these could not have been foreseen. Both trigger and triggering were unpredictable. Who was to say that the dish would be left uncovered, and infected by that particular organism? And who could say that Proust would eat a madeleine on that occasion? Even if one could do this (perhaps the laboratory was always untidy, and perhaps Proust was addicted to madeleines), one could not predict the effect the trigger would have on these individual minds. This is so even if there are no absolutely random events going on in our brains. Chaos theory has taught us that fully deterministic systems can be, in practice, unpredictable. Our inescapable ignorance of the initial conditions means that we cannot forecast the weather, except in highly general (and short-term) ways.
The inner dynamics of the mind are more complex than those of the weather, and the initial conditions -- each person's individual experiences, values, and beliefs -- are even more varied. Small wonder, then, if we cannot fully foresee the clouds of creativity in people's minds.

To some extent, however, we can. Different thinkers have differing individual styles, which set a characteristic stamp on all their work in a given domain. Thus Dr. Johnson complained, "Who but Donne would have compared a good man to a telescope?". Authorial signatures are largely due to the fact that people can employ habitual ways of making "random" choices. There may be nothing to say, beforehand, how someone will choose to play the relevant game. But after several years of practice, their "random" choices may be as predictable as anything in the basic genre concerned.

More mundane examples of creativity, which are P-creative but not H-creative, can sometimes be predicted -- and even deliberately brought about. Suppose your daughter is having difficulty mastering an unfamiliar principle in her physics homework. You might fetch a gadget that embodies the principle concerned, and leave it on the kitchen-table, hoping that she will play around with it and realise the connection for herself. Even if you have to drop a few hints, the likelihood is that she will create the central idea. Again, Socratic dialogue helps people to explore their conceptual spaces in (to them) unexpected ways. But Socrates himself, like those taking his role today, knew what P-creative ideas to expect from his pupils.

We cannot predict creative ideas in detail, and we never shall be able to do so. Human experience is too richly idiosyncratic. But this does not mean that creativity is fundamentally mysterious, or beyond scientific understanding.

Chapter 10: Elite or Everyman?

Creativity is not a single capacity, nor is it a special one. It is an aspect of intelligence in general, which involves many different capacities: noticing, remembering, seeing, speaking, classifying, associating, comparing, evaluating, introspecting, and the like. Chapter 10 offers evidence for this view, drawing on the work of Perkins (1981) and also on computational work of various kinds.

For example, Kekule's description of "long rows, twining and twisting in snakelike motion", where "one of the snakes had seized hold of its own tail", assumes everyday powers of visual interpretation and analogy. These capacities are normally taken for granted in discussions of Kekule's H-creativity, but they require some psychological explanation. Relevant computational work on low-level vision suggests that Kekule's imagery was grounded in certain specific, and universal, visual capacities -- including the ability to identify lines and end-points. (His hunch, by contrast, required special expertise. As remarked in Chapter 4, only a chemist could have realized the potential significance of the change in neighbour-relations caused by the coalescence of end-points, or the "snake" which "seized hold of its tail".)

Similarly, Mozart's renowned musical memory, and his reported capacity for hearing a whole symphony "all at once", can be related to computational accounts of powers of memory and comprehension common to us all. Certainly, his musical expertise was superior in many ways. He had a better grasp of the conceptual spaces concerned, and a better understanding -- better even than Salieri's -- of how to explore them so as to locate their farthest nooks and crannies.
(Unlike Haydn, for example, he was not a composer who made adventurous transformations.) But much of Mozart's genius may have lain in the better use, and the vastly more extended practice, of facilities we all share.

Much -- but perhaps not all. Possibly, there was something special about Mozart's brain which predisposed him to musical genius (Gardner, 1983). However, we have little notion, at present, of what this could be. It may have been some cerebral detail which had the emergent effect of giving him greater musical powers. For example, the jazz-improvisation program described in Chapter 7 employed only very simple rules to improvise, because its short-term memory was deliberately constrained to match the limited STM of people. Human jazz-musicians cannot improvise hierarchically nested chord-sequences "on the fly", but have to compose (or memorize) them beforehand. A change in the range of STM might enable someone to improvise and appreciate musical structures of a complexity not otherwise intelligible. But this musically significant change might be due to an apparently "boring" feature of the brain.

Many other examples of creativity (drawn, for instance, from poetry, painting, music, and choreography) are cited in this chapter. They all rely on familiar capacities for their effect, and arguably for their occurrence too. We appreciate them intuitively, and normally take their accessibility -- and their origins -- for granted. But psychological explanations in computational terms may be available, at least in outline.

The role of motivation and emotion is briefly mentioned, but is not a prime theme. This is not because motivation and emotion are in principle outside the reach of a computational psychology. Some attempts have been made to bring these matters within a computational account of the mind (e.g. Boden, 1972; Sloman, 1987). But such attempts provide outline sketches rather than functioning models. Still less is it because motivation is irrelevant to creativity. But the main topic of the book is how (not why) novel ideas arise in human minds.

Chapter 11: Of Humans and Hoverflies

The final chapter focusses on two questions. One is the fourth Lovelace question: could a computer really be creative? The other is whether any scientific explanation of creativity, whether computational or not, would be dehumanizing in the sense of destroying our wonder at it -- and at the human mind in general.

With respect to the fourth Lovelace question, the answer "No" may be defended in at least four different ways. I call these the brain-stuff argument, the empty-program argument, the consciousness argument, and the non-human argument. Each of these applies to intelligence (and intentionality) in general, not just to creativity in particular.

The brain-stuff argument (Searle, 1980) claims that whereas neuroprotein is a kind of stuff which can support intelligence, metal and silicon are not. This empirical claim is conceivably correct, but we have no specific reason to believe it. Moreover, the associated claim -- that it is intuitively obvious that neuroprotein can support intentionality and that metal and silicon cannot -- must be rejected. Intuitively speaking, that neuroprotein supports intelligence is utterly mysterious: how could that grey mushy stuff inside our skulls have anything to do with intentionality?
Insofar as we understand this, we do so because of various functions that nervous tissue makes possible (as the sodium pump enables action potentials, or "messages", to pass along an axon). Any material substrate capable of supporting all the relevant functions could act as the embodiment of mind. Whether neurochemistry describes the only such substrate is an empirical question, not to be settled by intuitions.

The empty-program argument is Searle's (1980) claim that a computational psychology cannot explain understanding, because programs are all syntax and no semantics: their symbols are utterly meaningless to the computer itself. I reply that a computer program, when running in a computer, has proto-semantic (causal) properties, in virtue of which the computer does things -- some of which are among the sorts of thing which enable understanding in humans and animals (Boden, 1988, ch. 8; Sloman, 1986). (This is not to say that any computer-artefact could possess understanding in the full sense, or what I have termed "intrinsic interests", grounded in evolutionary history (Boden, 1972).)

The consciousness argument is that no computer could be conscious, and therefore -- since consciousness is needed for the evaluation phase, and even for much of the preparation phase -- no computer can be creative. I reply that it is not obvious that evaluation must be carried out consciously. A creative computer might recognize (evaluate) its creative ideas by using relevant reflexive criteria without also having consciousness. Moreover, some aspects of consciousness can be illuminated by a computational account, although admittedly "qualia" present an unsolved problem. The question must remain open -- not just because we do not know the answer, but because we do not clearly understand how to ask the question.

According to the non-human argument, to regard computers as truly intelligent is not a mere factual mistake, but a moral absurdity: only members of the human, or animal, community should be granted moral and epistemological consideration (of their interests and opinions). If we ever agreed to remove all the scare-quotes around the psychological words we use in describing computers, so inviting them to join our human community, we would be committed to respecting their goals and judgments. This would not be a purely factual matter, but one of moral and political choice -- about which it is impossible to legislate now.

In short, each of the four negative replies to the last Lovelace question is challengeable. But even someone who does accept a negative answer here can consistently accept positive answers to the first three Lovelace questions. The main argument of the book remains unaffected.

The second theme of this final chapter is whether, where creativity is concerned, scientific explanation in general should be spurned. Many people, from Blake to Roszak, have seen the natural sciences as dehumanizing in various ways. Three are relevant here: the ignoring of mentalistic concepts, the denial of cherished beliefs, and the destructive demystification of some valued phenomena.

The natural sciences have had nothing to say about psychological phenomena as such; and scientifically-minded psychologists have often conceptualized them in reductionist (e.g. behaviourist, or physiological) terms. To ignore something is not necessarily to deny it.
But, given the high status of the natural sciences, the fact that they have not dealt with the mind has insidiously downplayed its importance, if not its very existence. This charge cannot be levelled at computational psychology, however. Intentional concepts, such as representation, lie at the heart of it, and of AI. Some philosophers claim that these sciences have no right to use such terms. Even so, they cannot be accused of deliberately ignoring intentional phenomena, or of rejecting intentionalist vocabulary.

The second charge of dehumanization concerns what science explicitly denies. Some scientific theories have rejected comforting beliefs, such as geocentrism, special creation, or rational self-control. But a scientific psychology need not -- and a computational psychology does not -- deny creativity, as astronomy denies geocentrism. On the contrary, the preceding chapters have acknowledged creativity again and again. Even to say that it rests on universal features of human minds is not to deny that some ideas are surprising, and special, requiring explanation of how they could possibly arise.

However, the humanist's worry concerns not only denial by rejection, but also denial by explanation. The crux of the third type of anti-scientific resistance is the feeling that scientific explanation of any kind must drive out wonder: that to explain something is to cease to marvel at it. Not only do we wonder at creativity, but positive evaluation is essential to the concept. So it may seem that to explain creativity is insidiously to downgrade it -- in effect, to deny it.

Certainly, many examples can be given where understanding drives out wonder. For instance, we may marvel at the power of the hoverfly to fly to its mate hovering nearby (so as to mate in mid-air). Many people might be tempted to describe the hoverfly's activities in terms of its goals and beliefs, and perhaps even its determination in going straight to its mate without any coyness or prevarication. How wonderful is the mind of the humble hoverfly! In fact, the hoverfly's flight-path is determined by a simple and inflexible rule, hardwired into its brain. This rule transforms a specific visual signal into a specific muscular response. The fly's initial change of direction depends on the particular approach-angle subtended by the target-fly. The creature, in effect, always assumes that the size and velocity of the seen target (which may or may not be a fly) are those corresponding to hoverflies. When initiating a new flight-path, the fly's angle of turn is selected on this rigid, and fallible, basis. Moreover, the fly's path cannot be adjusted in midflight, there being no way in which it can be influenced by feedback from the movement of the target animal. (A toy version of such a rule is sketched below.)

This evidence must dampen the enthusiasm of anyone who had marvelled at the psychological subtlety of the hoverfly's behaviour. The insect's intelligence has been demystified with a vengeance, and it no longer seems worthy of much respect. One may see beauty in the evolutionary principles that enabled this simple computational mechanism to develop, or in the biochemistry that makes it function. But the fly itself cannot properly be described in anthropomorphic terms. Even if we wonder at evolution, and at insect-neurophysiology, we can no longer wonder at the subtle mind of the hoverfly. Many people fear that this disillusioned denial of intelligence in the hoverfly is a foretaste of what science will say about our minds too.
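The flavour of that rule can be conveyed in a few lines of code. The following is only a minimal sketch: the linear form and the constants are invented for illustration, standing in for whatever mapping from approach-angle to turn-angle is actually hardwired into the fly.

    # A toy version of the hoverfly's hardwired interception rule (Python).
    # The linear form and the constants are illustrative assumptions, not
    # measured values; the point is only that a fixed, feedback-free mapping
    # from visual input to motor output requires no goals or beliefs.

    def turn_angle(approach_angle_deg: float) -> float:
        """Select a new flight direction from the approach-angle subtended
        by the seen target, which the rule assumes is another hoverfly."""
        A, B = 180.0, -1.0  # hypothetical hardwired constants
        return A + B * approach_angle_deg

    # The rule fires once, when the new flight-path is initiated; there is
    # no mid-flight correction, so a target that is not in fact a hoverfly,
    # or that changes course, is simply missed.
    print(turn_angle(30.0))

Nothing in such a rule looks ahead or monitors the outcome; that is the whole of the demystification.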
A few "worrying" examples can indeed be given: for instance, think of how perceived sexual attractiveness turns out to relate to pupil-size. In general, however, this fear is mistaken. The mind of the hoverfly is much less marvellous than we had imagined, so our previous respect for the insect's intellectual prowess is shown up as mere ignorant sentimentality. But computational explanations of thinking can increase our respect for human minds, by showing them to be much more complex and subtle than we had previously recognized. Consider, for instance, the many different ways (some are sketched in Chapters 4 and 5) in which Kekule could have seen snakes as suggesting ring-molecules. Think of the rich analogy-mapping in Coleridge's mind, which drew on naval memoirs, travellers' tales, and scientific reports to generate the imagery of The Ancient Mariner (Chapter 6). Bear in mind the mental complexities (outlined in Chapter 7) of generating an elegant story-line, or improvising a jazz-melody. And remember the many ways in which random events (the mutations described in Chapter 8, or the serendipities cited in Chapter 9) may be integrated into pre-existing conceptual spaces with creative effect. Writing about Coleridge's imagery, Livingston Lowes said: "I am not forgetting beauty. It is because the worth of beauty is transcendent that the subtle ways of the power that achieves it are transcendently worth searching out." His words apply not only to literary studies of creativity, but to scientific enquiry too. A scientific psychology, whether computational or not, allows us plenty of room to wonder at Mozart, or at our friends' jokes. Psychology leaves poetry in place. Indeed, it adds a new dimension to our awe on encountering creative ideas, for it helps us to see the richness, and yet the discipline, of the underlying mental processes. To understand, even to demystify, is not necessarily to denigrate. A scientific explanation of creativity shows how extraordinary is the ordinary person's mind. We are, after all, humans -- not hoverflies. REFERENCES Abelson, R. P. (1973) The structure of belief systems. In: Computer models of thought and language, eds. R. C. Schank & K. M. Colby (pp. 287-340). Boden, M. A. (1972) Purposive explanation in psychology. Cambridge, Mass.: Harvard University Press. Boden, M. A. (1988) Computer models of mind: Computational approaches in theoretical psychology. Cambridge: Cambridge University Press. Boden, M. A. (1990) The creative mind: Myths and mechanisms. London: Weidenfeld & Nicolson. (Expanded edn., London: Abacus, 1991.) Boden, M. A. (in press) What is creativity? In: Dimensions of creativity, ed. M. A. Boden. Cambridge, Mass.: MIT Press. Brannigan, A. (1981) The social basis of scientific discoveries. Cambridge: Cambridge University Press. Chalmers, D. J., French, R. M., & Hofstadter, D. R. (1991) High-level perception, representation, and analogy: A critique of artificial intelligence methodology. CRCC Technical Report 49. Center for Research on Concepts and Cognition, Indiana University, Bloomington, Indiana. Clark, A., & Karmiloff-Smith, A. (in press) The cognizer's innards. Mind and Language. Davey, A. (1978) Discourse production: A computer model of some aspects of a speaker. Edinburgh: Edinburgh University Press. Dyer, M. G. (1983) In-depth understanding: A computer model of integrated processing for narrative comprehension. Cambridge, Mass.: MIT Press. Elton, M. (1993) Towards artificial creativity. 
Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989) The structure-mapping engine: Algorithm and examples. AI Journal, 41, 1-63.
Gardner, H. (1983) Frames of mind: The theory of multiple intelligences. London: Heinemann.
Gelernter, H. L. (1963) Realization of a geometry-theorem proving machine. In: Computers and thought, eds. E. A. Feigenbaum & J. Feldman (pp. 134-152). New York: McGraw-Hill.
Haase, K. W. (1986) Discovery systems. Proc. European Conf. on AI, 1, 546-555.
Hadamard, J. (1954) An essay on the psychology of invention in the mathematical field. New York: Dover.
Hodgson, P. (1990) Understanding computing, cognition, and creativity. MSc thesis, University of the West of England.
Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (1986) Induction: Processes of inference, learning, and discovery. Cambridge, Mass.: MIT Press.
Holyoak, K. J., & Thagard, P. R. (1989a) Analogical mapping by constraint satisfaction. Cognitive Science, 13, 295-356.
Holyoak, K. J., & Thagard, P. R. (1989b) A computational model of analogical problem solving. In: Similarity and analogical reasoning, eds. S. Vosniadou & A. Ortony (pp. 242-266). Cambridge: Cambridge University Press.
Johnson-Laird, P. N. (1991) Jazz improvisation: A theory at the computational level. In: Representing musical structure, eds. P. Howell, R. West, & I. Cross (pp. 291-326). London: Academic Press.
Karmiloff-Smith, A. (1993) Beyond modularity: A developmental perspective on cognitive science. Cambridge, Mass.: MIT Press.
Klein, S., Aeschlimann, J. F., Balsiger, D. F., Converse, S. L., Court, C., Foster, M., Lao, R., Oakley, J. D., & Smith, J. (1973) Automatic novel writing: A status report. Technical Report 186. Madison, Wis.: University of Wisconsin Computer Science Dept.
Koestler, A. (1975) The act of creation. London: Picador.
Langley, P., Simon, H. A., Bradshaw, G. L., & Zytkow, J. M. (1987) Scientific discovery: Computational explorations of the creative process. Cambridge, Mass.: MIT Press.
Lenat, D. B. (1983) The role of heuristics in learning by discovery: Three case studies. In: Machine learning: An artificial intelligence approach, eds. R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (pp. 243-306). Palo Alto, Calif.: Tioga.
Lenat, D. B., & Seely-Brown, J. (1984) Why AM and EURISKO appear to work. AI Journal, 23, 269-94.
Livingston Lowes, J. (1951) The road to Xanadu: A study in the ways of the imagination. London: Constable.
Longuet-Higgins, H. C. (1987) Mental processes: Studies in cognitive science. Cambridge, Mass.: MIT Press.
Longuet-Higgins, H. C. (in preparation) Musical aesthetics. In: Artificial intelligence and the mind: New breakthroughs or dead ends?, eds. M. A. Boden & A. Bundy. London: Royal Society & British Academy (to appear).
McCorduck, P. (1991) Aaron's code. San Francisco: W. H. Freeman.
Masterman, M., & McKinnon Wood, R. (1968) Computerized Japanese haiku. In: Cybernetic Serendipity, ed. J. Reichardt (pp. 54-5). London: Studio International.
Meehan, J. (1981) TALE-SPIN. In: Inside computer understanding: Five programs plus miniatures, eds. R. C. Schank & C. J. Riesbeck (pp. 197-226). Hillsdale, NJ: Erlbaum.
Michie, D., & Johnston, R. (1984) The creative computer: Machine intelligence and human knowledge. London: Viking.
Michalski, R. S., & Chilausky, R. L. (1980) Learning by being told and learning from examples: An experimental comparison of two methods of knowledge acquisition in the context of developing an expert system for soybean disease diagnosis. International Journal of Policy Analysis and Information Systems, 4, 125-61.
Mitchell, M. (1993) Analogy-making as perception. Cambridge, Mass.: MIT Press.
Perkins, D. N. (1981) The mind's best work. Cambridge, Mass.: Harvard University Press.
Poincare, H. (1982) The foundations of science: Science and hypothesis, The value of science, Science and method. Washington: University Press of America.
Ritchie, G. D., & Hanna, F. K. (1984) AM: A case study in AI methodology. AI Journal, 23, 249-63.
Rowe, J., & Partridge, D. (1993) Creativity: A survey of AI approaches. Artificial Intelligence Review, 7, 43-70.
Schaffer, S. (in press) Making up discovery. In: Dimensions of creativity, ed. M. A. Boden. Cambridge, Mass.: MIT Press.
Searle, J. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences, 3, 473-497.
Sims, K. (1991) Artificial evolution for computer graphics. Computer Graphics, 25(4), 319-328.
Sloman, A. (1986) What sorts of machines can understand the symbols they use? Proceedings of the Aristotelian Society, Supplementary Volume, 60, 61-80.
Sloman, A. (1987) Motives, mechanisms, and emotions. Cognition and Emotion, 1, 217-33. (Reprinted in: The philosophy of artificial intelligence, ed. M. A. Boden. Oxford: Oxford University Press, pp. 231-47.)
Taylor, C. W. (1988) Various approaches to and definitions of creativity. In: The nature of creativity: Contemporary psychological perspectives, ed. R. J. Sternberg (pp. 99-121). Cambridge: Cambridge University Press.
Thagard, P. R. (1992) Conceptual revolutions. Princeton, NJ: Princeton University Press.
Todd, S., & Latham, W. (1992) Evolutionary art and computers. London: Academic Press.
Turner, S. (1992) MINSTREL: A model of story-telling and creativity. Technical note UCLA-AI-17-92. Los Angeles: AI Laboratory, University of California at Los Angeles.

From checker at panix.com Tue Dec 27 23:10:51 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 27 Dec 2005 18:10:51 -0500 (EST)
Subject: [Paleopsych] BBS: Francisco J. Gil-White: Common misunderstandings of memes (and genes)
Message-ID:

Francisco J. Gil-White: Common misunderstandings of memes (and genes)
http://www.bbsonline.org/documents/a/00/00/12/44/bbs00001244-00/Memes2.htm

The promise and the limits of the genetic analogy to cultural transmission processes

fjgil at psych.upenn.edu ; http://www.psych.upenn.edu/~fjgil/
Assistant Professor of Psychology, University of Pennsylvania, 3815 Walnut Street, Suite 400, Philadelphia PA 19104-6196

Word Count: Abstract = 248 words; Main text = 12,313; References = 1,154; Entire Text = 13,903.

Short Abstract: `Memetics' suffers from conceptual confusion and not enough empirical work. This paper attempts to attenuate the former problem by resolving the conceptual controversies. I criticize the overly literal insistence--by both critics and advocates--on the genetic analogy, which asks us to think about memes as bona-fide replicators in the manner of genes, and to see all cultural transmission processes as ultimately for the reproductive benefit of memes, rather than their human vehicles.
A Darwinian approach to cultural transmission, I argue, requires neither. It is possible to have Darwinian processes without genes, or even close analogues of them. The cognitive mechanisms responsible for social learning make clear why.

Long Abstract: `Memetics' suffers from conceptual confusion and not enough empirical work. This paper attempts to attenuate the former problem by resolving the conceptual controversies, which requires that we not speculate about cultural transmission without being informed about the cognitive mechanisms responsible for social learning. I criticize the overly literal insistence--by both critics and advocates--on the genetic analogy, which asks us to think about memes as bona-fide replicators in the manner of genes, and to see all cultural transmission processes as ultimately for the reproductive benefit of memes, rather than their human vehicles. A Darwinian approach to cultural transmission, I argue, requires neither. It is possible to have Darwinian processes without genes, or even close analogues of them. The insistence on a close genetic analogy is in fact based on a poor understanding of genes and evolutionary genetics, and of the kinds of simplifications that are legitimate in evolutionary models. Some authors have insisted that the only admissible definition for a `meme' is `selfish replicator.' However, since the only agreement as to the definition of `meme' is that it is what gets passed on through non-genetic means, only conceptual confusion can result from trying to make a hypothesis into a definition. This paper will argue that, although memes are not, in fact, `selfish replicators,' they can and should be analyzed with Darwinian models. It will argue further that the `selfish meme' theoretical calque imported from genetics does much more to distort than enlighten our understanding of cultural processes.

KEYWORDS: Cultural transmission, culture, evolutionary genetics, meme, memetics, replicator, social learning.

Given an incredibly simplistic notion of genes, memes are not in the least like genes... One problem with interdisciplinary work is that any one worker is likely to know much more about one area than any of the others. Geneticists know much more about the complexities of genetics than of social groups. Conversely, anthropologists and sociologists tend to be well-versed in the details of social groups. To them genetics looks pretty simple. --Hull (2000:46)

Many of the claims made about memes could be false because the analogy to genes has not proven productive. --Aunger (2000:8)

Introduction

Should we demand that `memes' be exactly like genes if we are to apply Darwinian tools of analysis to culture? No. The formal similarities between genes and what (after Dawkins 1989[1976]) are now called `memes'--the units of cultural transmission and evolution--suggest that cultural transmission processes are ripe for Darwinian analysis. A vigorous debate is emerging over how to think about `memes' (for a recent compendium of views see Aunger 2000). This is an evolutionary but also a cognitive issue, because memes are stored in human brains. New fields will always use analogies and borrowed yardsticks, and these can be a source of fresh insights, but also cause misunderstanding. The yardstick which requires `memes' to be essentially identical to genes if Darwinian analyses are to apply is a source of much confusion.
This regrettable error is advanced by both critics and defenders of `memetics' and--to boot--the specific arguments are often based on a poor understanding of genes and evolutionary genetics. The standard chosen is therefore not only erroneous but would indict evolutionary genetics as well (genes, it turns out, are not sufficiently like `genes' either!).

There are too many insistent definitions of `meme'--typical in a new research program, given that careers (especially in social science) are often boosted by getting particular definitions adopted. The prize is large because the term `meme' is on everybody's lips. If definitions were advanced only with conceptual progress in mind, this would be fine. But here, more than in other fields, the various protagonists must be aware that the contest is memetic, yielding a tendency to produce `catchy' definitions that `sell well' at the expense of conceptual advance and scientific utility.

The definition of meme as a `replicator' is very catchy. Introduced by Dawkins (1989[1976]), and developed by Dennett (1995) and Blackmore (1999, 2000), it has helped mobilize our intuitions for population-driven processes involving genes, which are bona-fide replicators producing perfect descendant copies of themselves. As a heuristic device there is nothing wrong with this. But as a statement of what Universal Darwinism is--i.e. find a replicator, then apply Darwinism--it is a garden path. And a tortuous one. Consider that Blackmore (2000:26) says "memes are replicators," but on the preceding page claims that, "As long as we accept that people do, in fact, imitate each other, and that information of some kind is passed on when they do, then, by definition, memes exist." By definition? By definition `replication' takes place when perfect copies are produced, not when "information of some kind [my emphasis] is passed on..." Proponents of memetics who uphold the `gene standard' must weaken and mutilate the meaning of `replication'--which they take to result from `imitation'--in order to claim that memes are `replicators' and that, since they are, Darwinism applies. They insist, therefore, not on the concept `replication' but on the word, the use of which is assumed to be magically indispensable to the possibility of Darwinian science. But this is absurd.

Critics of memetics who also uphold this same `gene standard,' on the other hand, stick closely to the definition of `replication' as `perfect copying,' and this is good (why butcher the language?). However, they fetishize the concept, for they charge that the poor copying fidelity of memes--i.e. memes are not, after all, replicators--supposedly makes Darwinian analyses of culture inapplicable in principle. In my view, these critics, as much as the proponents, are chasing a mirage. Replication is not necessary for cumulative adaptations through selective processes (Boyd & Richerson 2000:153-158), and is therefore not the standard both critics and proponents are looking for. Replication is a red herring.

The `selfish meme,' like its ancestor the `selfish gene,' is another catchy idea. It answers the question cui bono? by saying that the unit being transmitted--the meme--is the `entity' which `benefits' in the cultural selective process. Again, this began with Dawkins (1983:109), who stated that a meme is "a unit of cultural inheritance...naturally selected by virtue of its...consequences on its own survival and replication," and was again developed by Dennett and Blackmore. In this picture, "We humans...
have become just the physical `hosts' needed for the memes to get around. This is how the world looks from a `meme's eye view'" (Blackmore 1999:8). In a manner parallel to the `gene's eye view,' we are here supposed to interpret every meme that succeeds at proliferating as having done so by dint of being well designed for proliferation. Cultural selection is reduced to the continuous editing of meme content until memes end up optimally designed for colonizing human brains. I will argue that only some rather specialized kinds of memes satisfy this analytical calquing from genetics to culture. But, again, this does nothing to wreck the applicability of Darwinian analysis or the usefulness of thinking in terms of memes--it merely indicts the fetishizing of the genetic analogy. Reducing cultural transmission to `selfish memes' requires that we ignore much of social-learning cognition and miss most of the picture.

It should be obvious by now that I feel no obligation to accept Dawkins' (1989), Dennett's (1995), and Blackmore's (1999) definition of `meme' as selfish replicator. A recent compendium of views (Aunger 2000) makes it clear that neither do many others. It is best not to insist on a research program that rises or falls on whether memes defined as selfish replicators exist. That is a careerist semantic game that tries to assume or impose as a definition something that must be investigated, and such a game does not advance the science of cultural transmission--a science that will be carried out anyway, because we must. Most of us seem to accept the Oxford English Dictionary's definition, which says: `an element of culture that may be considered to be passed down by non-genetic means.'[1] So `selfish replicator' I will treat as a hypothesis about what the stuff that gets transmitted through non-genetic means is like.

The relevant questions, then, are: (1) does this stuff look like a selfish replicator? (2) If not, does this really make Darwinian analyses of culture impossible? Related questions are: (3) if memes don't replicate, is it impossible to find the boundaries of memes? and (4) can we import from biology, willy-nilly, the `selfish gene' idea? I will answer "no" to each of these questions. But I will still call what is transmitted culturally a `meme,' and so--I will bet my house--will everybody else. The term `meme' has already been selected for, so rather than forcing its meaning to coincide with a particular hypothesis about cultural transmission, let us do some science.

I. What is required for genetic cumulative evolution?

Darwinian systems involve simple and blind algorithmic processes that nevertheless produce gradual accumulation of (sometimes very complex) adaptive design. They have three main requirements: information must be able to leave descendant copies (inheritance), new information should be routinely generated by some process (mutation), and there should be forces responsible for causing some items of information to leave more descendants than others (selection). Genes satisfy all three. They are inherited through reproduction; new genes are routinely created because of occasional copying mistakes, or `mutations', during DNA duplication; and a gene, through its effect on its carriers, affects the probability that it will increase in number.
Thanks to selection and inheritance, when a particular gene causes increased reproductive success, more copies of it are passed on, and its relative frequency in the population increases (absent frequency-dependent effects, eventually the whole population will have it). Thanks to mutation, new alternative genes get generated which occasionally amount to improvements, allowing the population to continue to evolve. Cumulative genetic adaptations are possible because (1) genetic mutations typically introduce incremental rather than massive changes, and (2) the mutation rate for genes is low. It is these latter two requirements for cumulative evolution in genetic systems that inform some scholars' intuitions that `replication'--that is, high-fidelity copying--is crucial to cumulative evolution through memes as well, which intuitions then damn Darwinian approaches to culture if memes are found not to replicate. For this reason these two requirements deserve further attention here.

Massive change is by definition the opposite of the accumulation of design, where each successive design change is a minor alteration on the margins of the previous template. But should we expect organic evolution to consist of small, incremental changes? Yes. The space of maladaptive designs is vast relative to the space of adaptive ones, so random changes to any current design (and mutations are random) are unlikely to cause adaptive improvements. Imagine that a monkey types a character at random as I am writing this essay. Will the essay improve? Without vanity, I can say that the chances are exceedingly low. A random typo is unlikely to yield English, let alone better English. But should the monkey press a key which launched a program to rearrange all of the letters in my essay, then he would be infinitely less likely to improve it--slim as his chances were anyway. In population-driven processes, for a novelty to last longer than an instant, it is typically constrained to cause a small modification.

Mutations must also be infrequent because, unless designs are relatively stable across time, we cannot get cumulative evolution. Suppose the offspring of A's are mostly non-A's. Even if A reproduces better than its competitors B and C, this cannot have an evolutionary consequence, because the information responsible for A's reproductive prowess is almost always lost after reproduction. On the contrary, if an A typically begets another A, then A's higher reproductive success will soon make everybody in the population an A (absent frequency-dependent effects). Later, when a rare mutation results in a slight improvement to `A design'--let us call the new design A′--these A′ mutants will outreproduce mere A's and the population changes again (but only slightly).

This covers the intuitive basics of genes as replicators allowing for cumulative genetic evolution. But how similar to genes are memes? Well, memes certainly have the properties of inheritance, mutation, and selection. We constantly acquire and learn things from each other through social interaction, so in a broad sense at least it makes sense to say that the information I possess can create a `descendant copy' in you (inheritance). People can make mistakes when acquiring information, and can also have stupid or bright novel ideas, which leads to new items of information (mutation).
And some ideas are more popular than others, so they are copied more, stored longer, and rebroadcast more often, which in turn means they leave more descendants than competing ideas (selection). What makes some ideas more `popular' than others is the properties of human social-learning psychology. This is not the only force acting to favor certain memes over others, but it is a very important one and I shall restrict myself to it here.

So much for intuitively stated formal similarities. The devil, as usual, lurks in the details. To many critics, the dangerous phrase above is "in a broad sense...information can create a `descendant copy.'" How broad? How similar must ancestor and descendant memes be? Some assert that selectionist approaches cannot work because memes are not true replicators, making cumulative evolution impossible (e.g. Sperber 1996; Boyer 1994). Others, however, have not considered this a problem and have proceeded to build Darwinian selectionist models that in their fundamental assumptions are quite similar to those used in evolutionary genetics, but adapted for cultural idiosyncrasies (e.g. Boyd & Richerson 1985; Lumsden & Wilson 1981; Cavalli-Sforza & Feldman 1981; for a review, see Feldman & Laland 1996). As Laland & Odling-Smee (2000:121) put it: "For us, the pertinent question is not whether memes exist...but whether they are a useful theoretical expedient." Their critics, however, will counter that such models do not help us explain human cultural processes, because the units employed are nothing like what exists in real-life cultural transmission. To find out who is right, we need first to examine closely whether it matters that memes are poor replicators.

II. Do memes mutate too much?

To Dan Sperber (1996), contagious pathogens such as viruses are a better analogy than genes for understanding the spread of cultural information. Populations of brains are infested by successive `epidemics' of memes (which Sperber invariably calls `representations'--a favorite term in the cognitive literature). He cautions, however, that the analogy can be taken only so far.

...whereas pathogenic agents such as viruses and bacteria reproduce in the process of transmission and undergo a mutation only occasionally, representations are transformed almost every time they are transmitted... --Sperber (1996:25)

...recall is not storage in reverse, and comprehension is not expression in reverse. Memory and communication transform information. --Sperber (1996:31)

For example, does anybody ever retell a story exactly? No, and this is Sperber's point.

In the case of genes, a typical rate of mutation might be one mutation per million replications. With such low rates of mutation, even a very small selection bias is enough to have, with time, major cumulative effects. If, on the other hand, in the case of culture there may be, as Dawkins [1976] acknowledges, `a certain "mutational" element in every copying event,' then the very possibility of cumulative effects of selection is open to question. --Sperber (1996:102-103)

It is important to see exactly what the argument is. Genes are very stable across generations because they very rarely make copying errors during duplication--hence, for the most part, they replicate. As observed above, this allows cumulative genetic adaptations to emerge, because small, cumulative changes can only be added if there is an overall template which remains--for the most part--stable. There is nothing absolute about the acceptable rate of mutation, of course.
Rather, this is always relative to the strength of selection. For example, even if there is a moderate rate of mutation, cumulative evolution will still happen if the selective process culls suboptimal variants fast enough that the favored design is stable at the populational level, and from generation to generation. G. C. Williams (1966) made this principle famous in his definition of an `evolutionary gene,' which is "any hereditary information for which there is a favorable or unfavorable selection bias equal to several or many times its rate of endogenous change." This definition was taken willy-nilly by Dawkins and applied to his definition of the `meme,' and was recently stated very clearly by Wilkins (1998:8):

A meme is the least unit of sociocultural information relative to a selection process that has favorable or unfavorable selection bias that exceeds its endogenous tendency to change.

Sperber is accepting this move to assume (1) that `replicators' are the things to look for; (2) that Dawkins' reinterpretation of Williams gives the universal definition of a replicator; and (3) that Darwinian analyses will apply to memes only if they can satisfy this definition. In fact, Sperber eagerly forces the issue by ruling that any other conceptualization of `the meme' is trivial (Sperber 2000:163). His stance is therefore that cumulative adaptations through cultural selection are possible only if we can find bona-fide cultural replicators. But memes in fact mutate in every single act of transmission, so he concludes that cultural selection cannot conceivably act fast enough, because the meme's dizzying rate of endogenous change creates a ceiling effect (Atran 2001 echoes this argument). Sperber therefore believes that we must understand how cognitive processes of information storage and retrieval cause mutations in particular and systematic directions. With this information, we can build (orthomemetic?) models of directed mutation rather than selectionist models of cumulative change (Sperber 1996:52-53, 82-83, 110-112).

There is some irony in this. Hull (2000:47) quotes the above definition by Wilkins approvingly as a starting point for a science of memetics that he optimistically believes to be possible, although he fully expects "howls of derision" to come from unreasonable critics who will accuse this definition of not being sufficiently "operational." Something very different has already happened, however! A prominent critic of selectionist approaches to culture--Sperber--has eagerly embraced that very definition in order to explain why selectionist approaches to culture are supposedly impossible. It would seem as though either Hull or Sperber must be wrong, for they agree on how to define the units of cultural processes that would be legitimately Darwinian, but they reach exactly opposite conclusions as to whether human culture passes or fails the test. However, I believe they are both mistaken, because they are sparring on the wrong battlefield. The standard chosen, rather than enlightening us, blinds us to the general requirements for a Darwinian system by insisting narrowly on the terms of one particular solution to them--the genetic one--as if this were the only possibility.

I shall accept Sperber's point that the mutation rate for memes is 1: they mutate in every act of transmission. And I will agree, too, that the mutations are often systematically biased. But this is neither here nor there. What matters is how big these mutations are, and how strongly biased in particular directions, as we shall see.
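Williams' criterion can be given a concrete, numerical face. The sketch below is a toy haploid model in Python (the parameter values are arbitrary illustrations, not estimates for any real gene or meme): selection multiplies a favored variant's frequency each generation, while endogenous change erodes it, and the variant spreads only when the selection bias s exceeds the rate of endogenous change u.

    # Toy haploid model of Williams' `evolutionary gene' criterion (Python).
    # One favored variant A (fitness 1+s) competes against the field
    # (fitness 1); each generation a fraction u of A-carriers lose A
    # through endogenous change. All parameter values are illustrative.

    def final_frequency(s: float, u: float, p0: float = 0.01,
                        generations: int = 2000) -> float:
        p = p0
        for _ in range(generations):
            p = p * (1 + s) / (1 + s * p)  # selection on A
            p *= (1 - u)                   # endogenous change away from A
        return p

    for s, u in [(0.05, 0.001), (0.05, 0.05), (0.001, 0.05)]:
        print(f"s={s}, u={u} -> frequency of A = {final_frequency(s, u):.3f}")

    # A spreads to high frequency only when the selection bias s
    # comfortably exceeds the endogenous change rate u; when u matches
    # or exceeds s, A is held near zero despite its fitness advantage.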
III. `Replication' is a red herring

Sperber's argument may seem intuitively appealing, but I think it is specious. Near-perfect copying fidelity is certainly important in genetic selection, but it is not a requirement for any Darwinian system. If the high rate of mutation is not the meme's only distinction, then perhaps its other idiosyncrasies make it possible for regularly imperfect--or even invariably imperfect--meme-copying to support the emergence of cumulative adaptations. I shall make the case with a toy example. But first, a few preliminaries.

In genetics, a `locus' is the physical location of a `gene' on a chromosome. This is where the information `for something' can be found. If we are talking about, say, the `eye-color' locus, then the gene found there may be the `brown-eye' gene, or the `blue-eye' gene, and so forth. What is the analogue in memetic transmission? For example, imagine something like, say, a tennis-serve `locus'. Whatever is in your tennis-serve locus causes your behavior when beginning a new point in tennis. There are in principle a vast number of different behaviors that people could store at the tennis-serve locus (just as there are many different sequences of nucleotides that may be stored at the chromosomal eye-color locus). Waving hello to your mom, or baking bread, would be ruled illegal by the judges, but in principle this does not prevent you from storing such information at that locus (just as a random and useless sequence of nucleotides could, in principle, be stored at the eye-color locus).

It hardly matters that the tennis-serve locus may not be physically located in the same piece of brain for every individual. To insist on this is to push the genetic analogy to an absurd extreme, where it begins to straitjacket thought rather than inspire insights. The relevant and crucial similarity is functional, not physical: if individuals recognize that an item of information becomes relevant when, in a game of tennis, a new point is beginning, then the `cultural locus' has all the requisite functional similarity to the genetic locus that we need. In cognitive terms, the cultural `locus' is a tag plus a retrieval function--it is a matter of categorization rather than physical location in the brain. The information retrieved at the start of a new tennis point is that which I tag as `tennis serve'. Waving to my mom or baking a cake have not been tagged this way (even though, in principle, they could be), and, since they have not been, they do not compete to `occupy' my tennis-serve `locus.' The true alleles of my current serve, therefore, are other behaviors which I also tag as `tennis serves' because some individuals in the population perform them in the context of beginning a point in a tennis match. I may choose to acquire one of these later on, and in so doing will replace my current serve.

These obvious functional similarities readily dismiss the criticism that, because memes do not have the same kind of physical reality as genes, selectionist approaches to culture are a nonstarter. We are not talking here of the duplication of exact neuronal structures analogous to the duplication of exact nucleotide sequences in DNA, but of the duplication of a certain behavior, understood to belong in a certain context, and in competition with other behaviors also understood to be candidates for the same context. The lack of similarity in the material basis of genes and memes is not a problem.
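The tag-plus-retrieval picture is easy to render in code. The sketch below is only an illustration--the tags and behaviors are invented--but it shows the functional point: a cultural `locus' is a category label under which one behavior is stored and produced on cue, and the `alleles' are the rival behaviors the population files under the same tag.

    # A cultural `locus' as tag plus retrieval (Python): categorization,
    # not physical location. All tags and behaviors are invented examples.

    my_loci = {
        "tennis-serve": "flat serve, low toss",  # current occupant of the locus
        "greeting": "wave hello",
    }

    # Rival behaviors the population files under the same tag are the
    # true `alleles' of my current serve:
    population_alleles = {
        "tennis-serve": ["top-spin serve with a jump",  # Bob's
                         "kick serve",
                         "flat serve, low toss"],
    }

    def retrieve(tag: str) -> str:
        """Produce the stored behavior when the tagged context arises."""
        return my_loci[tag]

    def adopt(tag: str, behavior: str) -> None:
        """Acquiring a rival allele replaces the current occupant."""
        my_loci[tag] = behavior

    adopt("tennis-serve", population_alleles["tennis-serve"][0])
    print(retrieve("tennis-serve"))  # now Bob's serve occupies the locus

Waving at mom could, in principle, be filed under `tennis-serve' too; nothing but the tagging prevents it, which is exactly the point of the analogy.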
A. The right mix of stability and variation

To see whether a meme's inability to replicate properly makes cumulative cultural adaptations impossible, we must examine the full spectrum of theoretical possibilities. Suppose that in our population, Bob's serve is the most attractive, and seeing it performed gets people excited to make changes in their own tennis-serve loci. There is a continuum of different things that could happen, bounded by two extremes. At one extreme--replication--people acquire precisely the same content that is in Bob's own locus. For example, you acquire the exact same top-spin serve with a slight jump that Bob favors. At the other extreme--causation of random changes--people rewrite the information in their locus such that it typically bears no resemblance to Bob's serve. Here, for example, you might `write' into your tennis-serve locus the idea that you should wave at mom when up to serve.

Please note that I am not following the information in the brain here, although of course it is necessary for the process. What I am keeping track of is the actual behaviors, and I am completely ignoring the question of what particular information content in the brain may be causing them. The latter is not always unimportant (Gil-White 2002a), but it does not concern me in the present analysis, and it is irrelevant to the points I will make. When I talk about `replication failure,' what I mean here is the inability of the copier to perform a serve that is identical to Bob's.

Let us look first at the causation of random changes. This will look silly, but we cannot gain the proper insights until we examine the full spectrum of possibilities. As silly as it sounds, suppose I put `wave at mom' in my tennis-serve locus after watching Bob's top-spin serve. You will put randomly different information in your own tennis-serve locus, typically just as dissimilar to Bob's serve. What will happen? We are assuming that it is the content (i.e. the sequence of motions) involved in Bob's serve that makes it attractive, in turn precipitating changes in the tennis-serve loci of other people. Given this, I myself (who now wave at mom when I `serve')--and all others who randomly changed the information at their tennis-serve loci after watching Bob--are not similarly beacons of change; our new `tennis serves' look nothing like Bob's, and they therefore get nobody excited (and mostly irritate the judges because they are not admissible). Bob's serve has not become more common, nor has the mean serve of the population moved in the direction of Bob's serve. Since evolution is about statistical changes in a population, the fact that this process does not produce reliable directional movement in the population's mean serve implies that it cannot lead to cumulative design changes. After all, the first requirement for cumulative adaptive design is the possibility of directional change.

Now consider the other extreme. This will look silly too. Here, watching Bob's serve produces verbatim replicas in observers' tennis-serve loci. People copy perfectly, so there is never any mutation--not ever. What happens? Because Bob's is the most attractive serve, all of the people who now have Bob's serve in turn become models for other people, who again copy the serve precisely, and so forth. Bob's serve spreads until everybody is serving identically. Here, too, selection cannot lead to cumulative design changes, because the serves are all identical to Bob's.
Everybody will spend the future serving exactly like Bob. No other serves will ever emerge, because nobody ever makes a copying mistake.

We see that at either end--random changes, or perfect replication (100% copying fidelity)--there can be no accumulation of adaptive design. So this can occur only somewhere `in the middle', where descendant serves are relatively similar to the `parent' stimulus, but somewhat different. There are two ways in which this can happen: (1) descendant serves are always identical to the parent, except that every once in a long while there will be an accidental difference; or (2) descendant serves are always accidentally different from the parent serve, but scatter closely around it, with perfect copying as the average. In both cases we get more attractive future serves by making marginal changes to Bob's, which in turn makes the marginally improved serve the new model (and this is what allows for cumulative adaptation). I examine each in turn.

(1) Copying involves mistakes only once in a long while. Here the information `written' in a person's tennis-serve locus is a pristine replica of the `parent' serve. There is a very small probability of replication failure so, very rarely, a random modification results. Such modifications will typically make Bob's serve less effective, because a tennis serve is a complex behavior where many variables must be kept within narrow ranges to ensure success. I am assuming that only effective serves are attractive, and so most random changes will result in less attractive serves. But very, very occasionally, a random copying mistake begets a more effective--and therefore more attractive--serve, which then displaces Bob's as people now begin making perfect replicas of the improved serve. Many iterations of this cycle will lead to ever better serves. I have just described a process of accumulation of adaptive design emerging from cultural transmission that is exactly parallel to cumulative genetic evolution by natural selection. Sperber (1996) claims that in order for selection to produce cumulative design in cultural transmission, the process should look like this. But let us take a look at a rather different process.

(2) Copying always involves mistakes, but around an average of perfect accuracy. This process is illustrated below in fig. 1. Every time somebody sees Bob's top-spin serve, the goal is to copy it exactly, but there is always some error, and thus there is almost never a perfect copy. However, the errors are relatively small and not biased in any particular direction, so that Bob's serve is obviously the template for all descendant serves. In this scenario, replication is the occasional exception. However, the population's mean serve is still Bob's, even if no individual serve is a true replica. The errors amount to a constant introduction of modest variations, from which a serve superior to Bob's will eventually emerge; this serve will then become the new model--the new template to copy--for all of us. When that happens, this new serve becomes the new mean of the population, with a new cloud of error around it. If we concentrate on the population mean, it is clear that cumulative design change is taking place. This is not like genetic evolution by natural selection (where replication is very high fidelity), but it is certainly the accumulation of adaptive design due to selection (and it is faster than natural selection, because variants are introduced in every copying attempt).

Fig. 1. Copying with modest errors. Think of the units on the X-axis as being very small, so that the distance between the left-most bar and the right-most bar is not too great--that is, we are assuming that all serves produced are minor deviations from the target serve (which is Bob's).
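The process in fig. 1 is easy to simulate. The sketch below is a toy model only: the one-dimensional `serve space', the quality function, and every parameter value are simplifying assumptions. Each copier misses the current model by a zero-mean error, so no serve is ever a true replica; yet because a copy displaces the model whenever it is a better serve, the population climbs steadily toward the optimum.

    import random

    # Toy model of fig. 1 (Python). A serve is a point on a line; quality
    # peaks at OPTIMUM. Copiers reproduce the current model with zero-mean
    # error (replication fails in every attempt), and a copy becomes the
    # new model only if it is a better serve. All numbers illustrative.

    OPTIMUM = 10.0

    def quality(serve: float) -> float:
        return -abs(serve - OPTIMUM)

    def evolve(model: float = 0.0, copiers: int = 50,
               error_sd: float = 0.5, generations: int = 40) -> float:
        for _ in range(generations):
            copies = [random.gauss(model, error_sd) for _ in range(copiers)]
            model = max(copies + [model], key=quality)  # best serve is the new model
        return model

    random.seed(1)
    print(f"final model serve: {evolve():.2f}")  # ends near OPTIMUM

Nothing here ever replicates--every copy differs from its model--yet the population mean moves reliably toward the optimum, which is all that cumulative adaptation requires.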
In the second case just considered, replication rarely if ever happens; the norm is replication failure. This is a good summary description of the assumptions that go into many of the selectionist models that Boyd & Richerson (1985) introduced in their approach. This condition of replication failure as the norm is what Sperber claims renders cumulative adaptations from cultural transmission impossible. But we have just seen that cumulative adaptation under such a condition is certainly conceivable, and this lays bare that replication itself is a red herring. It is neither here nor there. What cumulative adaptation requires is (1) sufficient inaccuracy in the production of descendants that superior variants can occasionally emerge; and (2) sufficient accuracy that, at the populational level (the mean), we can speak of meaningful, directional change (cf. Boyd & Richerson 2000).

B. Mutations may have consistent biases

But what about directed mutation? This idea posits an attractor, created by a psychological bias, towards which serves will tend, because the copying mistakes we make are on average in the direction of the attractor. That is, the mean of our copying errors will not be zero. Contra Sperber, this is still not a problem--at least not in principle. The attractor could be anywhere at all, but we can get our bearings by again considering the two extremes, namely, (1) when the attractor is the optimally effective serve, and (2) when it is in a direction opposite to the optimally effective serve.

(1) The mutation attractor is the optimally effective serve. This case is illustrated below in fig. 2. As before, suppose that every person tries to copy Bob's serve exactly, but fails within a cloud of error with mean zero. A few people, however, can foresee the kinds of modifications that would make Bob's serve even better, and attempt these. This means that the actual mean `error' for the whole population will be skewed by these innovators in the direction of the optimal serve. Does this prevent cumulative adaptive design? No. On the contrary, it speeds up the process that takes the population to the optimal serve, because mutations in this direction are slightly more likely. The design is cumulative because foresight does not extend to the optimal serve itself, merely to slight modifications of observable serves that take them in that direction.

Fig. 2. Adaptive mutation bias. In this case the population mean is closer to the optimum, after copying, than is Bob's.

(2) The mutation attractor is in a direction opposite to the optimal serve. This case is illustrated below in fig. 3. This could mean, for example, that there is something about the way it feels natural to move our bodies that makes us more likely to make errors in a direction away from the optimally effective serve. But the operative phrase is more likely: it does not mean that copying errors in the direction of a better serve never happen. Thus, what happens is that the mean copying effort results in a serve somewhat lower in quality than Bob's, but if the cloud of copying error occasionally produces a serve better than his, this serve will become the new target for copiers.
This results in a new population mean that is again worse than the new target serve, but not worse than the previous mean serve in the population. Thus, the population mean will have moved closer to the optimal serve, despite the fact that the mutation bias always makes it lag behind its current target.

Fig. 3. Maladaptive mutation bias. In this case the population mean is further away from the optimum, after copying, than is Bob's serve. However, some copiers will make mistakes to the right of Bob, and since this yields a better serve, it will become the model for the next generation.

Only when the attractor is so far away that it prevents the emergence of any variants better than Bob's serve would the emergence of cumulative design be short-circuited, as shown below in fig. 4.

Fig. 4. Overly strong maladaptive bias. Due to a strong mutation attractor, the population mean is so far away from Bob's serve in a maladaptive direction that better serves will practically never appear.

The last example shows that, when directed mutation occurs, it should be modeled together with selection. The direction of the system will then result from the algebraic sum of all the forces considered. We do not have to decide whether mutation or selection is the force to consider in our modeling exercises. For problems having the structure just considered, Sperber will be right that constant, directed mutation prevents cumulative adaptation only if and when such mutation is (1) not towards the optimum and (2) of sufficient strength. This is an empirical question, and it may be true for some domains and not for others. But we will not find the answer under the armchair. (A toy simulation of these biased-copying cases follows below.)

But do we have empirical examples of cumulative cultural adaptations through selection? Yes. Other than tennis serves, we could name tennis racquets. In fact, we could name anything in the large domain called `technology'. Here design has obviously accumulated gradually. And even here Sperber's dictum that replication is a limiting case, rather than the norm, is correct (except in the case of our very modern manufacturing techniques). One can also point to institutions. Certainly institutions have been `constituting' themselves on paper for a long time, but institutional organization pre-dates paper. Moreover, though the rules of an institution may be written, institutional behavior is always in the (sometimes very) flexible neighborhood of what is written down, rather than a rigid instantiation of it. In this sense--as living, breathing organisms--institutions are always imperfectly copied (for an example, consider that the Mexican political constitution is--on paper--almost a replica of the American, on which it was modeled). And yet institutions accrete cumulative changes. The evidence that they do so adaptively is in the incontrovertible fact that complex societies have outcompeted simple ones, and in the fact that different institutional arrangements have been the key to success in the competition between different complex societies (McNeill 1963, Landes 1998, Diamond 1997, Wright 2000). Technological and institutional change are not the only examples, merely the most obvious ones. But they occupy much of what is important in cultural evolution, so they make the case that selectionist approaches will be quite significant to explaining culture.
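Returning to the biased-copying cases of figs. 2-4: the same toy model used for fig. 1 makes the point once the copying error is given a non-zero mean (again, every parameter value is an illustrative assumption). A modest bias away from the optimum only slows the climb, because the error cloud still occasionally throws up a serve better than the current model; only an overwhelming bias, as in fig. 4, stalls the process.

    import random

    # Figs. 2-4 in toy form (Python): the copying error now has mean
    # `bias'; negative values pull away from the optimum at 10.0.
    # All parameter values are illustrative assumptions.

    OPTIMUM = 10.0

    def quality(serve: float) -> float:
        return -abs(serve - OPTIMUM)

    def evolve_biased(bias: float, model: float = 0.0, copiers: int = 50,
                      error_sd: float = 0.5, generations: int = 60) -> float:
        for _ in range(generations):
            copies = [random.gauss(model + bias, error_sd)
                      for _ in range(copiers)]
            # A copy displaces the model only if it is a better serve:
            model = max(copies + [model], key=quality)
        return model

    random.seed(2)
    for bias in (0.3, -0.3, -2.0):  # adaptive, mildly maladaptive, overwhelming
        print(f"bias={bias:+.1f} -> final model {evolve_biased(bias):.2f}")

    # The mild maladaptive bias still reaches the optimum, merely more
    # slowly; the overwhelming bias (fig. 4) leaves the model stuck near
    # its starting point, as better variants practically never appear.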
Given that cumulative cultural adaptations do not require memes to replicate, replication was not the litmus test for Darwinian analyses of culture. And if my critique of gene-analogy fetishism among the critics of `memetics' is acceptable (for a mathematical demonstration of my core arguments, see Henrich and Boyd 2002), it simultaneously refutes the arguments of proponents such as Dawkins, Dennett, and Blackmore, who fetishize the alleged importance of `replication' for opposite reasons.

IV. `Imitation' is another red herring

A related point can be made about `imitation' (i.e. what we do when we copy Bob's serve). Blackmore insists on imitation as the memetic process. But she would also like to count a narrative, for example, as a `meme'--and narratives are not transmitted by imitation. Blackmore (1999:6) gets around this by corrupting the meaning of `imitation' just as she did with `replication':

Dawkins said that memes jump from 'brain to brain via a process which, in the broad sense, can be called imitation' (1976:192). I will also use the term 'imitation' in the broad sense. So if, for example, a friend tells you a story and you remember the gist and pass it on to someone else then that counts as imitation.

With such a loose definition of `imitation,' a reader such as myself cannot understand what standard Blackmore upholds when she insists that `imitation' is what identifies the subject matter of `memetics' (cf. Plotkin 2000:76-77). But this is another red herring anyway. We need a handle on the social-learning cognitive mechanisms which, in combination with individual-learning processes, are responsible for affecting the distribution of memes (cf. Plotkin 2000; Laland & Odling-Smee 2000). Imitation is important, but we don't need to fixate on it. Different domains will involve different processes and will need mid-level theories particular to them, but "In every case the Darwinian population approach will illuminate the process..." (Boyd & Richerson 2000:144).

The imitation of a motor act, the acquisition of a native language, and learning one's culture-specific social constructions have different developmental trajectories...Each is based on different psychological mechanisms. It is almost certainly the case that the characteristics each displays in terms of fecundity, longevity, and fidelity of copying are also different in each case, and different precisely because each is based on different mechanisms. The suggestion that "we stick to defining the [sic] meme as that which is passed on by imitation" Blackmore (1998), if taken literally, is an impoverishment of memetics for reasons of wanting to maintain copying fidelity. --Plotkin (2000:76)

The insistence on imitation, as Plotkin suggests, comes precisely from this obsession with replication (copying fidelity). Imitation, narrowly (i.e. properly) understood, is the mechanism that strikes some observers, Blackmore included, as closest to the production of carbon copies. So they insist on the word `imitation' because it confers the cachet of `replication,' which in turn supposedly grants the exclusive legitimacy to undertake Darwinian analyses. Absurd. And here again, the critics of `memes' agree with this fetishism of `imitation' only so they can reach the opposite conclusion. Atran (2001), in a section title, says, "No Replication without Imitation; Therefore, No Replication" (because there is no real imitation), and thus--absent replication--no applicability of Darwinian selectionist analyses to culture.
This is hardly better, and to refute Blackmore's error is simultaneously to refute this one. If imitation and replication are neither here nor there when it comes to establishing a litmus test for the possibility of a Darwinism of culture, then one cannot reduce one's advocacy or skepticism of this project to whether there is or isn't imitation and/or replication.

It is true that some cultural transmission scholars have made much of `imitation' (e.g. Boyd & Richerson 1985, 1996, 2000; Tomasello et al. 1993), and they have stressed its indispensability to cumulative cultural evolution. Less misunderstanding would result if they said imitation was the ability which initially set humans along the path of cumulative cultural change, and that other tricks have since become possible (it is not a coincidence that when the above authors stress imitation they are comparing humans to nonhumans). For example, I have recently argued that language became possible when imitation led to the emergence of prestige hierarchies (Gil-White 2002b). But the emergence of language now often makes prestige-biased transmission a process of influence that pushes attitudes back and forth along a continuum, rather than imitation (Henrich & Gil-White 2001). Another example: narratives can accrue cumulative changes through selection, and I doubt that Robert Boyd, Peter Richerson, or Michael Tomasello will disagree. But narratives don't spread through imitation, even if the evolution of imitation was necessary for the emergence of language, which is indispensable for narrative. We must distinguish the phylogenetic indispensability of imitation from its current importance in cultural transmission.

V. Platonic inferences

So far I have ignored the following problem: although individuals do not make replicas of the memes they try to copy, they do try to. But what could their target be? After all, our tennis player, Bob, never replicates his own serve perfectly either! Bob's performance is itself a cloud of error around a mean. So copiers must be abstracting an `ideal Bob serve'--which they try to emulate--from Bob's performances. Sperber (1996:62-63) dismisses this as `a Platonist approach' (indeed Plato would have liked the argument that we strive to copy not the thing we see, but its `essence', as we infer it, so to speak). To Sperber, formal properties cannot be causal. I believe the opposite. It makes perfect sense that we infer and abstract an `ideal' serve as Bob's goal, and then strive for it. For evolution to have designed our social-learning psychology otherwise would not have been adaptive, given that the performances of the people we copy are statistical clouds (cf. Dennett 1995:358; Dawkins 1999:x-xii; Blackmore 1999:51-52; Boyd & Richerson 2000). In a selectionist model it is therefore perfectly valid to define `the meme' as the abstraction for which Bob strives, and to track the population mean as people try to copy this abstraction. I do not agree with criticisms that selectionist models have illegitimately relied on assumptions of discrete memes (Atran 2001), or that "the notion of replication certainly is one idealization too many for models of cultural transmission" (Boyer 1998). The problem being modeled will determine whether the simplification is legitimate, and many such models actually include copying error as a parameter anyway. However, there is no question that there is an important role here for cognitive psychology and anthropology.
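As a toy illustration of this `Platonic inference' (my own sketch, not anything from the authors discussed), suppose Bob's performances are noisy draws around his unobservable intended serve, and a copier pools observations to infer the intention rather than replicating any single performance. The names and numbers here are all hypothetical:

    import random

    rng = random.Random(42)
    BOBS_IDEAL = 7.3  # Bob's target serve, which no one observes directly
    performances = [rng.gauss(BOBS_IDEAL, 1.0) for _ in range(20)]  # his cloud

    # No single performance replicates the ideal...
    print("one performance:", round(performances[0], 2))

    # ...but the inferred `ideal Bob serve' (here, crudely, the running
    # sample mean) homes in on it as the copier observes more performances.
    for n in (1, 5, 20):
        estimate = sum(performances[:n]) / n
        print(f"estimate from {n:2d} performances: {estimate:.2f}")

Real social learners must of course first decide which dimensions of a performance to pool over at all, and that is exactly the cognitive question taken up next.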
We need a better understanding of how the brain decides which aspects of a performance are important and which irrelevant. Understanding such cognitive filters will tell us, for a particular domain, what is `the meme'. But not yet having a good handle on such things is no obstacle (pace Atran 2002:ch.10) to current selectionist models (review in Feldman & Laland 1996), for these are concerned with the formal, emergent properties of Darwinian systems that, by assumption, are capable of cumulative adaptation, rather than with the histories of any specific, individual memes. As such, they teach us how to think about cultural evolutionary processes involving broadly specified types of (relatively abstract) memes, and the long-run properties of dynamic systems having two interlocking systems of inheritance: genetic and cultural. What I have tried to do here is show that the assumption of selectionist models--that cumulative adaptation is rampant in cultural transmission--is a very reasonable assumption.

VI. What are the boundaries of `a meme'?

Some critics (e.g. Atran 2001) charge that memes don't have well-defined boundaries; but even "well-disposed" anthropologists can't see where to draw them. Maurice Bloch (2000) expresses his misgivings as follows:

As I look at the work of meme enthusiasts, I find a ragbag of proposals for candidate memes, or what one would otherwise call units of human knowledge. At first, some seem convincing as discrete units: catchy tunes, folk tales, the taboo on shaving among Sikhs, Pythagoras's theorem, etc. However, on closer observation, even these more obvious 'units' lose their boundaries. Is it the whole tune or only a part of it which is the meme? The Sikh taboo is meaningless unless it is seen as part of Sikh religion and identity. Pythagoras' theorem is a part of geometry and could be divided into smaller units such as the concept of a triangle, angle, equivalence, etc.

Bloch has rather quickly pronounced defeat. These problems are hardly insurmountable, and they are not any different from similar conceptual problems faced in evolutionary genetics. What is `the meme': the whole tune or only part of it? A Darwinian unit is of whatever size selection favors. This is why in evolutionary genetics Dawkins (1983:87-89) doesn't like to insist on the gene as a cistron (`start' codon to `stop' codon). He is right. The cistron is more useful to molecular biologists. A tune, just like a cistron, has a starting point and an ending point, and, just like a cistron, this is a matter of performance, not selection. For the tune, a musical performance; for the cistron, the construction of a polypeptide chain. Our intuition that the whole tune is a unit does not come from an analysis of what people can remember and what they rebroadcast, of what spreads and doesn't spread, but rather from our understanding of the conventions of musical performances. That the whole tune is a unit of performance does not make it a unit of selection.

The key point is that there are memes about which things to perform, and how much of them to perform, and these are of a different kind, and are found at different cultural `loci,' than the loci which store tune fragments. At one cultural `locus' we find competing beliefs about which piece should be played. This locus can house a finite number of such beliefs; `Beethoven's 5th deserves to be played' has consistently triumphed in securing a spot in it. Another locus is where memes compete to specify how much of a piece should be played.
Here the belief `play a piece from beginning to end' has fared well against competitors. Thus, it is because these two memes are successful in their respective loci that Beethoven's 5th is played often and in its entirety--not because the symphony itself is encoded whole in the heads of listeners! What listeners remember of the piece is stored in yet another locus where tune-fragments compete to be remembered. For the most part, only the opening theme survives (it is very catchy).

That these loci are independent (though not unrelated, of course) is made evident by the fact that very catchy but tiresome pop-tune fragments will get remembered so easily that the preference for the entire song not to be played will spread (at least after the initial success of the song in question). It is thus possible for the tune-fragment, on the one hand, and the negative preference for the song which contains it, on the other, to be simultaneously at high frequency, and remain so for a while. Try and see if you can forget `The Macarena' (and tell me honestly whether you would like to hear it played). Of course, for such a tune-fragment to persist across the generations, a reasonable fraction of people must preserve the belief that the piece which contains it ought to be played. The opening theme to Beethoven's 5th will probably continue to make it, but my future children will never know `The Macarena'.

What we have discovered here is that for a meme to spread--here, the opening theme to Beethoven's 5th--it needs a favorable ecology of other memes at other loci (for example, `Beethoven's 5th deserves to be played'; the memes necessary to play a violin; the meme that violinists should be paid; etc., etc.). This discovery looks a lot like an earlier discovery: that a gene cannot hope to prosper unless it is surrounded by a favorable ecology of genes at other loci in its own organism, and also by the ecology of phenotypic effects of other organisms' genes. What else is new? If this discovery does not hurt the possibility of population analyses in biology, why should it be fatal for culture? Yes, the Sikh taboo is more likely to spread and remain stable in an ecology of religious memes that are congruent with it. Yes, Pythagoras' theorem cannot be learned without first possessing the meme that says what a triangle is. But neither can the gene for reciprocity spread, for example, unless there are already genes for, say, social aggregation. None of this is new, or especially difficult.

Another vexing problem raised by the question "what are the boundaries of `a meme'?" concerns the level of abstraction. When somebody tells me a story, and I retell it, I will never give a verbatim rendition of the story I heard. Many of the details will change. There are good reasons to think that most of the details are not even stored in memory (Schank & Abelson 1995). I can feel the critic pouncing: "Aha! There is no stability!" But at what level? Suppose that the skeleton of the story is very stable. If so, the fact that story details are not even encoded in the listener's brain--and therefore change radically from version to version--is as worrisome to Darwinian analyses in culture as silent mutations in the DNA code are to evolutionary genetics (i.e. not at all). What we need to keep track of is the story skeleton. Changes there will be the real mutations. I shall not develop this point further here, as I will soon give it an article-length treatment (Gil-White, in prep.).
VII. Meme `content' is not everything

Sperber (2000) has recently conceded the point that we make Platonic inferences, but he insists that these are almost always triggered rather than bootstrapped. Atran (2002, 2001, 1998) and Boyer (1998, 1994) make essentially the same point. The argument is that observation produces `inferences' which are best described as the triggering of a pre-existing knowledge structure. Sperber (2000:165-66) gives the example of language, interpreted from a Chomskian point of view, "where language learners converge on similar meanings on the basis of weak evidence provided by words used in an endless diversity of contexts and with various degrees of literalness or figurativeness." From this it follows, he says, that language learning is much more about triggering pre-existing knowledge than bootstrapping new knowledge. Rather than stable and discrete memes competing with each other in a selective contest, goes the argument, memes will mutate quickly and fuzzily, and morph inexorably into the shape favored by a content-bias `attractor,' which is specified by our innate cognitive endowment.

Not everything is like that, Sperber admits. "Learning to tap dance involves more copying than learning to walk," but, he insists, "For memetics to be a reasonable research programme, it should be the case that copying [as opposed to the triggering of pre-existing knowledge], and differential success in causing the multiplication of copies, overwhelmingly plays the major role in shaping all or at least most of the contents of culture." But it doesn't, he claims. Rather (as if this were an alternative!) he claims that "the acquisition of cultural knowledge and know-how is made possible and partly shaped by evolved domain-specific competencies..."

In my view Sperber sets up a straw man--a false test--for several reasons. First, because, as noted above, he is asking us to choose between complements rather than between alternatives. Second, because, for a great many domains, the triggering of inferences makes a rather different point. Our toy example will assist us here too. As observed earlier, learning Bob's serve requires that we abstract his goal from his statistical cloud of performances. This is an inference, sure, and it relies on "pre-existing knowledge" too. But knowledge about what? Primarily, about the purpose of a serve in a game of tennis. In other words, knowledge that does not come from an innate, domain-specific module as Sperber would have it, for the brain of a human hardly comes prepared to trigger "tennis" (and many people around the world don't play it). An important form of cumulative bootstrapping is already present in the mere fact that the rules of tennis need to be understood first in order properly to infer the specific thing that Bob is going for when he serves. There is no straightforward or absolute reduction to the triggering of innate modules here. Third, because Sperber's linguistic example is not even that good. Although there is undoubtedly much innate knowledge dedicated to the bootstrapping of language, a model that reduces linguistic historical processes to nothing more than the triggering of innate knowledge can never explain how Indo-European became Hindi but also Spanish. Fourth, because Sperber's test is unfairly asymmetric.
In his formulation, the mechanism he does not favor--the copying of knowledge--must be "overwhelmingly" dominant, whereas his favored explanation need only be "partly" responsible for his prescriptions to be the most sound. Tails, he wins; heads, we lose! We hardly need this.

Finally, even should we grant all of Sperber's assumptions and accept that all attractors will be innate, and that there will be attractors for everything, he is still wrong. Henrich & Boyd (2002) show that so long as more than one attractor can exert influence over a given meme, and the attractors are strong relative to selection pressures, the dynamics quickly become a selective contest between the discrete alternatives favored by each attractor. So even here the fuzzily-morphing-into-the-attractor model is not right--selection still happens.

A. Non-content biases and their importance

The last line of defense for Sperber would then be that, even so, the contest is all between innate attractors, and so one cannot expect cumulative cultural evolution acting on arbitrarily varying memes. Atran (1998) and Boyer (1998) agree with this view that transmission is mostly about moderate variations around `core memes,' which are strongly constrained by innate mental biases that focus on a meme's content. A related view has stressed that the main causes of `triggered inferences' will be local non-cultural environments (e.g. Tooby & Cosmides 1992), so cultural differences reduce to the environmental conditions surrounding the various local human populations. Others, however, argue--not instead (content biases are important too) but in addition--for the importance of non-content biases that allow arbitrary differences to spread and remain stable (Boyd & Richerson 1985; Henrich & Boyd 1998; Henrich & Gil-White 2001; Gil-White 2001a, 2001b).

To see why we believe in the rampant spread of arbitrary differences, we must describe the relevant social-learning cognitive biases. Assume that Bob is your hero because he is a great tennis player. Bob likes a Wilson racquet. What do you do? Buy a Wilson racquet. Bob wears leather pants; you buy leather pants. Or suppose everybody in your high school class is getting leather pants. What do you do? Get leather pants (you don't want to look like a deviant). In these examples you acquire the meme not because the meme itself captivates you; what seduces you are the contingently associated features: the meme's source, or its relative frequency.

In these observations lies a key--and very misunderstood--virtue of the selectionist approach pursued in the tradition pioneered by Boyd & Richerson (1985): the importance of `non-content' transmission biases. The memes that do well and spread widely in a population are those which, for whatever reason, the human brain has a `taste' for. But, as seen above, some of these `tastes' may have nothing to do with the actual content of a meme (what the meme actually `says', `prescribes', or makes people do). Of course, many biases involved in social learning will focus on a meme's content. Boyd and Richerson call these `direct biases' (and I am calling them `content biases'). However, as students of culture from an anthropological perspective, they have devoted much attention to the long-term consequences of non-content biases that can cause the accumulation of arbitrary differences between societies. The non-content biases relevant to this problem are conformity bias and prestige bias.
Much research in social psychology suggests that humans have biases to prefer memes that are common relative to competing memes at a particular cultural `locus' (Miller and McFarland 1991; Kuran 1995; Asch 1956, 1963[1951]). Boyd and Richerson (1985:ch.7) and Henrich & Boyd (1998) give models to explain the adaptiveness of informational conformism as helping individuals pick up useful memes that others have already converged on. Gil-White (2001a) argues that interactional-norm conformism is adaptive because it gains the conformist the maximal number of potential interactants. Boyd and Richerson have also speculated (as indeed have many others) that prestigious individuals are copied more often than others, and Henrich & Gil-White (2001) recently took these speculations and developed a model to explain the evolution of such a cognitive bias, reviewing also the evidence for it extant in the social-scientific literature. We argue that prestige bias is adaptive because successful individuals (i.e. those with better memes) tend to have prestige. These two biases care nothing about content: conformity bias cares about relative frequency, and prestige bias about source. As far as these biases are concerned, the memes could be `about' anything at all.

Thus, in domains without strong content biases, we should see the following effects. First, the memes of prestigious individuals will tend to become more common, but these will be unpredictably different for people in different communities, given that every individual has an idiosyncratic life history (e.g., I, but not you, may fall off the horse after washing my feet in a stream, and conclude superstitiously that the stream was somehow directly responsible), and such differences will be larger between members of different communities (even if we both fall off our horses after washing in the stream, I am more likely to come up with the idea if my group already believes streams have supernatural powers). This sort of process will engender arbitrary differences between societies. The second effect is that, once common, conformity will keep such memes at high frequency in a community as large as the sample for the conformist bias. This will keep such arbitrary differences between societies stable generation after generation (see the sketch below). Such differences in turn become acquired `content biases' on future evolution. The conformist and prestige biases therefore offer themselves as an appealing joint explanation for the different historical trajectories which have caused dramatic variation among the world's cultures. (Drift can also act to bootstrap arbitrary differences to frequencies high enough for conformity to kick in and stabilize them.) Together they can explain why two populations living in the same environment could become quite different, culturally--something that happens all the time.

B. Don't reduce everything to `content'

The issue of cultural variability has been an anthropological concern throughout the 20th century, and it has led to the theoretical excess of `cultural relativism', which holds that human brains are--for any and all purposes--blank slates upon which a local culture can write literally anything at all. That this is false should have been obvious (but it hasn't been). But perhaps some anthropologists are now guilty of overreacting in claiming that the blank-slate view of culture is always wrong.

The picture of the human mind/brain as a blank slate on which different cultures freely inscribe their own world-view. . .
.[is] incompatible with our current understanding of biology and psychology. . . . the brain contains many sub-mechanisms, or `modules', which evolved as adaptations to. . .[ancestral] environmental opportunities and challenges (Cosmides & Tooby 1987, 1994; Tooby and Cosmides 1989, 1992) [and]...are crucial factors in cultural attraction. They tend to fix a lot of cultural content in and around the cognitive domain the processing of which they specialize in.--Sperber (1996:113)

Other anthropologists in this tradition have expressed similar views in the process of exploring some interesting content biases as the reason for the widespread recurrence of certain memes. For Boyer (1994) these are certain religious ideas; for Atran (1998), concepts of living kinds; and for Hirschfeld (1996), intuitions about so-called `races'. These are all valuable enterprises, but these authors seem to think that the discovery of these content biases amounts to a refutation of the possibility of acquiring any unconstrained memes (Boyer 1998), and therefore a refutation of the possibility of stable, arbitrary differences between cultures (Hirschfeld 1996:21-22), which in turn is taken to refute the claim that such differences could lead to cultural group selection (Atran 2002:ch.10). But one should not conclude that finding content biases in some domains excludes the possibility of arbitrary differences in other domains without strong content biases. Sperber seems to present the issue above as an either/or question: the brain is not a blank slate, therefore cultural content is fixed around the cognitive domain of our evolved biases. But we must adjudicate this on a domain-by-domain basis. The blank-slate assumption may in fact be a reasonable approximation in a great many domains.

With a different slant, Blackmore (1999) and Dennett (1995) also argue for the primacy of content, but they place the focus on the meme, rather than on innate psychology. Cultural evolution is here a selective process that makes memes increasingly better propagators. As Dennett (1995:362) puts it,

Dawkins (1976:214) points out that `...a cultural trait may have evolved in the way it has simply because it is advantageous to itself.' (...) The first rule of memes, as for genes, is that replication is not necessarily for the good of anything; replicators flourish that are good at...replicating--for whatever reason!

Memes that `look' like what the brain `wants' will spread even if they lack the effects that the brain is adaptively `hoping for'. This is valid, but the emphasis on content as such is overplayed. Dennett and Dawkins suggest that the only thing affecting a meme's spread is whether the meme itself is good at replicating, and that selection will successively edit the meme's content so that it is ever better at replicating. This is the `meme's eye view': only the properties of a meme (i.e. its content) determine its spread. But a meme can be lucky. It can happen to find itself in the head of a prestigious person, or, thanks to prestige-bias bootstrapping (or even random drift processes), it may find itself at high frequency through no `fault' of its own. In both cases the meme's content takes a back seat. In fact, the meme may be favored despite its content. This means that prestige-biased and conformist transmission are excellent explanations for why some maladaptive memes spread and remain stable, even when the memes themselves are not good at replicating.
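Here is the toy sketch of conformist stabilization promised above. It is a minimal illustration of my own, loosely following the logic (not the exact equations) of conformist-transmission models in the Boyd & Richerson tradition; the functional form, parameter values, and starting frequencies are all assumptions.

    # Two groups share the same psychology and the same arbitrary meme; the
    # meme merely happens to start out more common in group A than in group B.
    def step(p, strength=0.3, migration=0.02, p_other=0.5):
        # Conformity: adoption is disproportionately biased toward the
        # local majority (the cubic term exaggerates whichever side of 0.5
        # the current frequency p falls on).
        p = p + strength * p * (1 - p) * (2 * p - 1)
        # Migration mixes the groups a little, eroding the difference...
        return (1 - migration) * p + migration * p_other

    p_a, p_b = 0.7, 0.3
    for generation in range(50):
        p_a, p_b = step(p_a, p_other=p_b), step(p_b, p_other=p_a)

    # ...but conformity wins: the groups stabilize at opposite majorities
    # (roughly 0.93 vs. 0.07 here), with no content bias favoring either
    # variant and no difference in environment.
    print(f"after 50 generations: group A {p_a:.2f}, group B {p_b:.2f}")

Whichever variant happens to be locally common stays locally common; the between-group difference is stable, heritable, and entirely arbitrary.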
I hardly think that Dennett's `first rule of memes' is a rule at all, let alone the first. It is in no way necessary as an all-encompassing perspective on the processes involved in cultural transmission. I am hardly alone in making this criticism (e.g. Conte 2000:88; Laland & Odling-Smee 2000:134; Boyd & Richerson 2000), and I am hopeful that the authors criticized here can be convinced. After all, Atran (2002:ch.10) partially acknowledges that "from a cognitive standpoint, some cultural aspects are almost wholly arbitrary." Boyer (1998) recognizes the importance of prestige bias, and Sperber (1996:90-91) explicitly recognizes its power to generate arbitrary differences between societies. Meanwhile, Blackmore (1999:ch.6) talks about source biases that I don't believe exist (e.g. `imitate the good imitators') but which, as source biases, should undermine her view of meme selection as solely the result of meme content. Dawkins (1999:vii) starts his foreword to Blackmore's book by describing prestige bias. And Dennett and Dawkins are clearly aware of frequency-dependent effects such as conformism (Dennett 1995:352). Following these authors' own observations about non-content biases to their logical conclusions entails that arbitrary differences between cultures are not only possible but likely, and to the extent that they are stable they generate selection pressures at the group level (Boyd & Richerson 1985:ch.7; Henrich & Boyd 1998).

We can now closely evaluate the sometimes facile claims made about memes, whether by proponents or critics. Susan Blackmore (1999, 2000) has recently become the most outspoken proponent of the notions I have just criticized, although the main points are owed to Dennett (1995) and also to Dawkins (1989). Her most pithy formulation, and the one that makes all of her intended links, is the following (Blackmore 2000:26):

...memes clearly vary and therefore fit neatly into the evolutionary algorithm. In other words, memes are replicators. The importance of this is that replicators are the ultimate beneficiaries of any evolutionary process. Dennett (1995) urges us always to ask cui bono? or who benefits? And the answer is the replicators...

I believe every link in this argument to be mistaken. Blackmore begins by saying that it is because memes vary that they fit into the evolutionary algorithm. But this is false. Grains of sand vary, and they do not fit into the evolutionary algorithm. Memes fit into the algorithm only if they vary and remain reasonably stable in the process of transmission. If mutation were both infinite and infinitely random, then "what is passed on in imitation" (how Blackmore [2000:25] defines memes) would certainly vary, but it could not be analyzed with Darwinian tools. Second, Blackmore says that because memes fit into the evolutionary algorithm, they must be replicators. This again is false. Units can fit into the evolutionary algorithm even if they don't replicate, as I have argued with the example of Bob's tennis serve. Boyd & Richerson (1985:ch.3) demonstrated long ago that this is true even for the case of blending inheritance (though nobody ever takes notice), and Henrich & Boyd (2002) have recently provided another demonstration of why replication is a red herring. Third, Blackmore argues that because memes are replicators, and since "replicators are the ultimate beneficiaries of any evolutionary process," our analyses must always be in terms of how the memes benefit. False again, as shown by the existence of non-content biases.
But this last argument of Blackmore's is so `sexy'--it is responsible for most of the attention which her work, and the preceding work of Dennett and Dawkins, has received--that it is worth a thorough refutation, which I turn to next.

VIII. Memetic Drive--the `meme's eye view' gone mad

Can we reduce everything ultimately to the interests of `memes'? Blackmore (1999:8) says that "We humans. . .have become just the physical `hosts' needed for the memes to get around." But this would mean that, just as a chicken is an egg's way of making another egg (the `selfish gene' perspective), a brain is just a meme's way of making another meme (the `selfish meme'). But lest anybody forget, genes have something to do with making brains! The problem here is that `ultimate' is not as definite a concept as Blackmore might like to imagine. Since, in the long run, as the economists say, we are all dead, we must specify the time-scale for any evolutionary problem. Human brains are under selection pressure to develop genetically specified meme-catching biases that filter out maladaptive memes and zero in on adaptive ones, and this means that genes and memes are caught in an interactive, historical feedback process--what Boyd & Richerson (1985) have called `dual inheritance.'

Even when Blackmore (1999) talks about dual inheritance, however, she is fond of reducing everything to memes, including the mind. This is her concept of `memetic drive', which is supposedly her most radical idea (Aunger 2000:11), and which underlies most of her arguments about what `memetics,' conceived as the study of selfish replicators, can explain:

Genes are instructions for making proteins, stored in the cells of the body and passed on in reproduction. Their competition drives the evolution of the biological world. Memes are instructions for carrying out behavior, stored in brains (or other objects) and passed on by imitation. Their competition drives the evolution of the mind.--Blackmore (1999:17)

On the one hand we have the brain, a biological organ, specified by genes. Blackmore recognizes that competition among genes is responsible for the features of biological organs, so one may hazard that by `mind' she cannot simply mean `brain'. But if so, the best we can do is say that `mind' is the set of interconnections that end up instantiated in the brain at the end of some developmental process which involves cultural inputs. In other words, the mind is partly a bunch of memes--partly, because not everything that ends up instantiated in the brain is acquired socially, as some of it is innately given. To make the best case for Blackmore's argument, let us artificially restrict `mind' to "connections that result from the social acquisition of information." Can we now say that competition among memes drives the evolution of the mind in the same way that competition among genes drives the evolution of the biological world? Yes. If we define `mind' as whatever memes end up in a brain, then, tautologically, competition among memes drives the evolution of minds. The tautology is not entirely useless, because the meme concept emphasizes Darwinian processes that have been neglected. But it is better to say it without a tautology which, to boot, requires a new technical definition of `mind' (though I understand this is another sexy term with magical properties). Better to say: "short-term cultural evolution is the product of competition among memes, because a `culture' is a distribution of memes." Is this a truly new or radical argument?
Certainly not by the standards of cultural transmission theory, relative to which Blackmore (1999:15-17) believes she has advanced so much that her ideas are in fact christening an entirely new and autonomous discipline which has yet to begin. But perhaps my translation was not adequate, and Blackmore has something else in mind. Perhaps by `mind' she really does mean `brain.' In chapter six of her 1999 book she actually argues that memes selected for big brains to serve their own--the memes'--`interests' (what she calls `memetic drive'). In other words, the `interests' of memes set processes in motion that select for genes, which in turn code for brains that prefer those same memes. A brain is just a meme's way of making another meme. This is radical but wrong.

A meme cannot select for a gene unless it is widespread (meta-populationally) and stable (inter-generationally). But there are only two avenues for such a widespread and stable meme to emerge. In the first, the meme is selected by an innate `content bias' in the brain's design, making it widespread and stable. But for Blackmore this is a catch-22, because what puts the meme in a position to select for the gene is the fact that this same gene evolved first. The second avenue is if a process such as group selection through conformist transmission (Boyd & Richerson 1985; Henrich & Boyd 1998; Boyd & Gintis, in prep.) makes a meme widespread and stable, even though there was originally no innate content bias to prefer it (e.g. some form of altruism). Suppose that, once common, it is costly not to acquire this meme, or else it is costly to do so slowly or with errors (for example, suppose that the meme to punish non-altruists has also spread in this fashion). In such a case genes coding for an innate content bias specific to that meme will be favored (here, genes for altruistic tendencies), and we may say that the memes have selected for the genes in a Baldwinian process. This can certainly work, but it is not radical by the standards of cultural transmission theorists, some of whom have been pushing this sort of argument for years (e.g. Boyd & Richerson 1985), and it also does not, as Blackmore claims it does, put the memes in the "driver's seat" to the detriment of the `interests' of the genes when it comes to brain design. Much less does it call for an entirely new discipline.

One must not confuse the true statement that competition among memes--the replicative `interests' of memes--is what causes (short-term) cultural evolution, with the false statement that the replicative `interests' of memes--against the `interests' of genes--drive the longer-term process of brain design. The brain cannot be designed against the `interests' of genes, simply because the genes responsible have to be selected for, and they cannot be selected for without differential reproductive success! Thus, when memes select for genes it will be only because the `interests' of memes and genes coincide. Granted, they may only coincide after the meme has become widespread (and this is very interesting), but they will still have to coincide. And a coincidence is just that--it is not a radical "turning of the tables" on our understanding of what shapes brains, as Blackmore would have it. The design of the brain will still be about biological reproductive success in the environments that selected for this design, not about the propagative success of memes in the absence of a biological instrumentality.
Let us stop worrying about non-questions based on false observations, such as "We seem to have a brain `surplus to requirements, surplus to adaptive needs' (Cronin 1991:355)," and ". . .our abilities are out of line with those of other living creatures and they do not seem obviously designed for survival" (Blackmore 1999:67-68).

Conclusion

I conclude by listing the morals. The first is that what we need is not narrowly genetic Darwinian thinking, but a `population thinking' attitude that considers--in its own terms--the properties of statistical populations capable of inheritance and subject to selection (Boyd & Richerson 2000). A narrow comparison of the details of genes and memes is not the right test, though there is hardly any reason to abandon the heuristic horsepower of the analogy.

The second moral is that if we believe psychological biases are the main source of selective forces acting on memes, then the discovery and implications of non-content biases should be taken seriously. This detracts nothing from the importance of content biases; it merely adds to the repertoire of forces that must be considered.

The third moral is that we have talked quite enough. The only reason there is this much misunderstanding about what memes can or cannot be, and what they must or must not be for Darwinian analyses to apply, is that, on the one hand, psychologists and anthropologists know so little evolutionary genetics--and this is not easy to remedy. On the other hand, psychologists and anthropologists have done very little to advance something they are eminently qualified to do: analyze the natural histories of particular memes in different domains, and the proximate cognitive biases responsible for such processes. Some of the points I have made here came to me as revelations after tracing the spread of one particular meme in the communities I study in western Mongolia (Gil-White, in prep.), and others as a result of trying to give a full account of one particular social-learning bias (Henrich & Gil-White 2001). More revelations will follow, as in any science. But, as in any science, we need to resist the pleasures of navel-gazing in the armchair in order to get our hands dirty and toil at the empirical problems.

References

Asch, S. E. (1956) Studies of independence and conformity: I. Minority of one against a unanimous majority. Psychological Monographs 70 (Whole No. 416)
Asch, S. E. (1963[1951]) Effects of group pressure upon the modification and distortion of judgments. In: Groups, leadership, and men, ed. H. Guetzkow, New York: Russell & Russell.
Atran, S. (1998) Folk-biology and the anthropology of science: Cognitive universals and cultural particulars. Behavioral and Brain Sciences 21: 547-609
Atran, S. (2001) The trouble with memes: Inference versus imitation in cultural creation. Human Nature 12: 351-381
Atran, S. (2002) In gods we trust: The evolutionary landscape of religion, New York: Oxford University Press.
Atran, S., Medin D., Ross N., Lynch E., Vapnarsky V., Ucan Ek' E., Coley J., Timura C., Baran M. (2002) Folkecology, cultural epidemiology, and the spirit of the commons: A garden experiment in the Maya lowlands. in press, Current Anthropology
Aunger, R. (2000) Introduction. In: Darwinizing culture: The status of memetics as a science, ed. R. Aunger, Oxford & New York: Oxford University Press.
Blackmore, S. (1999) The meme machine, Oxford: Oxford University Press.
Blackmore, S. (2000) The meme's eye view. In: Darwinizing culture: The status of memetics as a science, ed. R. Aunger, Oxford & New York: Oxford University Press.
Bloch, M. (2000) A well-disposed anthropologist's problems with memes. In: Darwinizing culture: The status of memetics as a science, ed. R. Aunger, Oxford & New York: Oxford University Press.
Boyd, R., Richerson P. J. (1985) Culture and the evolutionary process, Chicago: University of Chicago Press.
Boyd, R., Richerson P. J. (1996) Why culture is common, but cultural evolution is rare. Proceedings of the British Academy 88: 77-93
Boyd, R., Richerson P. J. (2000) Memes: Universal acid or a better mousetrap? In: Darwinizing culture: The status of memetics as a science, ed. R. Aunger, Oxford & New York: Oxford University Press.
Boyer, P. (1994) The naturalness of religious ideas, Berkeley: University of California Press.
Boyer, P. (1998) Cognitive tracks of cultural inheritance: How evolved intuitive ontology governs cultural transmission. American Anthropologist 100: 876-889
Cavalli-Sforza, L. L., Feldman M. (1981) Cultural transmission and evolution, Princeton: Princeton University Press.
Conte, R. (2000) Memes through (social) minds. In: Darwinizing culture: The status of memetics as a science, ed. R. Aunger, Oxford & New York: Oxford University Press.
Cosmides, L., Tooby J. (1987) From evolution to behavior: Evolutionary psychology as the missing link. In: The latest on the best: Essays on evolution and optimality, ed. J. Dupré, Cambridge MA: MIT Press.
Cosmides, L., Tooby J. (1994) Origins of domain-specificity: The evolution of functional organization. In: Mapping the mind: Domain-specificity in cognition and culture, eds. L. A. Hirschfeld and S. A. Gelman, New York: Cambridge University Press.
Cronin, H. (1991) The ant and the peacock, Cambridge: Cambridge University Press.
Dawkins, R. (1982) The extended phenotype, Oxford: Oxford University Press.
Dawkins, R. (1989[1976]) The selfish gene, Oxford and New York: Oxford University Press.
Dawkins, R. (1999) Foreword. In: The meme machine, by S. Blackmore, Oxford: Oxford University Press.
Dennett, D. C. (1995) Darwin's dangerous idea: Evolution and the meanings of life, New York: Simon and Schuster.
Diamond, J. M. (1997) Guns, germs, and steel: The fates of human societies, New York: W.W. Norton.
Feldman, M. W., Laland K. N. (1996) Gene-culture coevolutionary theory. Trends in Ecology and Evolution 11: 453-457
Gil-White, F. J. (2001a) Are ethnic groups biological 'species' to the human brain?: Essentialism in our cognition of some social categories. Current Anthropology 42: 515-554
Gil-White, F. J. (2001b) L'évolution culturelle a-t-elle des règles? La Recherche Hors-Série No. 5: 92-97
Gil-White, F. J. (2002a) Comment on Atran et al. (2002). Current Anthropology 43: 441-442
Gil-White, F. J. (2002b) The evolution of prestige explains the evolution of reference, paper delivered at the Fourth international conference on the evolution of language (Harvard University)
Gil-White, F. J. (in prep.) I killed a one-eyed marmot: Why some narrative memes spread better than others, and how they maintain beliefs
Henrich, J., Boyd R. (1998) The evolution of conformist transmission and the emergence of between-group differences. Evolution and Human Behavior 19: 215-241
Henrich, J., Boyd R. (2002) On modeling cognition and culture: How formal models of social learning can inform our understanding of cultural evolution. under review, Cognition and Culture
Henrich, J., Gil-White F. J. (2001) The evolution of prestige: Freely conferred status as a mechanism for enhancing the benefits of cultural transmission. Evolution and Human Behavior 22: 165-196
Hirschfeld, L. (1988) On acquiring social categories: Cognitive development and anthropological wisdom. Man 23: 611-638
Hirschfeld, L. (1996) Race in the making: Cognition, culture, and the child's construction of human kinds, Cambridge, MA: MIT Press.
Hull, D. (2000) Taking memetics seriously: Memetics will be what we make it. In: Darwinizing culture: The status of memetics as a science, ed. R. Aunger, Oxford & New York: Oxford University Press.
Kuran, T. (1995) Private truths, public lies: The social consequences of preference falsification, Cambridge, MA: Harvard University Press.
Laland, K. N., Odling-Smee J. (2000) The evolution of the meme. In: Darwinizing culture: The status of memetics as a science, ed. R. Aunger, Oxford & New York: Oxford University Press.
Landes, D. S. (1998) The wealth and poverty of nations: Why some are so rich and some so poor, New York: W.W. Norton.
Lumsden, C., Wilson E. O. (1981) Genes, mind, and culture: The coevolutionary process, Cambridge, MA, and London: Harvard University Press.
McNeill, W. H. (1963) The rise of the West: A history of the human community, Chicago: University of Chicago Press.
Miller, D. T., McFarland C. (1991) Why social comparison goes awry: The case of pluralistic ignorance. In: Social comparison: Contemporary theory and research, eds. J. Suls and T. Ashby, Hillsdale, NJ: L. Erlbaum Associates.
Plotkin, H. (2000) Culture and psychological mechanisms. In: Darwinizing culture: The status of memetics as a science, ed. R. Aunger, Oxford & New York: Oxford University Press.
Schank, R. C., Abelson R. P. (1995) Knowledge and memory: The real story. In: Knowledge and memory: The real story, ed. R. S. Wyer, Hillsdale, NJ: Lawrence Erlbaum Associates.
Sperber, D. (1996) Explaining culture: A naturalistic approach, Oxford: Blackwell.
Sperber, D. (2000) An objection to the memetic approach to culture. In: Darwinizing culture: The status of memetics as a science, ed. R. Aunger, Oxford & New York: Oxford University Press.
Tomasello, M., Kruger A. C., Ratner H. H. (1993) Cultural learning. Behavioral and Brain Sciences 16: 495-552
Tooby, J., Cosmides L. (1989) Evolutionary psychology and the generation of culture, Part I: Theoretical considerations. Ethology and Sociobiology 10: 29-49
Tooby, J., Cosmides L. (1992) The psychological foundations of culture. In: The adapted mind: Evolutionary psychology and the generation of culture, eds. J. H. Barkow, L. Cosmides and J. Tooby, New York and Oxford: Oxford University Press.
Wilkins, J. S. (1998) What's in a meme? Reflections from the perspective of the history and philosophy of evolutionary biology. Journal of Memetics--Evolutionary Models of Information Transmission 2
Williams, G. C. (1966) Adaptation and natural selection, Princeton, NJ: Princeton University Press.
Wright, R. (2000) NonZero: The logic of human destiny, New York: Pantheon.

_______________________

[1] Unlike Sperber (2000:163) I don't think there is anything trivial about this definition, and neither do I think that it corresponds to how anthropologists have always thought about culture, as he claims. Implicit in this definition is the idea that memes are units, that they are materially stored, and that they are subject to selection. These intuitions open the way to a completely different form of analysis of culture from that which we anthropologists had been traditionally contemplating.
As Sperber (1996) himself has repeatedly charged, anthropologists have been prone to mystical approaches to culture that put it `out there' in the ether somewhere rather than in people's brains, and they have failed to examine the processes of transmission in their phenomenal and cognitive details. Making the units of cultural transmission analogous to genes, however loosely--which is what the `meme' idea in any of its forms does--produces an entirely new perspective: in fact, a revolution of sorts.

From checker at panix.com Tue Dec 27 23:12:59 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 27 Dec 2005 18:12:59 -0500 (EST)
Subject: [Paleopsych] NYT: Hometown Snubs Schwarzenegger Over Death Penalty
Message-ID:

Hometown Snubs Schwarzenegger Over Death Penalty
http://www.nytimes.com/2005/12/27/international/europe/27austria.html

[Note the elite vs. popular opinion split. There are similar splits over immigration and school prayer. I'd like to know why elite opinion about capital punishment is so strongly held. I cannot make up my own mind. I'm familiar with the various arguments, but I cannot weigh the various pros and cons and come to an overall conclusion. But elite opinion seems to ignore anything positive about capital punishment. It is absolutist and bigoted.

[I'd also like to know why elite opinion has shifted. Who are the conformity enforcers? Are there fresh arguments against it that I have missed? If not, why weren't the old arguments accepted before?

[I should make a list of splits in mass and elite opinion.]

By RICHARD BERNSTEIN

BERLIN, Dec. 26 - For years the quaint Austrian town of Graz trumpeted its special relationship with its outsize native son, Arnold Schwarzenegger. Born in a village nearby and schooled in Graz, Mr. Schwarzenegger was an honorary citizen and holder of the town's Ring of Honor. Most conspicuously, the local sports stadium was named after him.

But early on Monday, under cover of darkness, his name was removed from the arena in a sort of uncontested divorce between the California governor and the town council, which had been horrified that he rejected pleas to spare the life of Stanley Tookie Williams, former leader of the Crips gang, who was executed by the state of California two weeks ago.

The 15,000-seat stadium had been named after Mr. Schwarzenegger in 1997 as an act of both self-promotion and fealty toward the poor farmer's son and international celebrity, who has always identified Graz as his native place. But when he declined to commute Mr. Williams's death penalty, the reaction was swift and angry in Graz, which, like most places in Europe, sees the death penalty as a medieval atrocity.

"I submitted a petition to the City Council to remove his name from the stadium, and to take away his status as an honorary citizen," Sigrid Binder, the leader of the Green Party, said in a recent interview. "The petition was accepted by a majority on the council."

Before a formal vote was taken on the petition, however, Mr. Schwarzenegger made a kind of pre-emptive strike, writing a letter to Siegfried Nagl, the town's conservative mayor, withdrawing Graz's right to use his name in association with the stadium.
There will be other death penalty decisions ahead, he wrote, and so he decided to spare the responsible politicians of Graz further concern.

"It was a clever step," Ms. Binder said. "He took the initiative," she continued, and then suggested a bit of the local politics that had entered into the matter. "It was possible for him to do so," she said, "because the mayor didn't have the courage to take a clear position on this point."

Needless to say, Mr. Nagl, a member of the conservative People's Party, who opposed the name-removal initiative, does not agree. He is against the death penalty, he said in an interview, and on Dec. 1, he wrote a letter to Mr. Schwarzenegger pleading for clemency for Mr. Williams. But he blames the leftist majority on the City Council - consisting of Greens, Social Democrats and two Communists - for trying to score some local political points at Mr. Schwarzenegger's and, he believes, Graz's own expense.

"One stands by a friend and a great citizen of our city and does not drag his name through the mud even when there is a difference of opinion," Mr. Nagl said in a letter he wrote to Mr. Schwarzenegger. "I would like to ask you to keep the Ring of Honor of the City of Graz."

The heated nature of the debate revealed how much a relatively small place like Graz, certainly a place with no military might or diplomatic power to speak of, wants to play a role as a sort of moral beacon, waging the struggle for what it considers the collective good. Graz, a place of old onion steeples, museums and Art Nouveau architecture, designated itself five years ago, with a unanimous vote of the City Council, to be Europe's first official "city of human rights." While the designation has no juridical meaning, it provides a sort of goal to live up to.

"We are against the death penalty, not only in word, but really against the death penalty," said Wolfgang Benedek, a professor of international law at Graz University. He said the council's reaction reflected the special circumstances surrounding Mr. Williams: a man who had written a children's book aimed at steering young people away from violence, he had already spent many years in jail, and seemed, to Europeans at least, to have reformed himself. "Many people around the world pleaded with Mr. Schwarzenegger to show mercy in this case, and when he didn't, the city had somehow to react," Mr. Benedek said.

Mr. Benedek allows that there is an element of elite versus popular opinion on this matter. A poll by the local newspaper found that over 70 percent of the public opposed removing Mr. Schwarzenegger's name from the stadium. This adds to a practical consideration very much on Mr. Nagl's mind: that Graz will no longer be able to count on using its special relationship with the governor to promote its image.

"We had the great classical culture on the one side," Thomas Rajakovics, the mayor's spokesman, said, referring to other important figures who are associated with Graz, from the astronomer Johannes Kepler to the Nobel Prize-winning physicist Erwin Schrödinger, to the conductor Karl Böhm. "And on the other, we had Arnold Schwarzenegger and the popular culture. These were the two poles for us, but we're not allowed to use his name any more."

The Schwarzenegger name has, as it were, been erased. The new name is now simply Stadion Graz-Liebenau (a district of Graz), though there were other proposals. One was to name the stadium after the Crips, the gang that Mr. Williams founded, but that idea did not get widespread support.
Another was to name it Hakoah, after a Jewish sports club that was banned after Hitler annexed Austria in 1938. But the first "city of human rights" did not seem quite ready for that either. It is not that there was vocal opposition but, as Ms. Binder put it, Austrians do not generally want a daily reminder of the terrible wartime past.

Meanwhile, city officials are holding on to Mr. Schwarzenegger's honorary citizenship ring, which arrived from the governor during the holidays. Mr. Rajakovics said they would keep it for him in the hope that one day he would take it back.

From checker at panix.com Wed Dec 28 03:01:14 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 27 Dec 2005 22:01:14 -0500 (EST)
Subject: [Paleopsych] NYTBR: Leaders Who Build to Stroke Their Egos
Message-ID:

Leaders Who Build to Stroke Their Egos
http://www.nytimes.com/2005/12/13/books/13kaku.html

Books of The Times | 'The Edifice Complex'

[But I want comparisons with democracies, such as that under Roosevelt II.]

By MICHIKO KAKUTANI

THE EDIFICE COMPLEX: How the Rich and Powerful Shape the World. By Deyan Sudjic. 403 pages. Penguin Press. $27.95.

The pyramids, Versailles, the Taj Mahal, the Kremlin, the World Trade Center: it's hardly news that the rich and powerful have used architecture to try to achieve immortality, impress their contemporaries, stroke their own egos and make political and religious statements. So how artfully does Deyan Sudjic explicate this highly familiar observation?

His new book, "The Edifice Complex," is a fat, overstuffed jumble of the obvious and the fascinating, the tired and the intriguing - a volume that feels less like an organic book than a series of hastily patched together essays and ruminations. It is a book in dire need of heavy-duty editing, but a book that intermittently grabs the reader's attention, making us rethink the equations between architecture and politics and money, and the myriad ways in which buildings can be made to embody everything from national aspirations and economic might to narcissistic displays of potency and ambition.

Mr. Sudjic, the architecture critic for the London newspaper The Observer, looks at the architectural dreams of the great monsters of 20th-century history - Hitler, Stalin and Mao - and at the more modest fantasies of assorted tycoons and democratically elected politicians. He deconstructs the symbolism of the presidential libraries of Richard Nixon, Jimmy Carter, Ronald Reagan and George Herbert Walker Bush; looks at the dubious construction of London's Millennium Dome on Tony Blair's watch; and re-examines the debates over ground zero in New York.

In addition, Mr. Sudjic provides some brisk assessments of such high-profile architects as Philip Johnson, Frank Gehry and Daniel Libeskind. And he examines the propensity of many prominent architects to hire themselves out to unsavory - and in some cases, morally reprehensible - clients. He notes, for instance, that Walter Gropius and Le Corbusier took part in a competition to design Stalin's Palace of the Soviets and points out that Albert Speer and Mies van der Rohe "were both ready to work" for Hitler, the only difference being that Speer "devoted himself entirely to realizing the architectural ambitions of his master," while Mies, for all his political expediency, "was unyielding about architecture."
As for Rem Koolhaas, who declined to take part in the ground zero design competitions because of what he saw as the project's "overbearing self-pity," he vigorously pursued the job of building the new headquarters of Central China Television, the propagandistic voice of the state. In reviewing such cases, Mr. Sudjic comes to the conclusion that "the totalitarians and the egotists and the monomaniacs offer architects, whatever their personal political views, more opportunities for 'important' work than the liberal democracies."

This is not an entirely persuasive argument, given the construction of iconic buildings like Jorn Utzon's Opera House in Sydney, Australia, and Mr. Gehry's Guggenheim museum in Bilbao, Spain, on one hand, and the nightmarishly grotesque architectural plans of many tyrants, on the other.

In the most interesting chapters in this volume, Mr. Sudjic goes over some of those dictators' plans. We see Hitler, who once contemplated becoming an architect himself, working with Albert Speer to perfect the use of architecture as propaganda - as a tool for glamorizing his own rule while intimidating and impressing his subjects. The scale of Hitler's Chancellery was deliberately heroic - halls that were 30 feet high and doorways that were 17 feet high. And the plans to remake Berlin as "Germania," the Führer's own version of Rome, were similarly outsized, with a gigantic, 1,000-foot-high dome that would have accommodated 180,000 people and grand crossing street axes (possibly based "on Louis XIV's bedroom at Versailles, positioned at the crossing point of two of the most important roads in France").

Stalin's plans for Moscow were equally grandiose: his Palace of the Soviets was to be taller than the Empire State Building and topped by a gargantuan likeness of Lenin that was to be bigger than the Statue of Liberty. Stalin also set about erasing historic landmarks - like Moscow's great 19th-century basilica - in an effort to make his transformation of Imperial Russia into the Soviet Union irreversible. In fact, Mr. Sudjic notes that demolition can be "almost as essential a part of the process of transformation as new building" - as demonstrated by Haussmann's Paris and Ceausescu's Bucharest.

The decision by Brazil's leaders to move the national capital out of Rio de Janeiro and build a new seat of government in the empty heart of the country was, Mr. Sudjic writes, "a deliberate attempt to create a new identity" for the country: the use of "an architecture entirely free of historical memories" was meant to symbolize the rejection of "centuries of political and cultural subservience to Europe." In the case of the new Germany, Mr. Sudjic reports, leaders were "less prepared to wipe out the traces of Hitler's Berlin" than they were ready "to eradicate the traces" of the former Communist-controlled East Germany.

Indeed the physical legacy of vanished authoritarian regimes poses a difficult question for current governments. "Italy to this day," Mr. Sudjic writes, "is full of rotting buildings, many of real quality, that were put up by the Fascists to house their party organizations. They were confiscated by the postwar government, and nobody knows what to do with them. To demolish them all both would be profligate and would represent a historical whitewash, and yet to restore them could suggest a rehabilitation of the regime that built them."
It is in raising such philosophical questions about architecture and its symbolism that "The Edifice Complex" is at its most original and pertinent, persuading the reader that the volume is probably worth reading - or at least skimming - despite the huge amounts of dross surrounding its nuggets of insight.

From checker at panix.com Wed Dec 28 03:01:25 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 27 Dec 2005 22:01:25 -0500 (EST)
Subject: [Paleopsych] NYT: Old, for Sure, but Human?
Message-ID: 

Old, for Sure, but Human?
http://www.nytimes.com/2005/12/13/science/13find.html

[This could prove to be a major, major anomaly.]

Findings
By JOHN NOBLE WILFORD

What is one to make of the intriguing footprints found in Mexico? The scientists who discovered them said last summer that they were made by humans walking in fresh volcanic ash 40,000 years ago. This seemed incredible, since no human presence in the Americas had been established earlier than about 13,000 years ago.

So geologists went to the scene, near Puebla. They came to an even more astonishing conclusion: the prints were in 1.3-million-year-old rock, meaning the prints were laid down more than a million years before modern Homo sapiens evolved in Africa.

The surprising antiquity of the rock bearing the prints was determined by a research team led by Paul R. Renne, director of the Berkeley Geochronology Center in California. The researchers conducted repeated argon dating and investigated the magnetic imprint in the rock. All the tests yielded the 1.3-million-year date. In the journal Nature, the team wrote, "We conclude that either hominid migration into the Americas occurred very much earlier than previously believed, or that the features in question were not made by humans on recently erupted ash."

The original discovery was made in 2003 by Silvia Gonzalez of Liverpool John Moores University in England. Dr. Renne questioned whether these were, in fact, footprints. "Their distribution is quite random, not like something made by early humans," he said by telephone. Paleontologists she consulted, Dr. Renne said, agreed. It may be, they said, that the prints are recent breaks in the hard surface caused by vibrations from a nearby highway and an active quarry.

From checker at panix.com Wed Dec 28 03:01:37 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 27 Dec 2005 22:01:37 -0500 (EST)
Subject: [Paleopsych] Nature: Internet encyclopaedias go head to head
Message-ID: 

Internet encyclopaedias go head to head
http://www.nature.com/nature/journal/v438/n7070/full/438900a.html

[Hooray for Jimbo! Of course, it's the non-science articles that generate the biggest controversies.]

Nature 438, 900-901 (15 December 2005) | doi:10.1038/438900a

Jim Giles

Abstract: Jimmy Wales' Wikipedia comes close to Britannica in terms of the accuracy of its science entries, a Nature investigation finds.

One of the extraordinary stories of the Internet age is that of Wikipedia, a free online encyclopaedia that anyone can edit. This radical and rapidly growing publication, which includes close to 4 million entries, is now a much-used resource. But it is also controversial: if anyone can edit entries, how do users know if Wikipedia is as accurate as established sources such as Encyclopaedia Britannica?
Several recent cases have highlighted the potential problems. One article was revealed as falsely suggesting that a former assistant to US Senator Robert Kennedy may have been involved in his assassination. And podcasting pioneer Adam Curry has been accused of editing the entry on podcasting to remove references to competitors' work. Curry says he merely thought he was making the entry more accurate.

However, an expert-led investigation carried out by Nature -- the first to use peer review to compare Wikipedia and Britannica's coverage of science -- suggests that such high-profile examples are the exception rather than the rule. The exercise revealed numerous errors in both encyclopaedias, but among 42 entries tested, the difference in accuracy was not particularly great: the average science entry in Wikipedia contained around four inaccuracies; Britannica, about three.

Considering how Wikipedia articles are written, that result might seem surprising. A solar physicist could, for example, work on the entry on the Sun, but would have the same status as a contributor without an academic background. Disputes about content are usually resolved by discussion among users. But Jimmy Wales, co-founder of Wikipedia and president of the encyclopaedia's parent organization, the Wikimedia Foundation of St Petersburg, Florida, says the finding shows the potential of Wikipedia. "I'm pleased," he says. "Our goal is to get to Britannica quality, or better."

Wikipedia is growing fast. The encyclopaedia has added 3.7 million articles in 200 languages since it was founded in 2001. The English version has more than 45,000 registered users, and added about 1,500 new articles every day of October 2005. Wikipedia has become the 37th most visited website, according to Alexa, a web ranking service.

But critics have raised concerns about the site's increasing influence, questioning whether multiple, unpaid editors can match paid professionals for accuracy. Writing in the online magazine TCS last year, former Britannica editor Robert McHenry declared one Wikipedia entry -- on US founding father Alexander Hamilton -- to be "what might be expected of a high-school student". Opening up the editing process to all, regardless of expertise, means that reliability can never be ensured, he concluded.

Yet Nature's investigation suggests that Britannica's advantage may not be great, at least when it comes to science entries. In the study, entries were chosen from the websites of Wikipedia and Encyclopaedia Britannica on a broad range of scientific disciplines and sent to a relevant expert for peer review. Each reviewer examined the entry on a single subject from the two encyclopaedias; they were not told which article came from which encyclopaedia. A total of 42 usable reviews were returned out of 50 sent out, and were then examined by Nature's news team.

Only eight serious errors, such as misinterpretations of important concepts, were detected in the pairs of articles reviewed, four from each encyclopaedia. But reviewers also found many factual errors, omissions or misleading statements: 162 and 123 in Wikipedia and Britannica, respectively.
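[A quick check of the arithmetic behind those figures - a minimal Python sketch; the per-entry averages are computed here from the counts reported above, and nothing else is assumed:

# Error counts from Nature's review of 42 paired entries.
entries = 42
errors = {"Wikipedia": 162, "Britannica": 123}  # factual errors, omissions or misleading statements

for name, total in errors.items():
    print(f"{name}: {total / entries:.2f} inaccuracies per entry")

# Prints roughly:
# Wikipedia: 3.86 inaccuracies per entry
# Britannica: 2.93 inaccuracies per entry

which is where the "around four" versus "about three" summary comes from.]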
[Photo caption: Kurt Jansson (left), president of Wikimedia Deutschland, displays a list of 10,000 Wikipedia authors; Wikipedia's entry on global warming has been a source of contention for its contributors.]

Editors at Britannica would not discuss the findings, but say their own studies of Wikipedia have uncovered numerous flaws. "We have nothing against Wikipedia," says Tom Panelas, director of corporate communications at the company's headquarters in Chicago. "But it is not the case that errors creep in on an occasional basis or that a couple of articles are poorly written. There are lots of articles in that condition. They need a good editor."

Several Nature reviewers agreed with Panelas' point on readability, commenting that the Wikipedia article they reviewed was poorly structured and confusing. This criticism is common among information scientists, who also point to other problems with article quality, such as undue prominence given to controversial scientific theories. But Michael Twidale, an information scientist at the University of Illinois at Urbana-Champaign, says that Wikipedia's strongest suit is the speed at which it can be updated, a factor not considered by Nature's reviewers.

"People will find it shocking to see how many errors there are in Britannica," Twidale adds. "Print encyclopaedias are often set up as the gold standards of information quality against which the failings of faster or cheaper resources can be compared. These findings remind us that we have an 18-carat standard, not a 24-carat one."

The most error-strewn article, that on Dmitry Mendeleev, co-creator of the periodic table, illustrates this. Michael Gordin, a science historian at Princeton University who wrote a 2004 book on Mendeleev, identified 19 errors in Wikipedia and 8 in Britannica. These range from minor mistakes, such as describing Mendeleev as the 14th child in his family when he was the 13th, to more significant inaccuracies. Wikipedia, for example, incorrectly describes how Mendeleev's work relates to that of British chemist John Dalton. "Who wrote this stuff?" asked another reviewer. "Do they bother to check with experts?"

But to improve Wikipedia, Wales is not so much interested in checking articles with experts as getting them to write the articles in the first place. As well as comparing the two encyclopaedias, Nature surveyed more than 1,000 Nature authors and found that although more than 70% had heard of Wikipedia and 17% of those consulted it on a weekly basis, less than 10% help to update it. The steady trickle of scientists who have contributed to articles describe the experience as rewarding, if occasionally frustrating (see [21]'Challenges of being a Wikipedian').

Greater involvement by scientists would lead to a "multiplier effect", says Wales. Most entries are edited by enthusiasts, and the addition of a researcher can boost article quality hugely. "Experts can help write specifics in a nuanced way," he says.

Wales also plans to introduce a 'stable' version of each entry. Once an article reaches a specific quality threshold it will be tagged as stable. Further edits will be made to a separate 'live' version that would replace the stable version when deemed to be a significant improvement. One method for determining that threshold, where users rate article quality, will be trialled early next year.
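[The proposed stable/live scheme is easy to picture as a tiny state model. The sketch below is purely illustrative - the class, the threshold and the rating mechanics are invented here for clarity, and this is not a description of how Wikipedia actually implemented the feature:

# Hypothetical model of the stable/live article versions described above.
QUALITY_THRESHOLD = 4.0  # invented quality bar (e.g., mean user rating)

class Article:
    def __init__(self, text):
        self.stable = None  # last revision tagged as stable; shown by default
        self.live = text    # working copy that ongoing edits go to

    def edit(self, new_text):
        # Edits never touch the stable version directly.
        self.live = new_text

    def review(self, mean_user_rating, significant_improvement):
        # Promote the live copy only if it clears the quality bar and is
        # judged a significant improvement over the current stable version.
        if mean_user_rating >= QUALITY_THRESHOLD and significant_improvement:
            self.stable = self.live

a = Article("Draft entry")
a.edit("Improved entry")
a.review(mean_user_rating=4.5, significant_improvement=True)
assert a.stable == "Improved entry"
]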
Related links

RELATED STORIES
* [23] Science in the web age: The expanding electronic universe
* [24] Science in the web age: Joint efforts
* [25] Science in the web age: The real death of print
* [26] Science in the web age: Start your engines
* [27] Reference revolution
* [28] Wanted: social entrepreneurs

RELATED LINKS
* [29] Nature Podcast

EXTERNAL LINKS
* [30] Wikipedia
* [31] Encyclopaedia Britannica

References
21. http://www.nature.com/nature/journal/v438/n7070/box/438900a_BX1.html
23. http://www.nature.com/uidfinder/10.1038/438547a
24. http://www.nature.com/uidfinder/10.1038/438548a
25. http://www.nature.com/uidfinder/10.1038/438550a
26. http://www.nature.com/uidfinder/10.1038/438554a
27. http://www.nature.com/uidfinder/10.1038/news050314-17
28. http://www.nature.com/uidfinder/10.1038/434941a
29. http://www.nature.com/nature/podcast/index.html
30. http://www.wikipedia.org/
31. http://www.britannica.com/

From checker at panix.com Wed Dec 28 03:01:48 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 27 Dec 2005 22:01:48 -0500 (EST)
Subject: [Paleopsych] NYT: See Baby Touch a Screen. but Does Baby Get It?
Message-ID: 

See Baby Touch a Screen. but Does Baby Get It?
http://www.nytimes.com/2005/12/15/national/15toys.html

By TAMAR LEWIN

Jetta is 11 months old, with big eyes, a few pearly teeth - and a tiny index finger that can already operate electronic entertainment devices.

"We own everything electronic that's educational - LeapFrog, Baby Einstein, everything," said her mother, Naira Soibatian. "She has an HP laptop, bigger than mine. I know one leading baby book says, very simply, it's a waste of money. But there's only one thing better than having a baby, and that's having a smart baby. And at the end of the day, what can it hurt? She learns things, and she loves them."

New media products for babies, toddlers and preschoolers began flooding the market in the late 1990's, starting with video series like "Baby Einstein" and "Brainy Baby." But now, the young children's market has exploded into a host of new and more elaborate electronics for pre-schoolers, including video game consoles like the V.Smile and handheld game systems like the Leapster, all marketed as educational.

Despite the commercial success, though, a report released yesterday by the Kaiser Family Foundation, "A Teacher in the Living Room? Educational Media for Babies, Toddlers and Pre-schoolers," indicates there is little understanding of how the new media affect young children - and almost no research to support the idea that they are educational.

"The market is expanding rapidly, with all kinds of brand-new product lines for little kids," said Vicky Rideout, vice president of the Kaiser Foundation. "But the research hasn't advanced much. There really isn't any outcomes-based research on these kinds of products and their effects on young children, and there doesn't seem to be any theoretical basis for saying that kids under 2 can learn from media.

"If parents are thinking, 'I need a break, I'll put my 4-year-old in front of this nice harmless video,' that's one thing," she continued, "But if parents are thinking, 'This is good for my 3-month-old, it will help her get ahead in the world,' that's another."
In 1999, the American Academy of Pediatrics recommended no screen time at all for babies under 2, out of concern that the increasing use of media might displace human interaction and impede the crucially important brain growth and development of a baby's first two years. But it is a recommendation that parents routinely ignore. According to Kaiser, babies 6 months to 3 years old spend, on average, an hour a day watching TV and 47 minutes a day on other screen media, like videos, computers and video games.

"These new media toys are growing and becoming quite prevalent," said Claire Lerner, a child-development expert at Zero to Three, a nonprofit advocacy group that includes information about brain development on its Web site. "This generation of parents grew up thinking technology was all positive, so if they see their child looking happy, engaged with what's on the screen, it's very seductive. But a group of toddlers making up a story together is a much richer learning experience than dragging things across a screen to make a story. Children learn best in the context of relationships."

While there is no research on the effect of the new commercial products, earlier research has shown that educational television can teach 3- to 5-year-olds vocabulary and number concepts. Most child-development experts, however, say that babies under about 2½ are not sufficiently developed for such learning.

Still, many parents buy their babies toys designed for older children, either believing that their children are unusually advanced or hoping the toys will make them so.

Just minutes after spending $150 on a VideoNow player and cartridges (ages 7 and up) at a Manhattan Toys "R" Us, Ms. Soibatian holds Jetta up in her stroller to see if she is interested in Learn Through Music Plus! (ages 2 to 5). At first, Jetta gently bops the screen with her whole hand, watching the flashing lights, but soon she notices the buttons, the index finger goes out, and a delighted Ms. Soibatian is ready to buy again.

"You're never too young to learn, and kids nowadays are more advanced because of all these educational toys," said Iesha Middleton, another parent shopping at Toys "R" Us. Ms. Middleton's son will be 3 next month. "I tried to teach my son his ABC's when he was 1, and I didn't get very far, but with the Leapster, he learned A-Z really fast, and he can count up to 50."

Even Sesame Workshop, long the torchbearer in children's educational media, is moving into the infant market, with new "Sesame Beginnings" DVD's for babies 6 months and up.

"There are all these babies watching videos, and we wanted to address the reality that's out there and come up with something that is at least appropriate," said Gary Knell, Sesame's president. "Ours are about sharing and caring, modeling good parenting, not the cognitive approaches that are more appropriate for 3- or 4-year-olds. We won't be making any boastful claims about school success."

Others have less restrained marketing: The "Brainy Baby - Left Brain" package has a cover featuring a cartoon baby with a thought balloon saying, "2 + 2 = 4" and promises that it will inspire logical thinking and "teach your child about language and logic, patterns and sequencing, analyzing details and more."

The V.Smile video game system - a "TV Learning System" introduced last year - features the motto "Turn Game Time into Brain Time" and cartridges called "smartridges."
The V.Smile, named "Best Toy of the Year" at the toy industry's 2005 trade show, has a television ad where a mom tells her children, "You'll never get into college if you don't play your video games!" The game says it is designed for children 3 to 7. There are, as yet, no reliable estimates of the size of the market for such devices, but at toy stores nationwide, they are selling briskly. Educational toy companies say their products are designed with the existing educational and developmental research in mind but add that more research on media effects would be helpful. "There's nothing that shows it helps, but there's nothing that shows it's does harm, either," said Marcia Grimsley, senior producer of "Brainy Baby" videos. "Electronics are part of our world, and I think that, used appropriately, they can benefit children." Ms. Rideout says parents need more help sorting through the array of electronic media: "We have detailed guidelines for advertising and labeling products like down pillows and dietary supplements, but not for marketing education media products," she said. Warren Buckleitner, editor of Children's Technology Review, has watched children play with many of the new products and believes that many of them have great education potential for older preschoolers. "We spend a great percentage of our energy in preschool teaching kids about symbols, and interactive electronics are very good teachers of symbols," Mr. Buckleitner said. "V.Smile is like a hyperactive nanny with flashcards. We had a 4-year-old, on the cusp of reading, who was so excited about finding words in the maze that she got addicted, in an arcade-ish way, and wrote down 20 words on a piece of scrap paper, then came and said, 'Look at my word collection.' I asked if she could read them and she could. It was very motivating for her." It does not work that way when the toy does not fit the child's developmental stage or pace: Mr. Buckleitner remembers a 2-year-old playing with an interactive electronic toy, but not understanding the green "go" button; after coaching from her mother, when she touched a cow that mooed, she was frustrated by the cow's continued mooing while she touched five other pictures. "The design people are still learning, so the technology will get better," Mr. Buckleitner said. Still, he concedes that in teaching small children, "There's not an educator alive who would disagree with the notion that concrete and real are always better." Research bears that out. In a line of experiments on early learning included in a research review by Dan Anderson, a University of Massachusetts psychology professor, one group of 12- to 15-month-olds was given a live demonstration of how to use a puppet, while another group saw the demonstration on video. The children who saw the live demonstration could imitate the action - but the others had to see the video six times before they could imitate it. "As a society, we are in the middle of a vast uncontrolled experiment on our infants and toddlers growing up in homes saturated with electronic media," Mr. Anderson said. From ljohnson at solution-consulting.com Wed Dec 28 15:41:38 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Wed, 28 Dec 2005 08:41:38 -0700 Subject: [Paleopsych] NYT: See Baby Touch a Screen. but Does Baby Get It? In-Reply-To: References: Message-ID: <43B2B232.9020209@solution-consulting.com> At Xmas I gave my wife /Animals in Translation/ by the phenom, Temple Grandin. 
She argues from animal observations that human babies have to play with and manipulate physical objects, pointing out that kids who have not held a pencil and drawn with it cannot draw with the cursor on a computer screen. She is worried about children who are playing videogames and not outside playing in the physical world. I have two adult kids and two teens, and I got them all a new trampoline for Xmas. It helps them keep their balance.

Lynn

Lynn D. Johnson, Ph.D.
Solutions Consulting Group
166 East 5900 South, Ste. B-108
Salt Lake City, UT 84107
Tel: (801) 261-1412; Fax: (801) 288-2269
Check out our webpage: www.solution-consulting.com
Feeling upset? Order Get On The Peace Train, my new solution-oriented book on negative emotions.

Premise Checker wrote:
> See Baby Touch a Screen. but Does Baby Get It?
> http://www.nytimes.com/2005/12/15/national/15toys.html
> [...]
> > "These new media toys are growing and becoming quite > prevalent," said Claire Lerner, a child-development expert > at Zero to Three, a nonprofit advocacy group that includes > information about brain development on its Web site. "This > generation of parents grew up thinking technology was all > positive, so if they see their child looking happy, engaged > with what's on the screen, it's very seductive. But a group > of toddlers making up a story together is a much richer > learning experience than dragging things across a screen to > make a story. Children learn best in the context of > relationships." > > While there is no research on the effect of the new > commercial products, earlier research has shown that > educational television can teach 3- to 5- year-olds > vocabulary and number concepts. Most child-development > experts, however, say that babies under about 2? are not > sufficiently developed for such learning. > > Still, many parents buy their babies toys designed for older > children, either believing that their children are unusually > advanced or hoping the toys will make them so. > > Just minutes after spending $150 on a VideoNow player and > cartridges (ages 7 and up) at a Manhattan Toys "R" Us, Ms. > Soibatian holds Jetta up in her stroller to see if she is > interested in Learn Through Music Plus! (ages 2 to 5). At > first, Jetta gently bops the screen with her whole hand, > watching the flashing lights, but soon she notices the > buttons, the index finger goes out, and a delighted Ms. > Soibatian is ready to buy again. > > "You're never too young to learn, and kids nowadays are more > advanced because of all these educational toys," said Iesha > Middleton, another parent shopping at Toys "R" Us. Ms. > Middleton's son will be 3 next month. "I tried to teach my > son his ABC's when he was 1, and I didn't get very far, but > with the Leapster, he learned A-Z really fast, and he can > count up to 50." > > Even Sesame Workshop, long the torchbearer in children's > educational media, is moving into the infant market, with > new "Sesame Beginnings" DVD's for babies 6 months and up. > > "There are all these babies watching videos, and we wanted > to address the reality that's out there and come up with > something that is at least appropriate," said Gary Knell, > Sesame's president. "Ours are about sharing and caring, > modeling good parenting, not the cognitive approaches that > are more appropriate for 3- or 4-year-olds. We won't be > making any boastful claims about school success." > > Others have less restrained marketing: The "Brainy Baby - > Left Brain" package has a cover featuring a cartoon baby > with a thought balloon saying, "2 + 2 = 4" and promises that > it will inspire logical thinking and "teach your child about > language and logic, patterns and sequencing, analyzing > details and more." > > The V.Smile video game system - a "TV Learning System" > introduced last year - features the motto "Turn Game Time > into Brain Time" and cartridges called "smartridges." The > V.Smile, named "Best Toy of the Year" at the toy industry's > 2005 trade show, has a television ad where a mom tells her > children, "You'll never get into college if you don't play > your video games!" The game says it is designed for children > 3 to 7. > > There are, as yet, no reliable estimates of the size of the > market for such devices, but at toy stores nationwide, they > are selling briskly. 
> > Educational toy companies say their products are designed > with the existing educational and developmental research in > mind but add that more research on media effects would be > helpful. > > "There's nothing that shows it helps, but there's nothing > that shows it's does harm, either," said Marcia Grimsley, > senior producer of "Brainy Baby" videos. "Electronics are > part of our world, and I think that, used appropriately, > they can benefit children." > > Ms. Rideout says parents need more help sorting through the > array of electronic media: "We have detailed guidelines for > advertising and labeling products like down pillows and > dietary supplements, but not for marketing education media > products," she said. > > Warren Buckleitner, editor of Children's Technology Review, > has watched children play with many of the new products and > believes that many of them have great education potential > for older preschoolers. > > "We spend a great percentage of our energy in preschool > teaching kids about symbols, and interactive electronics are > very good teachers of symbols," Mr. Buckleitner said. > "V.Smile is like a hyperactive nanny with flashcards. We had > a 4-year-old, on the cusp of reading, who was so excited > about finding words in the maze that she got addicted, in an > arcade-ish way, and wrote down 20 words on a piece of scrap > paper, then came and said, 'Look at my word collection.' I > asked if she could read them and she could. It was very > motivating for her." > > It does not work that way when the toy does not fit the > child's developmental stage or pace: Mr. Buckleitner > remembers a 2-year-old playing with an interactive > electronic toy, but not understanding the green "go" button; > after coaching from her mother, when she touched a cow that > mooed, she was frustrated by the cow's continued mooing > while she touched five other pictures. "The design people > are still learning, so the technology will get better," Mr. > Buckleitner said. > > Still, he concedes that in teaching small children, "There's > not an educator alive who would disagree with the notion > that concrete and real are always better." > > Research bears that out. In a line of experiments on early > learning included in a research review by Dan Anderson, a > University of Massachusetts psychology professor, one group > of 12- to 15-month-olds was given a live demonstration of > how to use a puppet, while another group saw the > demonstration on video. The children who saw the live > demonstration could imitate the action - but the others had > to see the video six times before they could imitate it. > > "As a society, we are in the middle of a vast uncontrolled > experiment on our infants and toddlers growing up in homes > saturated with electronic media," Mr. Anderson said. > >------------------------------------------------------------------------ > >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > -------------- next part -------------- An HTML attachment was scrubbed... 
From Euterpel66 at aol.com Wed Dec 28 16:10:22 2005
From: Euterpel66 at aol.com (Euterpel66 at aol.com)
Date: Wed, 28 Dec 2005 11:10:22 EST
Subject: [Paleopsych] Japanese Baby
Message-ID: <258.45b589d.30e412ee@aol.com>

that is hysterical. When my daughter was at NYU living in the city, she took a pix of a street corner in Greenwich Village with about four Orientals all with cameras, taking pics at the same time. I'll have to see if I can dig it up.

Lorraine Rice

Believe those who are seeking the truth. Doubt those who find it. ---Andre Gide

http://hometown.aol.com/euterpel66/myhomepage/poetry.html

From checker at panix.com Wed Dec 28 23:26:32 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 28 Dec 2005 18:26:32 -0500 (EST)
Subject: [Paleopsych] BBS: Folk biology and the anthropology of science
Message-ID: 

Folk biology and the anthropology of science: Cognitive Universals and Cultural Particulars
http://www.bbsonline.org/documents/a/00/00/04/23/bbs00000423-00/bbs.atran.html

Scott Atran
Centre National de la Recherche Scientifique (CREA - Ecole Polytechnique)
1 rue Descartes, 75005 Paris, FRANCE
and Institute for Social Research, The University of Michigan, Ann Arbor MI 48106-1248, USA
satran at umich.edu

Keywords: Folk biology, taxonomy, cognitive universals, modularity, evolution, culture, Maya, anthropology

Abstract: This essay in the "anthropology of science" is about how cognition constrains culture in producing science. The example is folk biology, whose cultural recurrence issues from the very same domain-specific cognitive universals that provide the historical backbone of systematic biology. Humans everywhere think about plants and animals in highly structured ways. People have similar folk-biological taxonomies composed of essence-based species-like groups and the ranking of species into lower- and higher-order groups. Such taxonomies are not as arbitrary in structure and content, nor as variable across cultures, as the assembly of entities into cosmologies, materials or social groups. These structures are routine products of our "habits of mind," which may be in part naturally selected to grasp relevant and recurrent "habits of the world." An experiment illustrates that the same taxonomic rank is preferred for making biological inferences in two diverse populations: Lowland Maya and Midwest Americans. These findings cannot be explained by domain-general models of similarity because such models cannot account for why both cultures prefer species-like groups, despite the fact that Americans have relatively little actual knowledge or experience at this level. This supports a modular view of folk biology as a core domain of human knowledge and as a special player, or "core meme," in the selection processes by which cultures evolve. Structural aspects of folk taxonomy provide people in different cultures with the built-in constraints and flexibility that allow them to understand and respond appropriately to different cultural and ecological settings. Another set of reasoning experiments shows that the Maya, American folk and scientists use similarly structured taxonomies in somewhat different ways to extend their understanding of the world in the face of uncertainty. Although folk and scientific taxonomies diverge historically, they continue to interact.
The theory of evolution may ultimately dispense with the core concepts of folk biology, including species, taxonomy and teleology; in practice, however, these may remain indispensable for scientific work. Moreover, theory-driven scientific knowledge cannot simply replace folk knowledge in everyday life. Folk-biological knowledge is not driven by implicit or inchoate theories of the sort science aims to make more accurate and perfect.

INTRODUCTION [1]

In every human society, people think about plants and animals in the same special ways. These special ways of thinking, which can be described as "folk biology," are fundamentally different from the ways humans ordinarily think about other things in the world, such as stones, stars, tools or even people. The science of biology also treats plants and animals as special kinds of objects, but applies this treatment to humans as well. Folk biology, which is present in all cultures, and the science of biology, whose origins are particular to Western cultural tradition, have corresponding notions of living kinds. Consider four corresponding ways in which ordinary folk and biologists think of plants and animals as special.

First, people in all cultures classify plants and animals into species-like groups that biologists generally recognize as populations of interbreeding individuals adapted to an ecological niche. We will call such groups - such as redwood, rye, raccoon or robin - "generic species" for reasons that will become evident. Generic species are usually as obvious to a modern scientist as to local folk. Historically, the generic-species concept provided a pretheoretical basis for scientific explanation of the organic world in that different theories - including evolutionary theory - have sought to account for the apparent constancy of "common species" and for the organic processes that center on them (Wallace 1889/1901:1).

Second, there is a commonsense assumption that each generic species has an underlying causal nature, or essence, which is uniquely responsible for the typical appearance, behavior and ecological preferences of the kind. People in diverse cultures consider this essence responsible for the organism's identity as a complex, self-preserving entity governed by dynamic internal processes that are lawful even when hidden. This hidden essence maintains the organism's integrity even as it causes the organism to grow, change form and reproduce. For example, a tadpole and frog are in a crucial sense the same animal although they look and behave very differently, and live in different places. Western philosophers, such as Aristotle and Locke, attempted to translate this commonsense notion of essence into some sort of metaphysical reality, but evolutionary biologists reject the notion of essence as such. Nevertheless, biologists have traditionally interpreted this conservation of identity under change as due to the fact that organisms have separate genotypes and phenotypes.

Third, in addition to the spontaneous division of local flora and fauna into essence-based species, such groups have "from the remotest period in... history... been classed in groups under groups. This classification [of generic species into higher- and lower-order groups] is not arbitrary like the grouping of stars in constellations" (Darwin 1872/1883:363).[2] The structure of these hierarchically included groups, such as white oak/oak/tree or mountain robin/robin/bird, is referred to as "folk-biological taxonomy."
Especially in the case of animals, these nonoverlapping taxonomic structures can often be scientifically interpreted in terms of speciation (that is, related species descended from a common ancestor by splitting off from a lineage).

Fourth, such taxonomies not only organize and summarize biological information; they also provide a powerful inductive framework for making systematic inferences about the likely distribution of organic and ecological properties among organisms. For example, given the presence of a disease in robins one is "automatically" justified in thinking that the disease is more likely to be present among other bird species than among nonbird species. In scientific taxonomy, which belongs to the branch of biology known as systematics, this strategy receives its strongest expression in "the fundamental principle of systematic induction" (Warburton 1967, Bock 1973). On this principle, given a property found among members of any two species, the best initial hypothesis is that the property is also present among all species that are included in the smallest higher-order taxon containing the original pair of species. For example, finding that the bacteria Escherichia coli share a hitherto unknown property with robins, a biologist would be justified in testing the hypothesis that all organisms share the property. This is because E. coli link up with robins only at the highest level of taxonomy, which includes all organisms.

As we shall see, these four corresponding notions issue from a specific cognitive structure, which may be a faculty of the human mind that is innately and uniquely attuned to perceiving and conceptually organizing living kinds. The evolutionary origins of such a faculty arguably involved selection pressures bearing on immediate utility, such as obtaining food and surviving predators and toxins. In no society, however, do people exclusively classify plants and animals because they are useful or harmful. This claim goes against the generally received view that folk biologies are primarily utilitarian, and that scientific biology emerged in part to expel this utilitarian bias from systematic thinking about the living world. Rather, the special ways people classify organic nature enable them to systematically relate fairly well-delimited groups of plants and animals to one another in indefinitely many ways, and to make reasonable predictions about how biological properties are distributed among these groups, regardless of whether or not those properties are noxious or beneficial.

Although folk biology and the science of biology share a psychological structure, they apply somewhat different criteria of relevance in constructing and interpreting notions of species, underlying causal structure, taxonomy and taxonomy-based inference. Given the universal character of folk biology, a plausible speculation is that it evolved to provide a generalized framework for understanding and appropriately responding to important and recurrent features in hominid ancestral environments. By contrast, the science of biology has developed to understand an organization of life in which humans play only an incidental role no different from other species. Thus, although there are striking similarities between folk taxonomies and scientific taxonomies, we will also find that there are radical differences.
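[The principle of systematic induction described above is, computationally, just a lowest-common-ancestor lookup in a taxonomic tree: project the shared property to everything under the smallest taxon that contains both premise species. A minimal Python sketch, using an invented toy taxonomy rather than any real classification:

# Toy taxonomy as a child -> parent map (illustrative only).
parent = {
    "robin": "bird", "eagle": "bird", "bird": "animal",
    "dog": "mammal", "mammal": "animal", "animal": "organism",
    "E. coli": "bacteria", "bacteria": "organism",
}

def lineage(taxon):
    """Path from a taxon up to the root of the taxonomy."""
    path = [taxon]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def smallest_containing_taxon(a, b):
    """Lowest-ranked taxon that includes both a and b."""
    ancestors_of_a = set(lineage(a))
    for taxon in lineage(b):
        if taxon in ancestors_of_a:
            return taxon

# A property shared by robins and eagles projects to all birds;
# one shared by robins and E. coli projects to all organisms.
print(smallest_containing_taxon("robin", "eagle"))    # bird
print(smallest_containing_taxon("robin", "E. coli"))  # organism
]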
To explore how these different criteria of relevance function, the folk-biological taxonomies of American students and Maya Indians are compared and contrasted below with scientific taxonomies. In this target article, we first describe universal aspects of folk biology. We then show where and why folk biology and scientific biology converge and diverge. In the final part, we explain how folk biology and scientific biology continue to interact in the face of the historical differences that have emerged between them. The focus is on taxonomy and taxonomy-based inference. The general approach belongs to "the anthropology of science," which this paper illustrates. The examples of biology do not apply straightaway to all of science, any more than those of systematics apply to all of biology, but they are central enough in the history of science to be a good place to begin.

1. Folk-Biological Taxonomy.

Over a century of ethnobiological research has shown that even within a single culture there may be several different sorts of "special-purpose" folk-biological classifications that are organized by particular interests for particular uses (e.g., beneficial versus noxious, domestic versus wild, edible versus inedible, etc.). Only in the last decades has intensive empirical and theoretical work revealed a cross-culturally universal "general-purpose" taxonomy (Berlin, Breedlove & Raven 1973) that supports systematic reasoning about living kinds, and properties of living kinds, in the face of uncertainty (Atran 1990). For example, learning that one cow is susceptible to "mad cow" disease one might reasonably infer that all cows are susceptible to the disease but not that all mammals or animals are.

This "default" folk-biological taxonomy, which serves as an inductive compendium of biological information, is composed of a fairly rigid hierarchy of inclusive groups of organisms, or taxa. At each level of the hierarchy, the taxa, which are mutually exclusive, partition the locally perceived biota in a virtually exhaustive manner. Lay taxonomy, it appears, is everywhere composed of a small number of absolutely distinct hierarchical levels, or ranks. Anthropologist Brent Berlin (1992) has established the standard terminology for folk-biological ranks as follows: the "folk-kingdom" rank (e.g., animal, plant), the "life-form" rank (e.g., bug, fish, bird, mammal, tree, herb/grass, bush), the "generic" or "generic-species" rank (e.g., gnat, shark, robin, dog, oak, clover, holly), the "folk-specific" rank (poodle, white oak) and the "folk-varietal" rank (toy poodle; spotted white oak). Taxa of the same rank tend to display similar linguistic, biological and psychological characteristics.

1.1. The Significance of Rank.

Rank allows generalizations to be made across classes of taxa at any given level. For example, the living members of a taxon at the generic-species level generally share a set of biologically important features that are functionally stable and interdependent (homeostasis); members can generally interbreed with one another but not with the living members of any other taxon at that level (reproductive isolation). Taxa at the life-form level generally exhibit the broadest fit (adaptive radiation) of morphology (e.g., skin covering) and behavior (e.g., locomotion) to habitat (e.g., air, land, water). Taxa at the subordinate folk-specific and folk-varietal levels often reflect systematic attempts to demarcate biological boundaries through cultural preferences.
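[Because rank is a property of a taxon over and above its position in the inclusion hierarchy, it is natural to model it as an explicit label rather than as tree depth. A minimal sketch with invented example taxa (not data from the studies reported below), showing how a generalization can quantify over every taxon of a given rank:

# Each taxon carries a rank label and a parent; ranks are second-order
# classes of taxa, not groups of organisms.
taxa = {
    "plant":             ("folk-kingdom",    None),
    "tree":              ("life-form",       "plant"),
    "oak":               ("generic-species", "tree"),
    "white oak":         ("folk-specific",   "oak"),
    "spotted white oak": ("folk-varietal",   "white oak"),
    "animal":            ("folk-kingdom",    None),
    "bird":              ("life-form",       "animal"),
    "robin":             ("generic-species", "bird"),
}

def taxa_of_rank(rank):
    return [name for name, (r, _) in taxa.items() if r == rank]

# A rank-level generalization, e.g., reproductive isolation, applies to
# every generic species at once, however unrelated the taxa are:
for t in taxa_of_rank("generic-species"):
    print(t, "- members interbreed only within this taxon")

Oak and robin share no node below the kingdoms, yet the loop treats them alike; that is the difference between common rank and common ancestry that the next paragraphs spell out.]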
The generalizations that hold across taxa of the same rank (i.e., a class of taxa) thus differ in logical type from generalizations that apply only to this or that taxon (i.e., a group of organisms). Termite, pig and lemon tree are not related to one another by virtue of any simple relation of class inclusion or connection to some common hierarchical node, but by dint of their common rank - in this case the level of generic species.

Notice that a system of rank is not simply a hierarchy, as some suggest (Rosch 1975, Premack 1995, Carey 1996). Hierarchy, that is, a structure of inclusive classes, is common to many cognitive domains, including the domain of artifacts. For example, chair often falls under furniture but not vehicle, and car falls under vehicle but not furniture. But there is no ranked system of artifacts:[3] no inferential link, or inductive framework, spans both chair and car, or furniture and vehicle, by dint of a common rank, such as the artifact species or the artifact family. In other words, in many domains there is hierarchy without rank, but only in the domain of living kinds is there always rank. Ranks and taxa are of a different logical order, and confounding them is a category mistake. Biological ranks are second-order classes of groups (e.g., species, family, kingdom) whose elements are first-order groups (e.g., lion, feline, animal). Ranks seem to vary little, if at all, across cultures as a function of theories or belief systems. In other words, ranks are universal but not the taxa they contain. Ranks represent fundamentally different levels of reality, not convenience.

Consider: The most general rank is the folk kingdom,[4] that is, plant or animal. Such taxa are not always explicitly named but they represent the most fundamental divisions of the biological world. These divisions correspond to the notion of "ontological category" in philosophy (Donnellan 1971) and psychology (Keil 1979). From an early age humans cannot help but conceive of any object they see in the world as either being or not being an animal, and there is evidence for an early distinction between plants and nonliving things (Gelman & Wellman 1991, Keil 1994, Hickling & Gelman 1995, Hatano & Inagaki 1996). Conceiving of an object as a plant or animal seems to carry certain assumptions that are not applied to objects thought of as belonging to other ontological categories, like person, substance or artifact.

The next rank down is that of life form.[5] The majority of taxa of lesser rank fall under one or another life form. Most life-form taxa are named by lexically unanalyzable names (primary lexemes), and have further named subdivisions, such as tree and bird. Biologically, members of a single life-form taxon are diverse. Psychologically, members of a life-form taxon share a small number of perceptual diagnostics, such as stem aspect, skin covering and so forth (Brown 1984). Life-form taxa may represent general adaptations to broad sets of ecological conditions, such as competition among single-stem plants for sunlight and tetrapod adaptation to life in the air (Hunn 1982, Atran 1985a). Classification by life form may occur relatively early in childhood. For example, familiar kinds of quadrupeds (e.g., dogs and horses) are classified separately from sea versus air animals (Mandler, Bauer & McDonough 1991; Dougherty 1979 for American plants; Stross 1973 for Maya).

The core of any folk taxonomy is the rank of generic species, which contains by far the most numerous taxa in any folk-biological system.
Taxa of this rank generally fall under some life form, but there may be outliers that are unaffiliated with any major life-form taxon.[6] This is often so for a plant or an animal of particular cultural interest, such as maize for Maya (Berlin, Breedlove & Raven 1974) and the cassowary for the Karam of New Guinea (Bulmer 1970).

Like life-form taxa, generic-species taxa are usually named by primary lexemes, such as oak and robin. Occasionally, generic-species names exhibit variant forms of what systematists refer to as binomial nomenclature: for example, binomial compounds, such as hummingbird, or binomial composites, such as oak tree. In both these cases the binomial makes the hierarchical relation apparent between the generic species and the life form. Generic species often correspond to scientific genera or species, at least for those organisms that humans most readily perceive, such as large vertebrates and flowering plants. On occasion, generic species correspond to local fragments of biological families (e.g., vulture), orders (e.g., bat) and, especially with invertebrates, even higher-order taxa (Atran 1987a, Berlin 1992). Generic species also tend to be the categories most easily recognized, most commonly named and most readily learned in small-scale societies (Stross 1973).

Generic species may be further divided at the folk-specific level. Folk-specific taxa are usually labeled binomially, with secondary lexemes. Such compound names make transparent the hierarchical relation between a generic species and its subordinate taxa, like white oak and mountain robin. However, folk-specific taxa that belong to a generic species with a long tradition of high cultural salience may be labeled with primary lexemes, like winesap (a kind of apple tree) and tabby (a kind of cat). Partitioning into subordinate taxa usually occurs as a set of two or more taxa that contrast lexically along some readily perceptible dimension (color, size, etc.); however, such contrast sets often involve cultural distinctions that language and perception alone do not suffice to explain (Hunn 1982). An example is the Itzaj Maya contrast between red mahogany (chäk chäk-al-te') and white mahogany (säk chäk-al-te'). Red mahogany actually appears to be no redder than white mahogany. Rather, red mahogany is preferred for its beauty because it has a deeper grain than white mahogany. It is "red" as opposed to "white" probably because Lowland Maya traditionally associate red with the true wind of the East, which brings rain and bounty, and white with the false wind of the North, which brings deception (Atran in press).

In general, whether or not a generic species is further differentiated depends on cultural importance. Occasionally, an important folk-specific taxon will be further subdivided into contrasting folk-varietal taxa, such as short-haired tabby and long-haired tabby. Varietals are usually labeled trinomially, with tertiary lexemes that make transparent their taxonomic relationship with superordinate folk-specifics and generic species. An example is spotted white oak.

Foreign organisms introduced into a local environment are often initially assimilated to generic species through folk-specific taxa. For example, European colonists originally referred to New World maize as "Indian corn," that is, a kind of wheat. Similarly, Maya initially dubbed Old World wheat "Castilian maize."
Over time, as the introduced species acquired its own distinctive role in the local environment, it would assume generic-species status and would, as with most other generic species, be labeled by a single lexeme (e.g., "corn" in American English now refers exclusively to maize). Finally, intermediate levels also exist between the generic-species and life-form levels. Taxa at these levels usually have no explicit name (e.g., rats + mice but no other rodents), although they sometimes do (e.g., felines, palms). Such taxa - especially unnamed "covert" ones - tend not to be as clearly delimited as generic species or life forms; nor does any one intermediate level always constitute a fixed taxonomic rank that partitions the local fauna and flora into a mutually exclusive and virtually exhaustive set of broadly equivalent taxa. Still, there is a psychologically evident preference for forming intermediate taxa at a level roughly between the scientific family (e.g., canine, weaver bird) and order (e.g., carnivore, passerine) (Atran 1983, Berlin 1992). 1.2. The Generic Species: Principal Focus of Biological Knowledge. People in all cultures spontaneously partition the ontological categories animal and plant into generic species in a virtually exhaustive manner. "Virtually exhaustive" means that when an organism is encountered that is not readily identifiable as belonging to a named generic species, it is still expected to belong to one. The organism is usually assimilated to one of the named taxa it resembles, although at times it is assigned an "empty" generic-species slot pending further scrutiny (e.g., "such-and-such a plant is some [generic-species] kind of tree," see Berlin in press). This partitioning of ontological categories seems to be part and parcel of the categories themselves: no plant or animal can fail to belong uniquely to a generic species. The term "generic species" is used here, rather than "folk genera/folk generic" (Berlin 1972) or "folk species/folk specieme" (Bulmer 1970), for three reasons:[7] (1) a principled distinction between biological genus and species is not pertinent to most people around the world. For humans, the most phenomenally salient species (including most species of large vertebrates, trees, and phylogenetically isolated groups such as palms and cacti) belong to monospecific genera in any given locale.[8] Closely related species of a polytypic genus are often hard to distinguish locally, and no readily perceptible morphological or ecological "gap" can be discerned between them (Diver 1940). (2) The term "generic species" reflects a more accurate sense of the correspondence between the most psychologically salient folk-biological groups and the most historically salient scientific groups (Stevens 1994). The distinction between genus and species did not appear until the influx of newly discovered species from around the world compelled European naturalists to sort and remember them within a worldwide system of genera built around (mainly European) species types (Atran 1987a). (3) The term "generic species" reflects a dual character. As salient mnemonic groups, they are akin to genera in being those groups most readily apparent to the naked eye (Cain 1956). As salient causal groups, they are akin to species in being the principal centers of evolutionary processes responsible for biological diversity (Mayr 1969). 1.2.1. The Evolutionary Sense of an Essence Concept. 
From the standpoint of hominid evolution, the concept of such an essential kind may represent a balancing act between what our ancestors could and could not afford to ignore about their environment. The concept of generic species allows people to perceive and predict many important properties that link together the members of a biological species actually living together at any one time, and to distinguish such species from one another. By contrast, the ability to appreciate the graded phylogenetic relationships between scientific species, which involve vast expanses of geological time and geographical space, would be largely irrelevant to the natural selection pressures on hominid cognition. Ernst Mayr (1969) calls such "local" species, which are readily observed over one or a few generations to coexist in a given local environment, "non-dimensional species" for two reasons: they are manifest to the untrained eye, with no need for theoretical reflection; and the perceptible morphological, ecological and reproductive gaps separating such species summarize the evolutionary barriers between them. Mayr argues that the awareness of non-dimensional species provides the necessary condition for further insight and exploration into phylogenetic species; any sufficient condition for scientific understanding, however, must go beyond essentialism. People ordinarily assume that the various members of each generic species share a unique underlying nature, or essence. This assumption carries the inference of a strong causal connection between superficially dissimilar or noncontiguous states or events - an inference that other animals or primates do not seem capable of making (cf. Kummer 1994). People reason that even three-legged, purring, albino tiger cubs are by nature large, striped, roaring, carnivorous quadrupeds. This is because there is presumably something "in" tigers that is the common cause of them growing large, having stripes, eating meat and roaring under "normal" conditions of existence. People expect the disparate properties of a species to be integrally linked without having to know precise causal relationships. A biological essence is an intrinsic (i.e., nonartifactual) teleological agent, which physically (i.e., nonintentionally) causes the biologically relevant parts and properties of a generic species to function and cohere "for the sake of" the generic species itself. For example, even preschoolers in our culture consistently judge that the thorns on a rose bush exist for the sake of there being more roses, whereas physically similar depictions of barbs on barbed wire or the protuberances of a jagged rock are not considered to exist for the sake of there being more barbed wire or jagged rocks (Keil 1994). This concept of underlying essence goes against the claim that "biological essentialism is the theoretical elaboration of the logical-linguistic concept, substance sortal" that applies to every count noun (Carey 1996:194). Chair may be defined in terms of the human function it serves, and mud in terms of its physical properties, but neither have deep essences because neither is necessarily assumed to be the unique outcome of an imperceptible causal complex. For example, a three-legged or legless beanbag chair does not lack "its" legs, because although most chairs "normally" have four legs they are not quadrupedal by nature (cf. Schwartz 1978). Neither is the notion of essence merely that of a common physical property. 
Red things comprise a superficial natural class, but such things have little in common except that they are red; and they presumably have few, if any, features that follow from this fact. People the world over assume that the initially imperceptible essential properties of a generic species are responsible for the surface similarities they perceive. People strive to know these deeper properties but also assume that the nature of a species may never be known in its entirety. This cognitive compulsion to explore the underlying nature of generic species produces a continuing and perhaps endless quest to better understand the surrounding natural world, even though such understanding seldom becomes globally coherent or consistent. 1.2.2. A Taxonomic Experiment on Rank and Preference. Given these observations, cognitive studies of the "basic level" are at first sight striking and puzzling. In a justly celebrated set of experiments, Rosch and her colleagues set out to test the validity of the notion of a psychologically preferred taxonomic level (Rosch, Mervis, Gray, Johnson & Boyes-Braem 1976). Using a broad array of converging measures, they found that there is indeed a "basic level" in category hierarchies of "naturally occurring objects," in "taxonomies" of artifacts as well as living kinds. For artifact and living kind hierarchies alike, the basic level is where: (1) many common features are listed for categories, (2) consistent motor programs are used for the interaction with or manipulation of category exemplars, (3) category members have similar enough shapes so that it is possible to recognize an average shape for objects of the category, (4) the category name is the first one to come to mind in the presence of an object (e.g., "table" versus "furniture" or "kitchen table"). There is a problem, however: the basic level that Rosch et al. (1976) had hypothesized for artifacts was confirmed (e.g., hammer, guitar), but the hypothesized basic level for living kinds (e.g., maple, trout), which Rosch initially presumed would accord with the generic-species level, was not. For example, instead of maple and trout, Rosch et al. found that tree and fish operated as basic-level categories for American college students. Thus, Rosch's basic level for living kinds generally corresponds to the life-form level, which is superordinate to the generic-species level (cf. Zubin & Köpcke 1986 for findings with German). To explore this apparent discrepancy between preferred taxonomic levels in small-scale and industrialized societies, and the cognitive nature of ethnobiological ranks in general, we use inductive inference. Although a number of converging measures have been used to explore the notion of basic levels, there has been little direct examination of the relationship between inductive inference and basic levels. This is all the more surprising given that a number of psychologists and philosophers assume that basic-level categories maximize inductive potential as intuitive "natural kinds" which "scientific disciplines evolve to study" (Carey 1985:171; cf. Gelman 1988, Millikan in press). Inference studies allow us to test directly whether there is a psychologically preferred rank that maximizes the strength of any potential induction about biologically relevant information, and whether this preferred rank is the same across cultures.
If a preferred level carries the most information about the world, then categories at that level should favor a wide range of inferences about what is common among members (cf. Anderson 1990). The prediction is that inferences to a preferred category (e.g., white oak to oak, tabby to cat) should be much stronger than inferences to a superordinate category (oak to tree, cat to mammal). Moreover, inferences to a subordinate category (spotted white oak to white oak, short-haired tabby to tabby) should not be much stronger than or different from inferences to a preferred category. What follows is a summary of results from one representative set of experiments in two very diverse populations: Midwestern Americans and Lowland Maya (for complete results see Atran, Estin, Coley & Medin in press; Coley, Medin & Atran in press). 1.2.2.1. Subjects and Methods. The Itzaj are Maya Amerindians living in the Petén rainforest region of Guatemala. Until recently, men devoted their time to shifting agriculture, hunting and silviculture, whereas women concentrated on the myriad tasks of household maintenance. The Itzaj comprised the last independent native polity to be conquered by Spaniards (in 1697), and they have preserved virtually all ethnobiological knowledge recorded for Lowland Maya since the time of the initial Spanish conquest (Atran 1993). Despite the current awesome rate of deforestation and the decline of Itzaj culture, the language and ethic of traditional Maya silviculture are still very much in evidence among the generation of our informants, who range in age from 50 to 80 years. The Americans were people self-identified as raised in Michigan, recruited through an advertisement in a local newspaper. Based on extensive fieldwork with the Itzaj, we chose a set of Itzaj folk-biological categories of the kingdom (K), life-form (L), generic-species (G), folk-specific (S), and folk-varietal (V) ranks. We selected three plant life forms: che' = tree, ak' = vine, pok~che' = herb/bush. We also selected three animal life forms: b'a'al~che' kuxi'mal = "walking animal," i.e., mammal, ch'iich' = birds including bats, käy = fish. Three generic-species taxa were chosen from each life form such that each generic species had a subordinate folk-specific, and each folk-specific had a salient varietal. Pretesting showed that participants were willing to make inferences about hypothetical diseases. The properties chosen for animals were diseases related to the "heart" (puksik'al), "blood" (k'ik'el), and "liver" (tamen); for plants, diseases related to the "roots" (motz), "sap" (itz) and "leaf" (le'). Properties were chosen according to Itzaj beliefs about the essential, underlying aspects of life's functioning. Thus, the Itzaj word puksik'al, in addition to identifying the biological organ "heart" in animals, also denotes "essence" or "heart" in both animals and plants. The term motz denotes "roots," which is considered the initial locus of the plant puksik'al. The term k'ik'el denotes "blood" and is conceived as the principal vehicle for conveying life from the puksik'al throughout the body. The term itz denotes "sap," which functions as the plant's k'ik'el. The tamen, or "liver," helps to "center" and regulate the animal's puksik'al. The le', or "leaf," is the final locus of the plant puksik'al. Properties used for inferences had the form "is susceptible to a disease of the [heart/blood/liver/roots/sap/leaf] called X." For each question, "X" was replaced with a phonologically appropriate nonsense name (e.g.,
"eta") in order to minimize the task's repetitiveness. All participants responded to a list of over 50 questions in which they were told that all members of a category had a property (the premise) and were asked whether "all," "few," or "no" members of a higher-level category (the conclusion category) also possessed that property. The premise category was at one of four levels, either life-form (e.g. L = bird), generic-species (e.g. G = vulture), folk-specific (e.g. S= black vulture), or varietal (e.g. V = red-headed black vulture). The conclusion category was drawn from a higher-level category, either kingdom (e.g. K = animal), life-form (L), generic-species (G), or folk-specific (S). Thus, there were ten possible combinations of premise and conclusion category levels: L->K, G->K, G->L, S->K, S->L, S->G, V->K, V->L, V->G, and V->S. For example, a folk-specific-to-life form (S->L) question might be, "If all black vultures are susceptible to the blood disease called eta, are all other birds susceptible?" If a participant answers "no," then the follow-up question would be "Are some or a few other birds susceptible to disease eta, or no other birds at all?" The corresponding life forms for the Americans were: mammal, bird, fish, tree, bush and flower (on flower as an American life form see Dougherty 1979). The properties used in questions for the Michigan participants were "have protein X," "have enzyme Y," and "are susceptible to disease Z." These were chosen to be internal, biologically based properties intrinsic to the kind in question, but abstract enough so that rather than answering what amounted to factual questions participants would be likely to make inductive inferences based on taxonomic category membership. 1.2.2.2. Results. Representative findings are given in Figure 1. Responses were scored in two ways. First we totaled the proportion of "all or virtually all" responses for each kind of question (e.g., the proportion of times respondents agreed that if red oaks had a property, all or virtually all oaks would have the same property). Second, we calculated "response scores" for each item, counting a response of "all or virtually all" as 3, "some or few" as 2, and "none or virtually none" as 1. A higher score reflected more confidence in the strength of an inference. Figure 1a summarizes the results from all Itzaj informants for all life forms and diseases, and shows the proportion of "all" responses (black), "few" responses (checkered), and "none" responses (white). For example, given a premise of folk-specific (S) rank (e.g., red squirrel) and a conclusion category of generic-species (G) rank (e.g., squirrel), 49% of responses indicated that "all" squirrels, and not just "some" or "none," would possess a property that red squirrels have. Results were obtained by totaling the proportion of "all or virtually all" responses for each kind of question (e.g., the proportion of times respondents agreed that if red oaks had a property, all or virtually all oaks would have the same property). A higher score represented more confidence in the strength of the inductive inference. Figure 1b summarizes the results of Michigan response scores for all life forms and biological properties. Response scores were analyzed using t-tests with significance levels adjusted to account for multiple comparisons. Figure 2 summarizes the significant comparisons (p-values) for "all" responses, "none" responses and combined responses. 
For all comparisons, n = 12 Itzaj participants and n = 21 American participants (for technical details see Atran et al. in press). Moving along the main diagonals of Figures 1 and 2 corresponds to changing the levels of both the premise and conclusion categories while keeping their relative level the same (with the conclusion one level higher than the premise). Induction patterns along the main diagonal indicate a single inductively preferred level. Examining inferences from a given rank to the adjacent higher-order rank (i.e., V->S, S->G, G->L, L->K), we find a sharp decline in strength of inferences to taxa ranked higher than generic species, whereas V->S and S->G inferences are nearly equal and similarly strong. Notice that for "all" responses, the overall Itzaj and Michigan patterns are nearly identical. Moving horizontally within each graph in Figures 1 and 2 corresponds to holding the premise category constant and varying the level of the conclusion.[9] Here we find the same pattern for "all" responses for both Itzaj and Americans as we did along the main diagonal. However, in the combined response scores ("all" + "few") there is now evidence of increased inductive strength for higher-order taxa among Americans versus Itzaj. On this analysis, both Americans and Itzaj show the largest break between inferences to generic species versus life forms. But only American subjects also show a consistent pattern of rating inferences to life-form taxa higher than to taxa at the level of the folk kingdom: G->K vs. G->L, S->K vs. S->L, and V->K vs. V->L. Finally, moving both horizontally and along the diagonal, for Itzaj there is some hint of a difference between inductions using conclusions at the generic-species versus folk-specific levels: V->G and S->G are modestly weaker than V->S. Regression analysis reveals that for Itzaj, the folk-specific level accounts for a small proportion of the variance beyond the generic species (1.4%), but a significant one (F > 4). For Michigan participants, the folk-specific level is not differentiated from the generic-species level (0.2%, not significant). In fact, most of the difference between V->G and V->S inductions results from inference patterns for the Itzaj tree life form. There is evidence that Itzaj confer some preferential status upon trees at the folk-specific level (e.g., the savanna nance tree). Itzaj are forest-dwelling Maya with a long tradition of agroforestry that antedates the Spanish conquest (Atran 1993). 1.2.2.3. Discussion. These results indicate that both the ecologically inexperienced Americans and the ecologically experienced Itzaj prefer taxa of the generic-species rank in making biological inferences; the findings go against a simple relativist account of cultural differences in folk-biological knowledge. However, the overall effects of cultural experience on folk-biological reasoning are reflected in more subtle ways that do not undermine an absolute preference for the generic species across cultures. In particular, the data point to a relative downgrading of inductive strength to higher ranks among industrialized Americans through knowledge attrition owing to lack of experience, and a relative upgrading of inductive strength to lower ranks among silvicultural Maya through expertise. A secondary reliance on life forms arguably owes to Americans' general lack of actual experience with generic species (Dougherty 1978).
In one study, American students used only the name "tree" to refer to 75% of the species they saw on a nature walk (Coley, Medin & Atran in press). Although Americans usually can't tell the difference between beeches and elms, they expect biological action in the world to occur at the level of beeches and elms, not at the level of tree. Yet without being able at least to recognize a tree, they would not even know where to begin to look for the important biological information. The Itzaj pattern reflects both an overall preference for generic species and the secondary importance of lower-level distinctions, at least for kinds of trees. A strong ethic of reciprocity in silviculture still pervades Itzaj life; the Maya tend the trees so that the forest will tend to the Maya (Atran & Medin 1997). This seems to translate into an upgrading of biological interest in tree folk-specifics. These findings cannot be explained by appeals either to cross-domain notions of perceptual "similarity" or to the structure of the world "out there." On the one hand, if inferential potential were a simple function of perceptual similarity, then Americans should prefer life forms for induction (in line with Rosch et al.). Yet Americans prefer generic species, as do Maya. On the other hand, objective reality - that is, the actual distribution of biological species within groups of evolutionarily related species - does not substantially differ in the natural environments of Midwesterners and Itzaj. Unlike Itzaj, however, Americans perceptually discriminate life forms more readily than generic species. True, there are more locally recognized species of tree in the Maya area of Petén, Guatemala, than in the Midwestern United States. Still, the readily perceptible evolutionary "gaps" between species are roughly the same in the two environments (most tree genera in both environments are monospecific). If anything, one might expect that having fewer trees in the American environment would allow each species to stand out more from the rest (Hunn 1976). For birds, the relative distribution of evolutionarily related species also seems to be broadly comparable across temperate and rainforest environments (Boster 1988). An inadequacy in current accounts of preferred taxonomic levels may be a failure to distinguish domain-general mechanisms for best clustering stimuli from domain-specific mechanisms for best determining loci of biological information. To explain Rosch's data it may be enough to rely on domain-general, similarity-based mechanisms. Such mechanisms may generate a basic level in any number of cognitive domains, but not the preferred level of induction in folk biology. Perhaps humans are disposed to take tight clusters of covariant perceptual information as strong indicators of a rich underlying structure of biological information. This may be the "default" case for humans under "normal" conditions of learning and exposure to the natural world. By and large, people in small-scale societies would live under such "normal" conditions, involving the same general sorts of ambient circumstances that led to the natural selection of cognitive principles for the domain of folk biology. People in urban societies, however, may no longer live under such "default" conditions (except for hunters, bird watchers, etc.; see Tanaka & Taylor 1991). How, then, can people conceive of a given folk-biological category as a generic species without always (or mostly) relying on perception? Ancillary encyclopedic knowledge may be crucial.
Thus, one may have detailed knowledge of dogs but not oaks. Yet a story that indicates where an oak lives, or how it looks or grows, or that its life is menaced, may be sufficient to trigger the assumption that oaks comprise a generic species just as dogs do. But such cultural learning produces the same results under widely divergent conditions of experience in different social and ecological environments. This indicates that the learning itself is strongly motivated by cross-culturally shared cognitive mechanisms that do not depend primarily on experience. In conjunction with encyclopedic knowledge of the natural world, language is important in targeting preferred kinds. In experiments with children as young as two years old, Gelman and her colleagues showed that sensitivity to nomenclatural patterns and other linguistic cues helps guide folk-biological inferences about information that is not perceptually obvious, especially for categories believed to embody an essence (Gelman, Coley & Gottfried 1994; Hall & Waxman 1993). Language alone, however, is not enough to induce the expectation that little-known generic species convey more biological information than better-known life forms for Americans. Some other process must invest the generic-species level with inductive potential. Language can only signal that such an expectation is appropriate for a given lexical item; it cannot determine the nature of that expectation. Why assume that an appropriately tagged item is the locus of a "deep" causal nexus of biological properties and relationships? It is logically impossible that such assumptions and expectations come from (repeated exposure to) the stimuli themselves. Input to the mind alone cannot cause an instance of experience (e.g., a sighting in nature or in a picture book), or any finite number of fragmentary instances, to be generalized into a category that subsumes a rich and complex set of indefinitely many instances. This projective capacity for category formation can only come from the mind, not from the world alone. The empirical question, then, is whether this projective capacity of the mind is simply domain-general, or also domain-specific. For any given category domain - say, living kinds as opposed to artifacts or substances - the process would be domain-general if and only if one could generate the categories of any number of domains from the stimuli alone together with the very same cognitive mechanisms for associating and generalizing those stimuli. But current domain-general similarity models of category formation and category-based reasoning fail to account for the generic species as a preferred level of folk-biological taxonomy across cultures. Our findings suggest that fundamental categorization processes in folk biology are rooted in domain-specific conceptual assumptions rather than in domain-general perceptual heuristics. Subsistence cultures and industrialized cultures may differ in the level at which organisms are most easily identified, but both still treat the same absolute level of reality as preferable for biological reasoning, namely, the generic-species rank. This is because they expect the biological world to partition at that rank into nonoverlapping kinds, each with its own unique causal essence, whose visible products may or may not be readily perceived.
People anticipate that the biological information value of these preferred kinds is maximal whether or not there is also a visible indication of maximal covariation of perceptual attributes. This does not mean that more general perceptual cues have no inferential value when applied to the folk-biological domain. On the contrary, the evidence points to a significant role for such cues in targeting basic-level life forms as secondary foci for inferential understanding in a cultural environment where biological awareness is relatively poor, as among many Americans. Possibly there is an evolutionary design to having both domain-general perceptual heuristics and domain-specific learning mechanisms: the one enabling flexible adaptation to the variable conditions of experience; the other more invariable in steering us to those abiding aspects of biological reality that are causally recurrent and especially relevant for the emergence of human life and cognition. 1.3. Evolutionary Ramifications: Folk Biology as a Core Domain of Mind and Culture. A speculative but plausible claim in light of our observations and findings is that folk biology is a core domain for humans. A core domain is a semantic notion, philosophically akin to Kant's "synthetic a priori." The object domain, which consists of generic species of biological organisms, is the extension of an innate cognitive module. Universal taxonomy is a core module, that is, an innately determined cognitive structure that embodies the naturally selected ontological commitments of human beings and provides a domain-specific mode of causally construing the phenomena in its domain (for a more disembodied view of innate "modes of construal," see Keil 1995). In particular, the cognitive structure of folk biology specifies that generic species are the preferred kinds of things that partition the biological world, that these generic species are composed of causally related organisms that share the same vitalist (teleo-essentialist) structure, and that these generic species further group together into causally related but mutually exclusive groups under groups. In sum, the generic species is a core concept of the folk-biology module. Core modules share much with Fodor's (1983) input modules. Both are presumably naturally selected endowments of the human mind that are initially activated by a predetermined range of perceptual stimuli. However, there are differences. Input modules, unlike core modules, are hermetically closed cognitive structures that have exclusive access to the mental representations that such input systems produce. For example, syntactic-recognition schemata and facial-recognition schemata deal exclusively and entirely with syntactic recognition and facial recognition, respectively. By contrast, core modules have preferential rather than proprietary access to their domain-specific representations (Atran 1990:285). For example, core modules for naive physics, intuitive psychology or folk biology can make use of one another's inputs and outputs, although each module favors the processing of a different predetermined range of stimuli. Moreover, the ability to use a "metarepresentational module," which takes as inputs the outputs of all other modules, allows changes (restructurings and extensions) to operate over the initial core domain as a result of developing interactions with our external (ambient) and internal (cognitive) environment.
Flexibility in core modules, Sperber (1994) argues, makes evolutionary sense of how humans so quickly acquire distinct sorts of universal knowledge, which individuals and cultures can then work on and modify in various ways. Sperber's discussion also indicates, in principle, how ordinary people and cognitive scientists can manage the "combinatorial explosion" in human information without simply making it all grist for an inscrutable central-processing mill. A living-kind module enables humans to apprehend the biological world spontaneously as a partitioning into essence-based generic species and taxonomically related groups of generic species. This directs attention to interrelated and mutually constraining aspects of the plant and animal world, such as the diverse and interdependent functioning of heterogeneous body parts, maturational growth, inheritance and natural parentage, disease and death. Eventually, coherent "theories" of these causal interrelations might develop under particular learning conditions (Carey 1985) or historical circumstances (Atran 1990). Such systematic elaboration of biological causality, however, is not immediately observable or accessible. Core knowledge that is domain-specific should involve dedicated perceptual input analyzers, operating with little interference or second-guessing from other parts of the human conceptual system (Carey 1996, Gigerenzer in press). What might be the evolutionary algorithm that activates or triggers the living-kind module's selective attention to generic species? In the absence of experiments or other reliable data, we can only speculate. Evidence from other core domains, such as naive physics and intuitive psychology, helps as both guide and foil to speculation about triggering algorithms for a living-kind module. For humans as well as animals, there is some evidence of at least two distinct but hierarchically related triggering algorithms, each involving a dedicated perceptual input analyzer that attends to a restricted range of information. There is an algorithm that attends only to the external movements of rigid bodies that obey something like the laws of Newtonian mechanics in a high-friction environment. Thus, infants judge that an object moving on a plane surface will continue along that surface in a straight path until it stops, but will not jump and suspend itself in mid-air (Spelke 1990). There is also an algorithm that attends to the direction and acceleration of objects not predictable by "naive mechanics." If the motion pattern of one object on a computer screen centers on the position of another object, so that the first object circles around the second and speeds up towards or away from it, then infants judge the first object to be self-propelled or "animate" (Premack & Premack 1994). Of course, algorithms for animateness and intentionality can lead to mistakes. They surely did not evolve in response to selection pressures involving two-dimensional figures moving across computer screens. These inhabitants of flatland just happen to fall within the actual domains to which the modules for animacy and intentionality spontaneously extend, as opposed to the proper domains for which the modules evolved (i.e., animate beings and intentional agents). This is much as the actual domain of the frog's food-getting intelligence involves tongue flicking at dark points passing across the frog's field of vision, whereas the proper domain is catching flies (Sperber 1994).
Algorithms for animacy and intentionality do not suffice to discriminate just living kinds, that is, generic species. On the one hand, they fail to distinguish plants from non-living kinds. Yet people everywhere distinguish plants into generic species just as they do animals. An algorithm that cues primarily on the relative movement of heterogeneous and diversely connected parts around an object's center of gravity probably plays an important role in discerning animals and plants (perhaps first as they move in the wind, then as they grow, etc.), although it too may initially err (plastic plants, perhaps clothes on a line). On the other hand, algorithms for animacy and intentionality fail to distinguish humans from nonhuman living kinds, that is, plants and animals. It is animals and plants that are always individuated in terms of their unique generic species, whereas humans are individuated as both individual agents and social actors in accordance with inferred intentions rather than expected clusters of body parts. People individuate humans (as opposed to animals) with the additional aid of a variety of domain-specific "recognizers" for individual human faces, voices, gestures and gaits, which richly motivate inferences about motion and intention from rather partial and fleeting perceptual cues (Fodor 1983, Tooby & Cosmides 1992). Yet no known aboriginal culture - or any culture not exposed to Aristotle - believes that humans are animals or that there is an ontological category undifferentiated between humans and animals. Let us further speculate about the selection pressures involved in our automatic attention to human individuals versus our automatic attention to generic species. A characteristic of primates (and some other vertebrates) is that they are social animals who can distinguish individuals of their species, unlike termites, which cannot (Kummer, Daston, Gigerenzer & Silk in press). There is evidence that as long as two million years ago, Homo habilis relied upon nonkin to hunt, gather and scavenge for subsistence (Isaac 1983). Handling the social contracts required for this mode of subsistence probably demanded coalition forming and cooperation with nonkin, which in turn entailed a negotiation of intentions with individuals who could not be identified by indications of blood relationship. In regard to animals and plants, there is also evidence of varied and wide-ranging diet and subsistence patterns in hominid social camps at that time (Bunn 1983). In such a camp, it could be supremely important to know which individual should be recruited into a food-sharing coalition, if only to avoid "free riders" who take without giving (Cosmides & Tooby 1989). But it would hardly matter to know the individual identity of lions that could eat you, nettles that could sting you, or deer and mangoes that you could eat. Knowing not just the habits of particular species, but making taxonomic inferences about the habits and relationships of groups of biologically related species, would be likely to increase the effectiveness (benefit) of such knowledge-based subsistence immeasurably, with little or no added investment (cost) in time or effort (trial-and-error learning). The special evolutionary origins of domain-specific cognitive modules should have special bearing on cultural evolution. One might have expected the implications of domain specificity to be compelling for those who reason in line with Dawkins (1976), viewing the emergence of culture as a selection process.
Unfortunately, aside from notable exceptions (Sperber 1994; Tooby & Cosmides 1992; cf. Lumsden & Wilson 1981), the focus is primarily on how, for example, "Chinese minds differ radically from French minds" (Dennett 1995:365; cf. Cavalli-Sforza & Feldman 1981; Durham 1991). Nevertheless, Dawkins's idea may be a good one for the study of human cultures, suitably modified by the findings and concerns of cognitive anthropology. His idea is that there may be cultural units that function in social evolution just as there are biological units that function in biological evolution. He calls these units of cultural transmission "memes" - a word that sounds like "gene" and evokes Latin and Greek words for "imitation." One modification consists in restricting highly imitative, replicating memes to knowledge produced by core domains, that is, to memes that have an identifiable syntactic as well as a semantic aspect. In this respect, folk-biological knowledge is a core meme. A core meme, like universal taxonomy, differs from a developing meme, like the culturally specific elaboration of a scientific research program, in a number of interrelated ways. An apparent difference is in the closer resemblance of core memes to genes. First, for core memes, as for genes, there is a strong alignment of syntactic ("genotypic") and semantic ("phenotypic") identity. For example, the universal structure of folk-biological taxonomy arguably emerges from a modular cognitive capacity - a mental faculty - that evolved as an effective means of capturing perceptibly relevant and recurrent aspects of ancestral hominid environments. As a result, humans "conceptually perceive" the biological world in more or less the same way. Processes of perceiving and reasoning about generic species are intimately connected: they are guided by the same knowledge system. The folk-biology module focuses attention on perceptual information that can reveal that an object is a living kind, or organism, by uniquely assigning it to one or another of the fundamental partitions of the readily perceptible biological world. Thus, the key feature of folk biology, belonging to a preferred taxonomic rank and a causally essential category, is induced from spatiotemporal analysis via a triggering algorithm that attends to a limited set of perceptual cues whose presence signals an organism as belonging to a generic species. Second, for core memes, conceptual replication involves information being transmitted largely intact from physical vehicle to physical vehicle without any appreciable sequencing of vehicles. As in genetic replication, replication of core memes involves fairly high-fidelity copying and a relatively low rate of mutation and recombination. Mental representations of generic species, for instance, are transmitted from brain to brain via public representations such as uttered names and pointings (Sperber 1985). It often suffices, however, that a single fragmentary instance of experience - a naming or sighting by ostension in a natural or artificial setting - "automatically" triggers the transmission and projection of that instance into a richly structured taxonomic context (Atran & Sperber 1991). By contrast, a developing meme requires institutionalized channeling of information. For example, specific scientific schools or research programs involve more or less identifiable communities of scientists, journals, instruments, laboratories and so forth.
Institutionalization is necessary because the information is harder to learn and keep straight, but is also more readily transformed and extended into new or different knowledge. This often requires formal or informal instruction to sustain the sequencing of information, and to infuse output with added value by inciting or allowing transformation of input via interpolation, invention, selection, suppression and so forth (see Latour 1987 and Hull 1988 for different insights into institutional constraints). Third, a core meme does not depend for its survival on the cognitive division of labor in a society or on durable transmission media. For example, children can learn about species from written texts, films or picture books; nevertheless, noninstitutionalized transmission of such information in an illiterate society is usually quite reliable as long as there is an unbroken chain of oral communication (within the living memory of the collective) about events in the natural world. Developing memes, however, typically mobilize information of such quantity, diverse quality and expertise that single minds cannot - for lack of capacity or because of other cognitive demands - keep track of all that is needed to understand the information and pass it along. Because scientists can usually work on only bits and pieces of the information in the field at any particular time and place, but may also need to consult information elaborated elsewhere or left fallow for generations (e.g., Mendel's discoveries), durable media are required for that information to usefully endure. Fourth, a core meme does not primarily depend on metacognitive abilities, although it may make use of them (e.g., in stories, allegories, analogies). Growth of the harder-to-learn beliefs of developing memes requires the mingling of ideas from different sources, including different sorts of core memes. For example, numerical and mechanical knowledge now play important, and perhaps preponderant, roles in areas of molecular biology. Mingling of ideas implies the transfer of diverse domain-specific outputs into a domain-neutral representation. A domain-neutral metarepresentation can then function as input for further information processing and development. Fifth, the involvement of core memes in developing metacognitive memes that ride piggyback on core memes or stem from them, such as totemism or biological systematics, allows us in principle to distinguish the convergent evolution of memes across cultures from borrowing, diffusion and descent. If all memes were purely semantic, such a distinction might well be practically impossible in the absence of clear historical traces. One case of convergent evolution is the spontaneous emergence of totemism - the correspondence of social groups with generic species - at different times and in different parts of the world. Why, as Lévi-Strauss (1963) aptly noted, are totems so "good to think"? In part, totemism is metacognitive because it uses representations of generic species to represent groups of people; however, this pervasive metarepresentational inclination arguably owes its recurrence to its ability to ride piggyback on folk-biological taxonomy, which is not primarily or exclusively metacognitive. Consider: generic species and groups of generic species are inherently well-structured, attention-arresting, memorable and readily transmissible across minds.
As a result, they readily provide effective pegs on which to attach knowledge and behavior of less intrinsically well-determined social groups. In this way totemic groups can also become memorable, attention-arresting and transmissible across minds. These are the conditions for any meme to become culturally viable (see Sperber 1996 for a general view of culture along the lines of an "epidemiology of representations"). A significant feature of totemism that enhances both its memorability and its capacity to grab attention is that it violates the general behavior of biological species: members of a totem, unlike members of a generic species, generally do not interbreed, but mate only with members of other totems in order to create a system of social exchange. Notice that this violation of core knowledge is far from arbitrary. In fact, it is such a pointed violation of human beings' intuitive ontology that it readily mobilizes most of the assumptions people ordinarily make about biology in the service of building societies around the world (Atran & Sperber 1991). In the structuring of such metarepresentations, then, the net result appears close to an optimal balance between memorability, attention-grabbing power and flexibility in assimilating and adapting to new and relevant information. This assures both ease of transmissibility and longstanding cultural survival. More generally, incorporating recurrently emerging themes in religious and symbolic thought into cognitive science can be pursued as a research program that focuses on the transmission of metarepresentational elaborations of intuitive ontologies, or core memes (see Boyer 1994 for such a general framework for the study of religion). This distinction between convergent and descendant metacognitive memes is not absolute. Creationism, for example, has both cross-culturally recurrent themes of supernatural species reification and particular perspectives on the nature of species that involve outworn scientific theories as well as specific historical traditions. Here as well, knowledge of the universal core of such beliefs helps to identify what is, and what is not, beyond the range of ordinary common sense (Atran 1990). Finally, even aspects of the metarepresentational knowledge that science produces as output can feed back (as input) in subtle and varied ways into the core module's actual domain: for example, learning that whales aren't fish and that bats aren't birds. But the feedback process is also constrained by the intuitive bounds of domain-specific common sense (Atran 1987b). The message here is that evolutionary psychology might profit from a source barely tapped: the study of cultural transmission. Some bodies of knowledge have a life of their own, only marginally affected by social change (e.g., intuitive mechanics, basic color classification, folk-biological taxonomies); others depend for their transmission, and hence for their existence, on specific institutions (e.g., totemism, creationism, evolutionary biology).[10] This suggests that culture is not an integrated whole, relying for its transmission on undifferentiated cognitive abilities. But the message is also one of "charity" concerning the mutual understanding of cultures (Davidson 1984): anthropology is possible because underlying the variety of cultures are diverse but universal commonalities. This message also applies to the disunity and comprehensibility of science (part 3). 2.
Cultural Elaborations of Universal Taxonomy. Despite the evident primacy of ranked taxonomies in the elaboration of folk-biological knowledge in general, and the cognitive preference for generic species in particular, I no longer think that folk taxonomy defines the inferential character of folk biology as strongly as I indicated in a previous work, Cognitive Foundations of Natural History (Atran 1990). Mounting empirical evidence gathered with colleagues suggests that although universal taxonomic structures universally constrain and guide inferences about the biological world, different cultures (and to a lesser extent different individuals within a culture) show flexibility in which inferential pathways they choose (for details see Atran 1995, in press; Medin et al. 1996, 1997; López, Atran, Coley, Medin & Smith 1997; Coley, Medin, Proffitt, Lynch & Atran in press). Different tendencies apparently relate to different cultural criteria of relevance for understanding novelties and uncertainties in the biological world and for adapting to them. For example, among the Itzaj Maya, in contrast to the systematic use of taxonomies by scientists or modern (non-aboriginal) American folk, understanding ecological relationships seems to play a role on a par with morphological and underlying biological relationships in determining how taxa may be causally interrelated. For centuries, Itzaj have managed to use their folk-biological structures to organize and maintain a fairly stable, context-sensitive biological and ecological order. In a different way, scientists use taxonomies as heuristics for reaching a more global, ecologically context-free understanding of the biological relationships underlying the diversity of life. American folk unwittingly pursue a compromise of sorts: maintaining ecologically valid folk categories, but reasoning about them as if they were theory-based. Irrelevant inferences often result. 2.1. Taxonomy-Based Inference Across Cultures. To illustrate, consider some recent experimental findings. Our intention was to see whether Americans and Maya reason similarly or differently from their respective taxonomies when determining the likely distribution of unfamiliar, biologically related properties. Our strategy was as follows: First we asked individual informants to perform successive sorting tasks with name cards or colored picture cards (or specimens in Itzaj pilot studies) in order to elicit individual taxonomies. Then we used statistical measures to see whether the data justified aggregating the individual taxonomies for each informant group into a single "cultural model" that could confidently retrodict most (of the variance in) informant responses. Finally, we used the aggregated cultural taxonomies to perform various category-based inference tasks with the same or different informants. At each stage of the sorting and inference tasks we asked informants to justify their responses. In sum, our techniques enabled us to describe an aggregate model of taxonomy for each population in order to determine emergent patterns of cultural preference in matters of biological inference. 2.1.1. An Experimental Method for Generating Taxonomies. In the sorting tasks, each set of cards represented either all the generic species of a life form (Itzaj and Michigan mammals) or of an intermediate category (Itzaj palms), or a large range of the generic species of a life form (e.g., all local trees in the Evanston-Chicago area for people living in the area).
The aim was to obtain individual taxonomies that covered the range of relationships between intermediate folk taxa, that is, taxonomic relationships between the generic-species and life-form levels. This was motivated by the fact that the boundaries of intermediate taxa vary somewhat more across individuals and cultures than do ranked taxa, and our goal was to explore the differences as much as the similarities in taxonomy-based reasoning across cultures. Furthermore, the intermediate level of taxonomy is where evolutionary relationships are most visibly manifest and comprehensible (both in the history of science and among educated lay folk, see Atran 1983), and where ecological relationships are most manifest for Maya (e.g., in the effects of arboreal mammals' habits on the fruiting and reproduction of canopy trees). We thought these factors would increase the possibility of ascertaining whether significant differences between Americans and Maya relate to different goals for understanding biological relationships: one weighted by the influence of science in American culture, the other weighted by interests of subsistence and survival in the Maya rainforest. 2.1.1.1. Methods. What follows is a brief account of findings in regard to all mammals represented in the local environments of the Itzaj and Michigan groups, respectively.[11] For Itzaj we included bats, although Itzaj do not consider them mammals. For the students we included the emblematic wolverine, although it has practically disappeared from Michigan. We asked American informants to sort name cards of all local mammal generic species into successive piles according to the degree they "go together by nature." For Itzaj, name cards were Maya words in Latin letters, and informants were asked to successively sort cards according to the degree to which they "go together as companions" (uy-et'~ok) of the same "natural lineage" (u-ch'ib'al). When informants indicated no further desire to successively group cards, the first piles were restored and the informants were asked to subdivide the piles until they no longer wished to do so. The "taxonomic distance" between any two taxa (cards) was then calculated according to where in the sorting sequence they were first grouped together. Although a majority of Itzaj informants were functionally illiterate, they had no trouble manipulating name cards as mnemonic icons; no differences were observed between literate and illiterate Itzaj in their handling of the cards, and no statistically significant differences in their results. We chose name cards over pictures or drawings to minimize stimulus effects and maximize the role of categorical knowledge.
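The distance construction just described is mechanical enough to state precisely. The following sketch in Python is offered under stated assumptions: the informant sorts, taxa and function names are invented, and correlating distance matrices is only one crude index of agreement between taxonomies, not the exact statistical procedure used in the studies cited.

    from itertools import combinations
    from statistics import mean

    def distances(partitions):
        """partitions: one informant's sorts ordered finest to coarsest,
        each a list of piles (sets of taxon names). The distance between
        two taxa is the first level at which they share a pile."""
        taxa = sorted(set().union(*partitions[0]))
        return {(a, b): next((lvl for lvl, piles in enumerate(partitions)
                              if any(a in pile and b in pile for pile in piles)),
                             len(partitions))
                for a, b in combinations(taxa, 2)}

    def aggregate(dists):
        """Average several informants' distance matrices into one."""
        return {k: mean(d[k] for d in dists) for k in dists[0]}

    def agreement(d1, d2):
        """Pearson correlation between two distance matrices over their
        shared pairs, as a rough measure of taxonomic agreement."""
        keys = sorted(set(d1) & set(d2))
        x, y = [d1[k] for k in keys], [d2[k] for k in keys]
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # One hypothetical informant: a fine sort, then one coarser regrouping.
    sorts = [[{"fox", "coyote"}, {"bobcat"}, {"squirrel"}],
             [{"fox", "coyote", "bobcat"}, {"squirrel"}]]
    print(distances(sorts))
    # {('bobcat', 'coyote'): 1, ('bobcat', 'fox'): 1, ('bobcat', 'squirrel'): 2,
    #  ('coyote', 'fox'): 0, ('coyote', 'squirrel'): 2, ('fox', 'squirrel'): 2}

An aggregate matrix built this way, and compared against the distances implied by an evolutionary taxonomy, is the kind of object behind the results reported next.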
2.1.1.2. Results: Convergence and Divergence in Intermediate-Level Classifications. Results indicate that the individual mammal taxonomies of Itzaj and of students from rural Michigan are all more or less competent expressions of comparably robust cultural models of the biological world.[12] To compare the structure and content of the cultural models with one another, and with scientific models, we mathematically compared the topological relations in the tree structure of each group's aggregate taxonomy with those of a classic evolutionary taxonomy, that is, one based on a combination of morphological and phylogenetic considerations.[13] There was substantial shared agreement between the aggregated taxonomies of Itzaj (Figure 3) and Michigan students (Figure 4), between evolutionary taxonomy (Figure 5) and Itzaj taxonomy, and between evolutionary taxonomy and the American folk taxonomy. Agreement between the intermediate folk taxonomies and evolutionary taxonomy is maximized at around the level of the scientific family, both for Itzaj and Michigan subjects, indicating an intermediate-level focus in the folk taxonomies of both cultures. On the whole, taxa formed at this level are still imageable (e.g., the cat or dog families). A closer comparison of the folk groupings in the two cultures, however, suggests that there are at least some cognitive factors at work in folk-biological classification that are mitigated or ignored by science. For example, certain groupings, such as felines + canines, are common to both Itzaj and Michigan students, although felines and canines are phylogenetically further from one another than either family is from other carnivore families (e.g., mustelids, procyonids, etc.). These groupings of large predators indicate that size and ferocity, or remoteness from humans, are salient classificatory dimensions in both cultures (cf. Henley 1969, Rips et al. 1973). These are dimensions that a corresponding evolutionary classification of the local fauna does not highlight. An additional nonscientific dimension in Itzaj classification, which is not present in American classification, relates to ecology. For example, Itzaj form a group of arboreal animals, including monkeys as well as tree-dwelling procyonids (kinkajou, cacomistle, raccoon) and squirrels (rodents). The ecological nature of this group was independently confirmed as follows: We asked informants to tell us which plants are most important for the forest to live. Then we aggregated the answers into a cultural model, and for each plant in the aggregate list we asked which animals most interacted with it (without ever asking directly which animals interact with one another). The same group of arboreal animals emerged as a stable cluster in interactions with plants. Other factors in the divergence between folk and scientific taxonomies are related both to science's global perspective in classifying local biota and to its reliance on biologically "deep," theoretically weighted properties of internal anatomy and physiology. Thus, the opossum is the only marsupial in North and Central America. Both Itzaj and Midwesterners relate the opossum to skunks and porcupines because it shares with them readily perceptible features of morphology and behavior. From a scientific vantage, however, the opossum is taxonomically isolated from all the other locally represented mammals in a subclass of its own. One factor mitigating the ability of Itzaj or Midwesterners to appreciate the opossum as scientists do is the absence of other locally present marsupials to relate the opossum to.
As a result, both Michigan students and Itzaj are apparently unaware of the deeper biological significance of the opossum's lack of a placenta. 2.1.2. Taxonomy-Driven Inductions. Our inference studies were designed to explore further how the underlying reasons for these apparent similarities and differences in intermediate-level taxonomies might inform category-based inductions among Maya, lay Americans and scientists. We tested for three category-based induction phenomena: Taxonomic Similarity, Taxonomic Typicality and Taxonomic Diversity (cf. Osherson, Smith, Wilkie, López & Shafir 1990). 2.1.2.1. Taxonomic Similarity. Similarity involves judging whether inference from a given premise category to a conclusion category is stronger than inference from some other premise to the same conclusion, where the premise and conclusion categories are those in the aggregate taxonomic tree. Similarity predicts that the stronger inference should be the one where the premise is closest to the conclusion, with "closeness" measured as the number of nodes in the tree one has to pass through to reach the conclusion category from the premise category. So, suppose that sheep have some unfamiliar property (e.g., "ulnar arteries") or are susceptible to an unknown disease ("eta"). Suppose, as an alternative premise, that cows have a different property ("sesamoid bones") or are susceptible to a different disease (e.g., "ina"). Following any of the three taxonomies (Maya, American or evolutionary), one should conclude that it is more likely that goats have what sheep have than what cows have, because goats are taxonomically closer to sheep than they are to cows. If similarity is a built-in feature of folk taxonomy, then American and Maya inductions should converge and diverge where their taxonomies do. They should likewise resemble and depart from scientific inductions wherever their taxonomies resemble and depart from the scientific taxonomy. In fact, both Americans and Maya chose items like sheep/goat versus cow/goat. This confirms the convergence of the scientific taxonomy with reasoning among both Americans and Maya precisely where the structure of their respective taxonomies should lead us to expect convergence. Both also chose items like opossum/porcupine versus squirrel/porcupine, which confirms the expected convergence between Maya and American classifications, and also the expected divergence of both groups from scientific classification. Choice of items such as dog/fox for Americans but cat/fox for Maya confirms that Americans reason more in line with scientific classifications in such cases than do Maya. In fact, justifications show that Itzaj recognize numerous similarities between foxes and dogs (snout, paw, manner of copulation) but judge that foxes are closer to cats because of interrelated aspects of size and predatory habits. 2.1.2.2. Taxonomic Typicality. The metric for typicality, like the one for similarity, is given by the taxonomy itself, as the lowest average taxonomic distance. In other words, the typicality of an item (e.g., a generic species) is the average taxonomic distance of that item to all other items in the inclusive category (e.g., a life form). Items that are more typical provide greater coverage of the category than items that are less typical.
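On a pairwise distance matrix of the kind derived from the sorting tasks, similarity, typicality and coverage-based diversity (taken up in section 2.1.2.3 below) reduce to a few lines of arithmetic. The sketch below (Python) is illustrative only: the taxa and distances are hypothetical, and the coverage function follows the spirit of the similarity-coverage model of Osherson et al. (1990) rather than our exact procedure.

    def dist(d, a, b):
        """Symmetric lookup in a pairwise distance matrix."""
        return 0 if a == b else d[tuple(sorted((a, b)))]

    def typicality(d, item, category):
        """Average taxonomic distance from an item to all other members of
        the category; a lower average means a more typical item."""
        others = [t for t in category if t != item]
        return sum(dist(d, item, t) for t in others) / len(others)

    def coverage(d, premises, category):
        """Mean distance from each category member to its nearest premise;
        lower values mean the premises cover the category better."""
        return sum(min(dist(d, p, t) for p in premises)
                   for t in category) / len(category)

    # Hypothetical distances among four mammals.
    d = {("cow", "goat"): 2, ("goat", "sheep"): 1, ("cow", "sheep"): 2,
         ("cow", "mouse"): 3, ("goat", "mouse"): 3, ("mouse", "sheep"): 3}
    mammals = ["cow", "goat", "sheep", "mouse"]

    # Similarity: the premise nearer the conclusion supports the stronger
    # inference, so sheep -> goat beats cow -> goat.
    print(dist(d, "sheep", "goat") < dist(d, "cow", "goat"))   # True

    # Typicality: the average distance from goat to the other mammals.
    print(round(typicality(d, "goat", mammals), 2))            # 2.0

    # Diversity: a spread-out premise pair covers the category better
    # (lower score) than a pair of close relatives.
    print(coverage(d, ["goat", "mouse"], mammals) <
          coverage(d, ["goat", "sheep"], mammals))             # True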
For example, Itzaj choose the items jaguar/mammal or mountain lion/mammal over squirrel/mammal or raccoon/mammal, judging that all mammals are more likely to be susceptible to a disease that jaguars or mountain lions have than to a disease that squirrels or raccoons have. This is because Maya consider jaguars and mountain lions more typical of the mammals than squirrels and raccoons. In fact, jaguars and mountain lions are not merely typical for Itzaj because they are more directly related to other mammals than are squirrels and raccoons; they also more closely represent an ideal standard of the "true animal/mammal" (jach b'a'al~che') against which the appearance and behavior of all other animals may be judged. This is evident from Itzaj justifications as well as from direct ratings of which mammals the Itzaj consider to be the "truest." By contrast, American informants choose the items squirrel/mammal or raccoon/mammal over bobcat/mammal or lynx/mammal, presumably because they consider squirrels and raccoons more typical of mammals than bobcats and lynxes. Note that typicality in these cases cannot be attributed to frequency of occurrence or encounter. Our American subjects were all raised in rural Michigan, where encounters with squirrels, raccoons, bobcats and lynxes are nowadays about as likely as the corresponding Itzaj encounters with squirrels, raccoons, jaguars and mountain lions. Both the Americans and the Maya were also more or less familiar with all the animals in their respective tasks. In each case for which we have Itzaj typicality ratings, the "truest" and most taxonomically typical taxa are large, perceptually striking, culturally important and ecologically prominent. The dimensions of perceptual, ecological and cultural salience all appear necessary to a determination of typicality, but none alone appears to be sufficient. For example, jaguars are beautiful and big (but cows are bigger), their predatory home range (about 50 km²) determines the extent of a forest section (but why just this animal's home range?), and they are "lords" of the forest (to which even the spirits pay heed). In other words, typicality for the Itzaj appears to be an integral part of the human (culturally relevant) ecology. Thus, the Itzaj say that wherever the sound of the jaguar is not heard, there is no longer any "true" forest, nor any "true" Maya. Nothing of this sort appears to be the case with American judgments of biological typicality and typicality-based biological inference. Thus, the wolverine is emblematic in Michigan, but carries no preferential inductive load. 2.1.2.3. Taxonomic Diversity. Like taxonomically defined typicality, diversity is a measure of category coverage. But a pair of typical items provides less coverage than, say, a pair containing one item that is typical and another that is atypical. For example, given that horses and donkeys share some property, but that horses and gophers share some other property, our American subjects judge that all mammals are more likely to have the property that horses share with gophers than the property that horses share with donkeys. This is because the average taxonomic distance of donkeys to other mammals is about the same as that of horses, so that donkeys add little information that could not be inferred from horses alone. For example, the distance from horses and donkeys to cows is uniformly low, whereas the distance to mice is uniformly high.
Now, the distance from horses to cows is low, but so is the distance from gophers to mice. Thus, information about both horses and gophers is likely to be more directly informative about more mammals than information about only horses and donkeys. Whereas both Americans and Itzaj consistently show similarity and typicality in taxonomy-based reasoning, the Itzaj do not show diversity. However, Itzaj noncompliance with diversity-based reasoning apparently results neither from a failure to understand the principle of diversity nor from any problems of "computational load," such as those that seem to underlie young school children's inability to reason in accordance with diversity (López, Gelman, Gutheil & Smith 1992). As with the most evident divergences between American and Itzaj performance on similarity and typicality tasks, divergence on diversity apparently results from ecological concerns. The diversity principle corresponds to the fundamental principle of induction in scientific systematics: a property shared by two organisms (or taxa) is likely shared by all organisms falling under the smallest taxon containing the two (Warburton 1967). Thus, American folk seem to use their biological taxonomies much as scientists do when given unfamiliar information in order to infer what is likely in the face of uncertainty: informed that goats and mice share a hitherto unknown property, they are more likely to project that property to mammals than if informed that goats and sheep do. By contrast, Itzaj tend to use similarly structured taxonomies to search for causal ecological explanations of why unlikely events should occur: for example, bats may have passed on the property to goats and mice by biting them, but a property does not need an ecological agent to be shared by goats and sheep. In the absence of a theory - or at least the presumption of a theory - of causal unity underlying disparate species, there is no compelling reason to consider a property discovered in two distant species as biologically intrinsic or essential to both. It may make as much or more sense to consider the counterintuitive presence of a property in dissimilar species as the likely result of an extrinsic or ecologically "accidental" cause. Notice that in both the American and Itzaj cases similarly structured taxonomies provide distance metrics over which biological induction can take place. For the Americans, taxonomic distance generally indicates the extent to which underlying causes are more likely to predict shared biological properties than are surface relationships. For Itzaj, taxonomic distance offers one indication of the extent to which ecological agents are likely to be involved in predicting biological properties that do not conform to surface relationships. A priori, either stance might be correct. For example, diseases are clearly biologically related; however, the distribution of a hitherto unknown disease among a given animal population could well involve epidemiological factors that depend on both inherent biological susceptibility and ecological agency. Equally "appropriate" ecological strategies may be used to reason about unfamiliar features of anatomy, physiology and behavior (e.g., in regard to predators or grazers), and even reproduction and growth (e.g., possible animal hybridizations or plant graftings).[14] This does not mean that Itzaj do not understand a diversity principle.
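Diversity can be sketched in the same style. Continuing the toy tree and distance function above (again, illustrative assumptions rather than elicited data), one crude coverage score in the spirit of the Similarity-Coverage Model (Osherson et al. 1990) is the average, over category members, of each member's distance to its nearest premise item; lower scores mean better coverage.

    # Continues the PARENT tree and distance() function sketched earlier.
    def coverage(premises, members):
        """Average distance from each category member to the nearest
        premise item; lower means the premises cover the category better."""
        return sum(min(distance(p, m) for p in premises)
                   for m in members) / len(members)

    MAMMALS = ["sheep", "goat", "cow", "horse", "donkey", "gopher", "mouse"]

    # The diverse pair horse/gopher covers the mammals better than the
    # similar pair horse/donkey, so it licenses the stronger induction.
    assert coverage(["horse", "gopher"], MAMMALS) < \
           coverage(["horse", "donkey"], MAMMALS)

The Itzaj results, then, are not a failure to compute such distances; the justifications discussed next show the same metric being put to a different, ecological use.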
In their justifications, Itzaj clearly reject a context-free use of the diversity principle in favor of context-sensitive reasoning about likely causal connections. In fact, in a series of tasks designed to assess risk-diversification strategies (e.g., sampling productivity from one forest plot or several), Itzaj consistently showed an appreciation of the diversity principle in these other settings. This suggests that although diversity may be a universal reasoning heuristic, it is not a universal aspect of folk-biological taxonomy. More generally, what "counts" as a biological cause or property may differ somewhat for folk, like the Itzaj, who necessarily live in intimate awareness of their surroundings, and those, like American folk, whose awareness is less intimate and necessary. For Itzaj, awareness of biological causes and properties may directly relate to ecology, whereas for most American folk the ecological ramifications of biological causes and properties may remain obscure. Historically, the West's development of a worldwide scientific systematics explicitly involved disregard of ecological relationships, and of the colors, smells, sounds, tastes and textures that constitute the most intimate channels of Maya recognition and access to the surrounding living world. For example, the smell of animal excrement so crucial to Maya hunters, or the texture of bark so important to their recognition of trees in the dark forest understory, simply has no place in a generalized and decontextualized scientific classification. 2.1.2.4. Science's Marginal Role for American Folk. A good candidate for the cultural influence of theory in American folk biology is science. Yet the exposure of Michigan students to science education has little apparent effect on their folk taxonomy. From a scientific view, student taxonomies are no more accurate than those of Itzaj. Science's influence is at best marginal. For example, science may peripherally bear on the differences in the way Itzaj and Michigan students categorize bats. Itzaj deem bats to be birds (ch'iich'), not mammals (b'a'al~che'). Like Midwesterners, Itzaj acknowledge in interviews that there is a resemblance between bats and small rodents. But because Itzaj classify bats with birds, they consider the resemblance to be only superficial and not indicative of a taxonomic relationship. By contrast, Michigan students "know" from schooling that bats are mammals. But this knowledge can hardly be taken as evidence for the influence of scientific theory on folk taxonomy. Despite learning that bats are mammals, the students go on to relate bats to rats just as Itzaj might if they did not already "know" that bats are birds. Nevertheless, from an evolutionary standpoint bats are taxonomically no closer to rats than to cats. The students, it seems, pay scant attention to the deeper biological relationships science reveals. In other words, the primary influence of science education on folk-biological knowledge may be to fix category labels, which in turn may affect patterns of attention and induction. The influence of science education on folk induction may also reflect less actual knowledge of theory than willing belief that scientific theory supports folk taxonomy. For example, given that a skunk and opossum share a deep biological property, Michigan students are less likely to conclude that all mammals share the property than if it were shared by a skunk and a coyote.
From a scientific standpoint, the students employ the right reasoning strategy (diversity-based inference), but reach the wrong conclusion because of a faulty taxonomy (i.e., the belief that skunks are taxonomically further from coyotes than from opossums). Yet if told that opossums are phylogenetically more distant from skunks than coyotes are, the students readily revise their taxonomy to make the correct inference. Still, it would be misleading to claim that the students then use theory to revise their taxonomy, although a revision occurs in accordance with scientific theory. 2.1.3. A Failing Compromise. With their ranked taxonomic structures and essentialist understanding of species, it would seem that no great additional cognitive effort would be required for the Itzaj to recursively essentialize the higher ranks as well, and thereby avail themselves of the full inductive power ranked taxonomies provide. But contrary to earlier assumptions (Atran 1990), our studies show this is not the case. Itzaj, and probably other traditional folk, do not essentialize ranks: they do not establish causal laws at the intermediate or life-form levels, and do not presume that higher-order taxa share the kind of unseen causal unity that their constituent generic species do. There is, then, a sense to the Itzaj "failure" to turn their folk taxonomies into one of the most powerful inductive tools that humans may come to possess. To adopt this tool, Itzaj would have to suspend their primary concern with ecological and morpho-behavioral relationships in favor of deeper, hidden properties of greater inductive potential. But the cognitive cost would probably outweigh the benefit (Sperber & Wilson 1986). For this potential, which science strives to realize, is to a significant extent irrelevant, or only indirectly relevant, to local ecological concerns. Scientists use diversity-based reasoning to generate hypotheses about global distributions of biological properties so that theory-driven predictions can be tested against experience and the taxonomic order subsequently restructured when prediction fails. By contrast, American folk do not have the biological theories that scientists have to support diversity-based reasoning. If they did, American folk would not have the categories they do. 2.2. The General-Purpose Nature of Folk Taxonomy. These experimental results in two very different cultures - an industrial Western society and a small-scale tropical forest society - indicate that people across cultures organize their local flora and fauna in similarly structured taxonomies. Yet they may reason from their taxonomies in systematically different ways. These findings, however, do not uphold the customary distinction, in anthropology and in the history and philosophy of biology, between "general-purpose" scientific classifications that are designed to maximize inductive potential and "special-purpose" folk-biological classifications (Gilmour & Walters 1964, Bulmer 1970), which are driven chiefly by "functional" (Dupré 1981), "utilitarian" (Hunn 1982) or "social" (Ellen 1993) concerns. On the contrary, like scientific classifications, folk-biological taxonomies appear to be "general-purpose" systems that maximize inductive potential for indefinitely many inferences and ends. That potential, however, may be conceived differently by a small-scale society and a scientifically oriented community. For scientific systematics, the goal is to maximize inductive potential regardless of human interest.
The motivating idea is to understand nature as it is "in itself," independently of the human observer (as far as possible). For the Itzaj, and arguably for other small-scale societies, folk-biological taxonomy works to maximize inductive potential relative to human interests. Here, folk-biological taxonomy provides a well-structured but adaptable framework. It allows people to explore the causal relevance of the natural world to them - including its ecological relevance - in indefinitely many and hitherto unforeseen ways. Maximizing the human relevance of the local biological world - its categories and generalizable properties (including those yet undiscovered) - does not mean assigning predefined purposes or functional signatures to it. Instead, it implies providing a sound conceptual infrastructure for the widest range of human adaptation to surrounding environmental conditions, within the limits of culturally acceptable behavior and understanding. For scientific systematics, folk biology may represent a ladder to be discarded after it has been climbed, or at least set aside while scientists surf the cosmos. But those who lack traditional folk knowledge, or implicit appreciation of it, may be left in the crack between science and common sense. For an increasingly urbanized and formally educated people, who are often unwittingly ruinous of the environment, no amount of cosmically valid scientific reasoning skill may be able to compensate for the local loss of ecological awareness upon which human survival may ultimately depend. 3. Science and Common Sense in Systematic Biology. The scenario that I have explored so far comes to this: Some areas of culture in general, as well as particular scientific fields, are based in specific cognitive domains that are universal to human understanding of nature. Concern with elaborating this basis produces recurrent themes across cultures (e.g., totemism), and its evaluation constitutes much of the initial phases in the development of a science (e.g., natural history). The next sections take a closer look at later phases in the development of systematic biology, where knowledge of the world comes to transcend the bounds of sense without, however, completely losing sight of it. The experimental evidence reviewed in the previous sections suggests that people in small-scale, traditional societies do not spontaneously extend assumptions of an underlying essential nature to taxa at ranks higher than the generic species. Thus, to infer that a biological property found in a pair of organisms belonging to two very different looking species (e.g., a chicken and an eagle) likely belongs to all organisms in the lowest taxon containing the pair (e.g., bird) may require a reflective elaboration of causal principles that are not related to behavior, morphology, or ecological proclivity in any immediately obvious way. Only this would justify the assumption that all organisms belonging to a taxon at a given rank share equally some internal structure regardless of apparent differences between them. Such predictions lead to errors as well as discoveries. This sets into motion a "boot-strapping" reorganization of taxa and taxonomic structure, and of the inductions that the taxonomy supports. For example, upon discovery that bats bear and nurture their young more like mammals than birds, it is then reasonable to exclude bats from bird and include them with mammal.
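The bat example can be made concrete in the same toy style (an illustrative sketch of the revision, not a claim about how such knowledge is mentally encoded): the "boot-strapping" correction is a single reassignment of affiliation within an otherwise untouched parent-link structure.

    # A toy illustration of the revision just described: reclassifying
    # bat from bird to mammal. The tree is illustrative only.
    TAXONOMY = {
        "bat": "bird", "robin": "bird", "eagle": "bird",
        "whale": "mammal", "cow": "mammal",
        "bird": "animal", "mammal": "animal",
        "animal": None,
    }

    # Discovery: bats bear and nurture their young more like mammals
    # than birds, so one affiliation is revised.
    TAXONOMY["bat"] = "mammal"

    # No kind vanished or arose anew; the ranked structure is intact.
    assert set(TAXONOMY) == {"bat", "robin", "eagle", "whale", "cow",
                             "bird", "mammal", "animal"}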
Despite the "boot-strapping" revision of taxonomy implied here, notice how much did not change: neither the overall structure of folk taxonomy, nor - in a crucial sense - even the kinds involved. Bats, birds, whales, mammals and fish did not just vanish from common sense to arise anew in science. There was a redistribution of affiliations between antecedently perceived kinds. What had altered was the construal of the underlying natures of those kinds, with a redistribution of kinds and a reappraisal of properties pertinent to reference. Historically, taxonomy is conservative, but it can be revolutionized. Even venerable life forms, like tree, are no longer scientifically valid concepts because they have no genealogical unity (e.g., legumes are variously trees, vines, bushes, etc.). The same may true of many longstanding taxa. Phylogenetic theorists question the "reality" of zoological life forms, such as bird and reptile, and the whole taxonomic framework that made biology conceivable in the first place. Thus, if birds descended from dinosaurs, and if crocodiles but not turtles are also directly related to dinosaurs, then: crocodiles and birds form a group that excludes turtles; or crocodiles, birds and turtles form separate groups; or all form one group. In any event, the traditional separation of bird and reptile is no longer tenable. Still, even in the midst of their own radical restructuring of taxonomy, Linnaeus and Darwin would continue to rely on popular life-forms like tree and bird to collect and understand local species arrangements, as do botanists and zoologists today. As for ordinary people, and especially those who live intimately with nature, they can ignore such ecologically salient kinds only at their peril. That is why science cannot simply subvert common sense. 3.1. Aristotelian Essentials. The boot-strapping enterprise in Western science began with Aristotle, or at least with the naturalistic tradition in Ancient Greece he represented. His task was to unite the various foundational forms of the world - each with their own special underlying nature" (phusis in the implicit everyday sense) - into an overarching system of "Nature" (phusis in an explicitly novel metaphysical sense). In practice, this meant systematically deriving each generic species (atomon eidos) from the causal principles uniting it to other species of its life form (megiston genos). It also implied combining the various life forms by "analogy" (analogian) into an integrated conception of life. Theophrastus, Aristotle's disciple, conceived of botanical classification in a similar way. Aristotelian life forms are distinguished and related through possession of analogous organs of the same essential function (locomotion, digestion, reproduction, respiration). For example, bird wings, quadruped feet and fish fins are analogous organs of locomotion. The generic species of each life form are then differentiated by degrees of "more or less" with respect to essential organs. Thus, all birds have wings for moving about and beaks for obtaining nutriments. But, whereas the predatory eagle is partially diagnosed by long and narrow wings and a sharply hooked beak, the goose - owing to its different mode of life - is partially diagnosed by a lesser and broader wing span and flatter bill. A principled classification of biological taxa by "division and assembly" (diaresis and synagoge) ends when all taxa are defined, with each species completely diagnosed with respect to every essential organ (Atran 1985b). 
In the attempt to causally link up all taxa, and derive them from one another, Aristotle took the first step in decontextualizing nature from its ecological setting. For him, birds were not primarily creatures that live in trees and the air, but causal complexes of life's essential organs and functions from which generic species derive. Life forms become causal way stations in the essential processes that link the animal and plant kingdoms to generic species. As a result, all higher ranks are now essentialized on a par with generic species, and the principle of taxonomic diversity becomes the basis for causal inference in systematics: any biological property that can be presumed to be related to life's essential organs and functions, if shared by two generic species, can be expected to be shared in descending degrees by all organisms in the life form containing the two. This first sustained scientific research program failed because it was still primarily a local effort geared to explaining a familiar order of things. Aristotle knew of species not present in his own familiar environment, but he had no idea that there were orders of magnitude of difference between what was locally apparent and what existed worldwide. Given the (wrong) assumption that a phenomenal survey of naturally occurring kinds was practically complete, he hoped to find a true and consistent system of essential characters by trial and error. He did not foresee that the introduction of exotic forms would undermine his quest for a discovery of the essential structure of all possible kinds. But by inquiring into how the apparently diverse natures of species may be causally related to the nature of life, Aristotle established the theoretical program of natural history (as biology was called before evolutionary theory). 3.2. The Linnaean Hierarchy. As in any folk inventory, ancient Greeks and Renaissance herbalists contended with only 500 or 600 local species (Raven et al. 1971). Preferred taxa often correspond to scientific species (dog, coyote, lemon tree, orange tree). But frequently a scientific genus has only one locally occurring species (bear, redwood), which makes species and genus perceptually coextensive. This occurs regularly with the most phenomenally salient organisms, including mammals and trees (for example, in a comparative study, we found that 69% of tree genera in both the Chicago area - 40 of 58 - and the Itzaj area of the Peten rainforest - 158 of 229 - are monospecific; see Medin et al. in press). Europe's "Age of Exploration," which began during the Renaissance, presented the explorers with a dazzling array of new species. The emerging scientific paradigm required that these new forms be ordered and classified within a global framework that unaided common sense could no longer provide. This required a further decontextualizing of nature, which the newly developed arts of block printing and engraving allowed. Concerning what is widely regarded as the first "true-to-nature" herbal of the Renaissance (Brunfels 1530-1536), a keen historian of science notes: The plant was taken out of the water, and the roots were cleansed. What therefore we see depicted is a water lily without water - isn't this a bit paradoxical? All relations between the plant and its habitat have been broken and concealed (Jacobs 1980:162). Once organisms were isolated from local habitats through the sense-neutral tones of written discourse, a global system of biological comparisons and contrasts could develop.
This meant sacrificing local "virtues" of folk-biological knowledge, including cultural, ecological and sensory information. In the post-Renaissance period, decontextualization of preferred folk taxa eventually led to their "fissioning" into species (Cesalpino 1583) and genera (Tournefort 1694). During the initial stages of Europe's global commercial expansion, the number of species increased by an order of magnitude. Foreign species were habitually joined to the most similar European species, that is, to the generic type, in a "natural system." Enlightenment naturalists, like Jungius and Linnaeus, further separated natural history from its cognitive moorings in human ecology, banning from botany intuitively "natural" but scientifically "lubricious" life forms, such as tree and grass (Linnaeus 1751, sec. 209). A similar "fissioning" of intermediate folk groupings occurred when the number of encountered species increased by another order of magnitude, and a "natural method" for organizing plants and animals into families (Adanson 1763) and orders (Lamarck 1809) emerged as the basis of modern systematics. Looking to other environments to fill local gaps at the intermediate level, naturalists sought to discern a worldwide series that would cover all environments and again reduce the ever-increasing number of discovered species to a mnemonically manageable set - this time to a set of basic family plans. Higher-order vertebrate life forms were left to provide the initial framework for biological classes, which only phylogenetic theory would call into question. A distinct concept of phylum emerged once it was realized that there is less internal differentiation between all the vertebrate life forms taken as a whole than there is within most intermediate groupings of the phenomenally "residual" life form, insect (bugs, worms, etc.). This was due to Cuvier (1829), who first reduced vertebrates to a single "branch" (embranchement). Finally, climbing the modified ranks of folk biology to survey the diversity of life, Darwin was able to show how the whole ordering of species could be transformed into the tree of life - a single emerging Nature governed by the causal principles of natural selection. 3.3. Folk Biology's Enduring Embrace. From Linnaeus to the present day, biological systematics has used explicit principles and organizing criteria that traditional folk might consider secondary or might not consider at all (e.g., the geometrical composition of a plant's flower and fruit structure, or the numerical breakdown of an animal's blood chemistry). Nevertheless, as with Linnaeus, the modern systematist initially depends implicitly, and crucially, on a traditional folk appreciation. As Bartlett (1936:5) noted with specific reference to the Maya region of Peten (cf. Diamond 1966 for zoology): A botanist working in a new tropical area is... confronted with a multitude of species which are not only new to him, but which flower and fruit only at some other season than that of his visit, or perhaps so sporadically that he can hardly hope to find them fertile. Furthermore, just such plants are likely to be character plants of [ecological] associations.... [C]onfronted with such a situation, the botanist will find that his difficulties vanish as if by magic if he undertakes to learn the flora as the natives know it, using their plant names, their criteria for identification (which frequently neglect the fruiting parts entirely), and their terms for habitats and types of land.
As Linnaeus needed the life form tree and its common species to actually do his work, so did Darwin need the life form bird and its common species. From a strictly cosmic viewpoint, the title of his great work, On the Origin of Species, is ironic and misleading - much as if Copernicus had entitled his attack on the geocentric universe, On the Origin of Sunrise. Of course, in order to attain that cosmic understanding, Darwin could no more dispense with thinking about "common species" than Copernicus could avoid thinking about the sunrise (Wallace 1901:1-2). In fact, not just species, but all levels of universal folk taxonomy served as indispensable landmarks for Darwin's awareness of the evolving pathways of diversity: from the folk-specifics and varietals whose variation humans had learned to manipulate, to intermediate-level families, and life-form classes, such as bird, within which the godlier processes of natural selection might be discerned: [In the Galapagos Islands] There are twenty-six land birds; of these twenty-one or perhaps twenty-three are ranked as distinct species, and would commonly be assumed to have been here created; yet the close [family] affinity of most of these birds to American species is manifest in every character, in their habits, gestures, and tones of voice. So it is with other animals, and with a large proportion of plants.... Facts such as these, admit of no sort of explanation on the ordinary view of creation. (Darwin 1872/1883:353-354). Use of taxonomic hierarchies in systematics today reveals a similar point. By tabulating the ranges of extant and extinct genera, families, classes and so on, systematists can provide a usable compendium of changing diversity throughout the history of life. For example, by looking at just the numbers of families, it is possible to ascertain that insects form a more diverse group than tetrapods (i.e., terrestrial vertebrates, including amphibians, birds, mammals and reptiles). By calculating whether or not the taxonomic diversity in one group varies over time as a function of the taxonomic diversity in another group, evidence can be garnered for or against the evolutionary interdependence of the two groups. Recent comparisons of the relative numbers of families of insects and flowering plants reveal the surprising fact that insects were just as taxonomically diverse before the emergence of flowering plants as after. Consequently, the effects of plant evolution on the adaptive radiation of insects are probably less profound than previously thought (Labandeira & Sepkoski 1993). The heuristic value of (scientifically elaborated) folk-based strategies for cosmic inquiry is compelling, despite evolutionary theorists being well aware that no "true" distinctions exist between various taxonomic levels. Not only do taxonomic structure and species continue to agitate science - for better or worse - but so do the nonintentional and nonmechanical causal processes that people across the world assume to underlie the biological world. Vitalism is the folk belief that biological kinds - and their maintaining parts, properties and processes - are teleological, and hence not reducible to the contingent relations that govern inert matter. Its cultural expression varies (cf. Hatano & Inagaki 1994).
Within any given culture people may have varying interpretations of, and degrees of attachment to, this belief: some who are religiously inclined may think that a "spiritual" essence determines biological causality; others of a more scientific temperament might hold that systems of laws which suffice for physics and chemistry do not necessarily suffice for biology. Many, if not most, working biologists (including cognitive scientists) implicitly retain at least a minimal commitment to vitalism: they acknowledge that physico-chemical laws should suffice for biology, but suppose that such laws are not adequate in their current form, and must be enriched by further laws whose predicates are different from those of inert physics and chemistry. It is not evident how a complete elimination of teleological expressions (concepts defined functionally) from biological theory can be pursued without forsaking a powerful and fruitful conceptual scheme for physiology, morphology, disease and evolution. In cognitive science, a belief that biological systems, such as the mind/brain, are not wholly reducible to electronic circuitry, like computers, is a pervasive attitude that implicitly drives considerable polemic, but also much creative theorizing. Even if this sort of vitalism represents a lingering folk belief that science may ultimately seek to discard, it remains an important and perhaps indispensable cognitive heuristic for regulating scientific inquiry. 3.4. Are There Folk Theories of Natural Kinds? So far the line of argument has been that systematic biology and commonsense folk biology continue to share core-related concepts, such as the species, taxonomic ranking and teleological causality. Granted, in science these are used more as heuristics than as ontological concepts, but their use allows and fosters varied and pervasive interactions between science and common sense. Still, systematic biology and folk biology are arguably distinct domains, which are delimited by different criteria of relevance. This cognitive division of labor between science and common sense is not a view favored in current philosophy or psychology (see Dupré 1993 for an exception). More frequent is the view that in matters of biological systematics, science is continuous with folk biology; only, science involves a more adequate elaboration of implicit folk meanings and "theories." Deciding the issue is not so simple - in part because, as Bertrand Russell lamented: "One of the most difficult matters in all of controversy is to distinguish disputes about words from disputes about facts" (1958:114). Philosophers and psychologists have noted that no principled distinction between folk and scientific knowledge can be built on ideas of empirical refutation or confirmation, under-determination or going beyond appearance or the information given, or even toleration of internal contradictions and inconsistencies (Kuhn 1962, Feyerabend 1975, Keil & Silberstein 1996). Instead, I want to focus on three related differences between science and folk systems: integration, effectiveness and competition. Concerning integration, it does appear that across all cultures there is some attempt at causal coordination of a few central aspects of life: bodily functioning and maturational growth, inheritance and reproduction, disease and death.
But the actual extent of this integration, and the concrete causal mechanisms that effect it, vary widely in detail and coherence across cultures (and individuals, judging by informant justifications in the experimental tasks discussed in the last section). Although the core concept of a generic species as a teleological agent may be universal, knowledge of the actual causal chains that link up the life properties of a species can involve a host of vitalistic, mechanical and intentional causes whose mix is largely determined by social tradition and individual learning experience (e.g., on disease, see Keil 1994 and Au & Romo 1996 for Americans, and Berlin & Berlin 1996 for Maya). Moreover, few, if any, commonsense accounts of "life" seek to provide a causal account of the global relationships linking (e.g., generating) species and groups of species to and from one another, although there may be various recurrent causal clusters and family relationships. Aristotle was possibly the first person in the world to attempt to integrate an entire taxonomic system.[15] Concerning effectiveness, science's aim is ultimately cosmic in that it is geared to generating predictions about events that are equally accurate, correct or true for any observer. By contrast, basic commonsense knowledge, driven by the folk core, has a more terrestrial aim: namely, to provide an effective understanding of the environment that allows appropriate responses. From an evolutionary standpoint, the structure from which we infer an agent's environment must also be the one that actively determines the agent's behavioral strategies (congruent actions and responses): "if the resulting actions anticipate useful future consequences, the agent has an effective internal model; otherwise it has an ineffective one" that may lead it to die out (Holland 1995:34). Folk-biological taxonomies provide both the built-in constraints and the flexibility adequate for a wide range of culturally appropriate responses to various environments. By contrast, scientific taxonomies are of limited value in everyday life, and some of the knowledge they elicit (e.g., that tree, bird, sparrow and worm are not valid taxa) may be inappropriate to a wide range of a person's life circumstances. Concerning competition among theories, even in our own culture such competition only marginally affects the folk-biological core (Dupré 1981, Atran 1987b). A tendency towards cultural conservatism and convergence in folk biology may be a naturally selected aspect of the functioning of the folk-biology module. As in the case of language, the syntactic structure is geared to generate fairly rapid and comprehensive semantic agreement, which would likely have been crucial to group survival (Pinker & Bloom 1990).[16] Fundamental conflicts over the meaning or extension of tree, lion and deer would hardly have encouraged cooperative subsistence behavior. All scientific theories may be characterized, in principle, in relation to their competition with other theories (Popper 1972, Lakatos 1978, Hull 1988). An intended goal of this competition is to expand the database through better organizing principles. This is the minimum condition for the accumulation of knowledge that distinguishes science as a Western tradition from other cultural traditions. For example, it is only in Europe that a cumulative development of natural history occurred that could lead to anything like a science of biology.
Thus, the Chinese, Ottoman, Inca and Aztec empires spanned many local folk-biological systems. Unlike Europe, however, these empires never managed to unite the species of different folk-biological systems into a single classification scheme, much less into anything like a unified causal framework (Atran 1990). Finally, consider that a penchant for calling intuitive data-organizing principles "theories" may stem, in part, from a peculiar bias in analytic philosophy and cognitive psychology. This bias consists in using the emergence of scientific knowledge as the standard by which to evaluate the formation of ordinary knowledge about the everyday world. From an anthropological vantage, this is peculiar because it takes as a model of human thought a rather small, specialized and marginal subset of contemporary thought. It is rather like taking the peculiar knowledge system of another cultural tradition, such as Maya cosmography, and using this to model human thought in general. This bias to model human cognition on scientific thought is historically rooted in the tradition of Anglo-American empiricism, which maintains that science is continuous with common sense, both ontologically (Russell 1948) and methodologically (Quine 1969). It is supposedly a natural and more perfect extension of common sense that purges the latter of its egocentric and contextual biases: for, "it is the essence of a scientific account of the world to reduce to a minimum the egocentric bias in [an everyday] assertion" (Russell 1957:386). When faced with a choice between commonsense kinds and scientific kinds whose referents substantially overlap, people ought to pick the scientific kind; for, "we should not treat scientists' criteria as governing a word which has different application-conditions from the 'ordinary' word" (Putnam 1986:498; cf. Kripke 1972:315). The belief that folk taxonomies are approximations to scientific classifications confounds two appropriate empirical observations and one inappropriate metaphysical supposition. The observations are that: the terms for commonsense generic species and the species terms used in science are often the same; and scientific classification did initially stem from commonsense classification. The erroneous supposition is that both terms denote "natural kinds," and that people will refine their use of natural-kind terms as science improves because this is an inherent part of understanding what they "mean." This entails that there is no a priori mental ("syntactic") constraint on our use or understanding of biological kinds. There is only a semantic understanding that is determined a posteriori by scientific discoveries about the correct or true structure of the world. In fact, neither the terms for generic species nor the species terms used in science denote natural kinds. Consider: Mill (1843), who was one of Russell's mentors, introduced the notion of natural kind in the philosophy of science. Natural kinds were to be nature's own "limited varieties," and would correspond to the predicates of scientific laws in what was then thought to be a determinate Newtonian universe. Counted among the fundamental ontological kinds of this universe were biological species and the basic elements of inert substance (e.g., gold, lead).[17] In evolutionary theory, however, species are not natural kinds. 
"Speciation," that is the splitting over time of more or less reproductively isolated groups, has no fixed beginning and can only be judged to have occurred to some degree through hindsight. No hard and fast rule can distinguish a variety or genus from a species in time, although failure to interbreed is a good rule of thumb for distinguishing (some) groups of organisms living in close proximity. No laws of molecular or genetic biology consistently apply to all and only species. Nor is there evidence for a systematic deferral to science in matters of everyday biological kinds. This is because the relevance of biological kinds to folk in everyday life pertains to their role in making the everyday world comprehensible, not in making the cosmos at large transparent. When folk assimilate some rather superficial scientific refinements to gain a bit of new knowledge (e.g., whales and bats), these usually affect the antecedent folk system only at the margins. In sum, a "scientific" notion of the species as a natural kind is not the ultimate reference for the commonsense meaning of living kind terms. There is marked discontinuity between evolutionary and preevolutionary conceptions of species. Indeed, the correct scenario might be just the reverse. A notion of the species as a natural kind lingers in the philosophy of science and resolutely persists in psychology (Schwartz 1979, Rey 1983, Carey 1985, Gelman 1988, Keil 1995), which indicates that certain basic notions in science are as much hostage to the dictates of common sense as the other way around. So, to the questions - "what, if not natural kinds, are generic species?" and "what, if not a theory, are the principles of folk biology ?" - the answer may be simply "they are what they are." This is a good prospect for empirical research CONCLUSION The uniform structure of taxonomic knowledge, under diverse socio-cultural learning conditions, arguably results from domain-specific cognitive processes that are panhuman, although circumstances trigger and condition the stable structure acquired. No other cognitive domain is invariably partitioned into foundational kinds that are so patently clear and distinct. Neither does any other domain so systematically involve a further ranking of kinds into inductively sound taxonomies, which express natural relationships that support indefinitely many inferences. Although accounts of actual causal mechanisms and relations among taxa vary across cultures, abstract taxonomic structure is universal and actual taxonomies are often recognizably ancient and stable. This suggests that such taxonomies are products of an autonomous, natural classification scheme of the human mind, which does not depend directly on an elaborated formal or folk theory. Such taxonomies plausibly represent "modular" habits of the mind, naturally selected to capture recurrent habits of the world relevant to hominid survival in ancestral environments. Once emitted in a cultural environment, the ideas developed within this universal framework spread rapidly and enduringly through a population of minds without institutionalized instruction. They tend to be inordinately stable within a culture, and remain by and large structurally isomorphic across cultures. Within this universal framework people develop more variable and specific causal schema for knowing taxa and linking them together. This enables people to interpret and anticipate future events in their environments in locally relevant ways. 
To be sure, there are universal presumptions that species-like kinds have underlying causal natures, and this drives learning. As a result, people across the world teleologically relate observable morphology, internally directed growth and transgenerational inheritance to developing ideas about the causal constitution of generic species. But no culturally elaborated theory of life's integral properties need causally unite and differentiate all such kinds by systematic degrees. Thus, it is not the cultural elaboration of a theory of biological causality that originally distinguishes people's understanding of the species concept, taxonomy and teleology, as these apply to (nonhuman) animals and plants, from their understanding of the basic concepts and organization of inert substances, artifacts or persons. Rather, the spontaneous arrangement of living things into taxonomies of essential kinds constitutes a prior set of constraints on any and all possible theories about the causal relations between living kinds and their biological properties. This includes evolutionary theories, such as Darwin's, which ultimately counter this commonsense conception. From a scientific standpoint, folk-biological concepts such as the generic species are woefully inadequate for capturing the evolutionary relationships of species over vast dimensions of time and space - dimensions that human minds were not directly designed (naturally selected) to comprehend. All taxa are but individual segments of a genealogical tree (Ghiselin 1981), whose branchings may never be clear-cut. Only by laborious cultural strategies like those involved in science can minds accumulate the knowledge to transcend the bounds of their phenomenal world and grasp nature's subtleties. But this requires continued access to the intuitive categories that anchor speculation and allow more sophisticated knowledge to emerge, much as the universal intuition of solid bodies and contingent movement has anchored scientific speculation about mass, matter and motion. This does not mean that folk taxonomy is more or less preferable to the inferential understanding that links and perhaps ultimately dissolves taxa into biological theories. This "commonsense" biology may just have different conditions of relevance than scientific biology: the one providing enough built-in structural constraint and flexibility to allow individuals and cultures to maximize inductive potential relative to the widest possible range of everyday human interests in the biological world; and the other providing new and various ways of transcending those interests in order to infer the structure of nature in itself, or at least a nature where humans are only incidental. Because common sense operates unaware of its limits, whereas science evolves in different directions and at different rates to surpass those limits, the boundary between them is not apparent. A research task of "the anthropology of science" is to comprehend this division of cognitive labor between science and common sense: to find the bounds within which reality meets the eye, and to show us where visibility no longer holds the promise of truth. REFERENCES Adanson, M. (1763) Familles des plantes, 2 vols. Paris: Vincent. Anderson, J. (1990) The adaptive character of thought. Hillsdale NJ: Erlbaum. Atran, S. (1983) Covert fragmenta and the origins of the botanical family. Man 18:51-71. Atran, S. (1985a) The nature of folk-botanical life forms. American Anthropologist 87:298-315. Atran, S.
(1985b) Pretheoretical aspects of Aristotelian definition and classification of animals. Studies in History and Philosophy of Science 16:113-163. Atran, S. (1987a) Origins of the species and genus concepts. Journal of the History of Biology 20:195-279. Atran, S. (1987b) Constraints on the ordinary semantics of living kinds. Mind and Language 2:27-63. Atran, S. (1990) Cognitive foundations of natural history: Towards an anthropology of science. Cambridge, England: Cambridge University Press. Atran, S. (1993) Itza Maya tropical agro-forestry. Current Anthropology 34:633-700. Atran, S. (1994) Core domains versus scientific theories. In L. Hirschfeld & S. Gelman (Eds.), Mapping the mind: Domain-specificity in cognition and culture. NY: Cambridge University Press. Atran, S. (1995) Classifying nature across cultures. In D. Osherson & E. Smith (Eds.), Invitation to cognitive science, vol. 3: Thinking. Cambridge MA: MIT Press. Atran, S. (in press) Itzaj Maya folk-biological taxonomy. In D. Medin & S. Atran (Eds.), Folk biology. Cambridge MA: MIT Press. Atran, S., Estin, P., Coley, J. & Medin, D. (in press) Generic species and basic levels: Essence and appearance in folk biology. Journal of Ethnobiology. Atran, S. & Medin, D. (1997) Knowledge and action: Cultural models of nature and resource management in Mesoamerica. In M. Bazerman, D. Messick, A. Tenbrunsel & K. Wade-Benzoni (Eds.), Environment, ethics, and behavior. San Francisco: Jossey-Bass. Atran, S. & Sperber, D. (1991) Learning without teaching: Its place in culture. In L. Tolchinsky-Landsmann (Ed.), Culture, schooling and psychological development. Norwood NJ: Ablex. Au, T. & Romo, L. (1996) Building a coherent conception of HIV transmission. In D. Medin (Ed.), The psychology of learning and motivation, vol. 35. NY: Academic Press. Bartlett, H. (1936) A method of procedure for field work in tropical American phytogeography based on a botanical reconnaissance in parts of British Honduras and the Peten forest of Guatemala. Botany of the Maya Area, Miscellaneous Papers I. Washington DC: Carnegie Institution of Washington Publication 461. Bartlett, H. (1940) History of the generic concept in botany. Bulletin of the Torrey Botanical Club 47:319-362. Berlin, B. (1972) Speculations on the growth of ethnobotanical nomenclature. Language in Society 1:63-98. Berlin, B. (1992) Ethnobiological classification. Princeton: Princeton University Press. Berlin, B. (in press) One Maya Indian's view of the plant world. In D. Medin & S. Atran (Eds.), Folk biology. Cambridge MA: MIT Press. Berlin, B., Breedlove, D., & Raven, P. (1973) General principles of classification and nomenclature in folk biology. American Anthropologist 74:214-242. Berlin, B., Breedlove, D., & Raven, P. (1974) Principles of Tzeltal plant classification. NY: Academic Press. Berlin, E. & Berlin, B. (1996) Medical ethnobiology of the Highland Maya of Chiapas, Mexico. Princeton: Princeton University Press. Bock, W. (1973) Philosophical foundations of classical evolutionary taxonomy. Systematic Zoology 22:275-392. Boster, J. (1988) Natural sources of internal category structure. Memory & Cognition 16:258-270. Boster, J. (1991) The information economy model applied to biological similarity judgment. In L. Resnick, J. Levine & S. Teasley (Eds.), Perspectives on socially shared cognition. Washington, DC: American Psychological Association. Boster, J., Berlin, B., & O'Neill, J. (1986) Natural and human sources of cross-cultural agreement in ornithological classification. American Anthropologist 88:569-583.
Boyer, P. (1994) The naturalness of religious ideas. Berkeley: University of California Press. Brown, C. (1984) Language and living things: Uniformities in folk classification and naming. New Brunswick NJ: Rutgers University Press. Bulmer, R. (1970) Which came first, the chicken or the egg-head? In J. Pouillon & P. Maranda (Eds.), Échanges et communications: mélanges offerts à Claude Lévi-Strauss. The Hague: Mouton. Bunn, H. (1983) Evidence on the diet and subsistence patterns of Plio-Pleistocene hominids at Koobi Fora, Kenya, and at Olduvai Gorge, Tanzania. In J. Clutton-Brock & C. Grigson (Eds.), Animals and archaeology. London: British Archaeological Reports. Cain, A. (1956) The genus in evolutionary taxonomy. Systematic Zoology 5:97-109. Carey, S. (1985) Conceptual change in childhood. Cambridge MA: MIT Press. Carey, S. (1996) Cognitive domains as modes of thought. In D. Olson & N. Torrance (Eds.), Modes of thought. NY: Cambridge University Press. Cesalpino, A. (1583) De plantis libri XVI. Florence: Marescot. Coley, J., Lynch, E., Proffitt, J., Medin, D., & Atran, S. (in press) Inductive reasoning in folk-biological thought. In D. Medin & S. Atran (Eds.), Folk biology. Cambridge MA: MIT Press. Coley, J., Medin, D. & Atran, S. (in press) Does rank have its privilege? Inductive inferences in folkbiological taxonomies. Cognition. Cosmides, L. & Tooby, J. (1989) Evolutionary psychology and the generation of culture, part II. Ethology and Sociobiology 10:51-97. Cuvier, G. (1829) Le règne animal, 2nd ed., vol. 1. Paris: Déterville. Darwin, C. (1883) On the origin of species by means of natural selection, 6th ed. NY: Appleton. (Originally published 1872.) Davidson, D. (1984) On the very idea of a conceptual scheme. In Inquiries into truth and interpretation. Oxford: Clarendon Press. Dawkins, R. (1976) The selfish gene. Oxford: Oxford University Press. Dennett, D. (1995) Darwin's dangerous idea. NY: Simon & Schuster. Diamond, J. (1966) Zoological classification of a primitive people. Science 151:1102-1104. diSessa, A. (1988) Knowledge in pieces. In G. Forman & P. Pufall (Eds.), Constructivism in the computer age. Hillsdale, NJ: Erlbaum. diSessa, A. (1996) What do "just plain folks" know about physics? In D. Olson & N. Torrance (Eds.), The handbook of education and human development. Oxford: Blackwell. Diver, C. (1940) The problem of closely related species living in the same area. In J. Huxley (Ed.), The new systematics. Oxford: Clarendon Press. Donnellan, K. (1971) Necessity and criteria. In J. Rosenberg & C. Travis (Eds.), Readings in the philosophy of language. Englewood Cliffs NJ: Prentice-Hall. Dougherty, J. (1978) Salience and relativity in classification. American Ethnologist 5:66-80. Dougherty, J. (1979) Learning names for plants and plants for names. Anthropological Linguistics 21:298-315. Dupré, J. (1981) Natural kinds and biological taxa. The Philosophical Review 90:66-90. Dupré, J. (1993) The disorder of things. Cambridge MA: Harvard University Press. Durham, W. (1991) Coevolution: Genes, culture and human diversity. Stanford: Stanford University Press. Dwyer, P. (1976) An analysis of Rofaifo mammal taxonomy. American Ethnologist 3:425-445. Ellen, R. (1993) The cultural relations of classification. Cambridge: Cambridge University Press. Feyerabend, P. (1975) Against method. London: New Left Books. Fodor, J. (1983) Modularity of mind. Cambridge MA: MIT Press. Gelman, R.
(1990) First principles organize attention to and learning about relevant data: Number and the animate-inanimate distinction. Cognitive Science 14:79-106. Gelman, S. (1988) The development of induction within natural kind and artifact categories. Cognitive Psychology 20:65-95. Gelman, S., Coley, J. & Gottfried, G. (1994) Essentialist beliefs in children. In L. Hirschfeld & S. Gelman (Eds.), Mapping the mind. NY: Cambridge University Press. Gelman, S. & Wellman, H. (1991) Insides and essences. Cognition 38:214-244. Gilmour, J. & Walters, S. (1964) Philosophy and classification. In W. Turrill (Ed.), Vistas in botany, vol. 4: Recent researches in plant taxonomy. Oxford: Pergamon Press. Ghiselin, M. (1981) Categories, life, and thinking. Behavioral and Brain Sciences 4:269-313. Gigerenzer, G. (in press) The modularity of social intelligence. In A. Whiten & R. Byrne (Eds.), Machiavellian intelligence II. Cambridge: Cambridge University Press. Greene, E. (1983) Landmarks of botanical history, 2 vols. Stanford: Stanford University Press. Hall, D.G. & Waxman, S. (1993) Assumptions about word meaning: Individuation and basic-level kinds. Child Development 64:1550-1570. Hatano, G. & Inagaki, K. (1994) Young children's naive theory of biology. Cognition 50:171-188. Hatano, G. & Inagaki, K. (1996) Cognitive and cultural factors in the acquisition of intuitive biology. In D. Olson & N. Torrance (Eds.), The handbook of education and human development. Oxford: Blackwell. Henley, N. (1969) A psychological study of the semantics of animal terms. Journal of Verbal Learning and Verbal Behavior 8:176-184. Hickling, A. & Gelman, S. (1995) How does your garden grow? Evidence of an early conception of plants as biological kinds. Child Development 66:856-876. Hull, D. (1988) Science as a process. Chicago: University of Chicago Press. Hunn, E. (1976) Toward a perceptual model of folk biological classification. American Ethnologist 3:508-524. Hunn, E. (1982) The utilitarian factor in folk biological classification. American Anthropologist 84:830-847. Isaac, G. (1983) Aspects of human evolution. In D. Bendall (Ed.), Evolution from molecules to men. NY: Cambridge University Press. Jacobs, M. (1980) Revolutions in plant descriptions. In J. Arends, G. Boelema, C. de Groot & A. Leeuwenberg (Eds.), Liber gratulatorius in honorem H.C.D. De Wit. Wageningen: H. Veenman & Zonen. Keil, F. (1979) Semantic and conceptual development: An ontological perspective. Cambridge MA: Harvard University Press. Keil, F. (1994) The birth and nurturance of concepts by domains. In L. Hirschfeld & S. Gelman (Eds.), Mapping the mind. NY: Cambridge University Press. Keil, F. (1995) The growth of understandings of natural kinds. In D. Sperber, D. Premack & A. Premack (Eds.), Causal cognition. Oxford: Clarendon Press. Keil, F. & Silberstein, C. (1996) Schooling and the acquisition of theoretical knowledge. In D. Olson & N. Torrance (Eds.), The handbook of education and human development. Oxford: Blackwell. Kesby, J. (1979) The Rangi classification of animals and plants. In R. Ellen & D. Reason (Eds.), Classifications in their social context. NY: Academic Press. Kripke, S. (1972) Naming and necessity. In D. Davidson & G. Harman (Eds.), Semantics of natural language. Dordrecht: Reidel. Kuhn, T. (1962) The structure of scientific revolutions. Chicago: University of Chicago Press. Kummer, H. (1995) Causal knowledge in animals. In D. Sperber, D. Premack & A. Premack (Eds.), Causal cognition. Oxford: Clarendon Press. Kummer, H., Daston, L., Gigerenzer, G. & Silk, J.
(in press) The social intelligence hypothesis. In P. Weingart, P. Richerson, S. Mitchell & S. Maasen (Eds.), Human by nature. Hillsdale NJ: Erlbaum. Labandeira, C. & Sepkoski, J. (1993) Insect diversity in the fossil record. Science 261:310-315. Lakatos, I. (1978) The methodology of scientific research programs. Cambridge: Cambridge University Press. Lamarck, J. (1809) Philosophie zoologique. Paris: Dentu. Latour, B. (1987) Science in action. Cambridge MA: Harvard University Press. Lévi-Strauss, C. (1963) The bear and the barber. The Journal of the Royal Anthropological Institute 93:1-11. Lévi-Strauss, C. (1969) The elementary structures of kinship. Boston: Beacon Press. Linnaeus, C. (1751) Philosophia botanica. Stockholm: G. Kiesewetter. Locke, J. (1848/1689) An essay concerning human understanding. London: Tegg. López, A., Atran, S., Coley, J., Medin, D., & Smith, E. (1997) The tree of life: Universals of folk-biological taxonomies and inductions. Cognitive Psychology 32:251-295. López, A., Gelman, S., Gutheil, G. & Smith, E. (1992) The development of category-based induction. Child Development 63:1070-1090. Lumsden, C. & Wilson, E.O. (1981) Genes, mind and culture. Cambridge MA: Harvard University Press. Mandler, J., Bauer, P. & McDonough, L. (1991) Separating the sheep from the goats: Differentiating global categories. Cognitive Psychology 23:263-298. Mayr, E. (1969) Principles of systematic zoology. NY: McGraw-Hill. Medin, D., Lynch, E., Coley, J. & Atran, S. (1996) The basic level and privilege in relation to goals, theories and similarity. In R. Michalski & J. Wnek (Eds.), Proceedings of the third international workshop on multistrategy learning. Palo Alto: American Association for Artificial Intelligence. Medin, D., Lynch, E., Coley, J. & Atran, S. (1997) Categorization and reasoning among tree experts: Do all roads lead to Rome? Cognitive Psychology 32:49-96. Mill, J. (1843) A system of logic. London: Longmans, Green. Millikan, R. (in press) A common structure for concepts of individuals, stuffs, and real kinds: More mama, more milk, and more mouse. Behavioral and Brain Sciences 21. Osherson, D., Smith, E., Wilkie, O., López, A., & Shafir, E. (1990) Category-based induction. Psychological Review 97:185-200. Pinker, S. & Bloom, P. (1990) Natural language and natural selection. Behavioral and Brain Sciences 13:707-727. Popper, K. (1972) Objective knowledge. Oxford: Clarendon Press. Premack, D. (1995) Foreword to Part IV: Causal understanding in naïve biology. In D. Sperber, D. Premack & A. Premack (Eds.), Causal cognition: A multidisciplinary debate. Oxford: Clarendon Press. Premack, D. & Premack, A. (1994) Moral belief: Form versus content. In L. Hirschfeld & S. Gelman (Eds.), Mapping the mind. NY: Cambridge University Press. Putnam, H. (1986) Meaning holism. In L. Hahn & P. Schlipp (Eds.), The philosophy of W.V. Quine. La Salle IL: Open Court. Quine, W. (1969) Natural kinds. In Ontological relativity and other essays. NY: Columbia University Press. Raven, P., Berlin, B., & Breedlove, D. (1971) The origins of taxonomy. Science 174:1210- Rey, G. (1983) Concepts and stereotypes. Cognition 15:237-262. Rips, L., Shoben, E., & Smith, E. (1973) Semantic distance and the verification of semantic relations. Journal of Verbal Learning and Verbal Behavior 12:1-20. Romney, A.K., Weller, S., & Batchelder, W. (1986) Culture as consensus: A theory of culture and informant accuracy. American Anthropologist 88:313-338. Rosch, E. (1975) Universals and cultural specifics in categorization. In R.
Brislin, S. Bochner & W. Lonner (Eds.), Cross-cultural perspectives on learning. NY: Halstead.
Rosch, E., Mervis, C., Gray, W., Johnson, D. & Boyes-Braem, P. (1976) Basic objects in natural categories. Cognitive Psychology 8:382-439.
Russell, B. (1948) Human knowledge: Its scope and limits. NY: Simon & Schuster.
Russell, B. (1957) Mr. Strawson on referring. Mind 66:385-389.
Russell, B. (1958) The ABC of relativity. London: George Allen & Unwin.
Schwartz, S. (1978) Putnam on artifacts. Philosophical Review 87:566-574.
Schwartz, S. (1979) Natural kind terms. Cognition 7:301-315.
Simpson, G. (1961) Principles of animal taxonomy. NY: Columbia University Press.
Spelke, E. (1990) Principles of object perception. Cognitive Science 14:29-56.
Sperber, D. (1985) Anthropology and psychology. Man 20:73-89.
Sperber, D. (1994) The modularity of thought and the epidemiology of representations. In L. Hirschfeld & S. Gelman (Eds.), Mapping the mind. NY: Cambridge University Press.
Sperber, D. (1996) La contagion des idées. Paris: Editions Odile Jacob.
Sperber, D. & Wilson, D. (1986) Relevance. London: Blackwell.
Stevens, P. (1994) Berlin's "Ethnobiological Classification." Systematic Biology 43:293-295.
Stross, B. (1973) Acquisition of botanical terminology by Tzeltal children. In M. Edmonson (Ed.), Meaning in Mayan languages. The Hague: Mouton.
Tanaka, J. & Taylor, M. (1991) Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology 23:457-482.
Tooby, J. & Cosmides, L. (1992) The psychological foundations of culture. In J. Barkow, L. Cosmides & J. Tooby (Eds.), The adapted mind. NY: Oxford University Press.
Tournefort, J. (1694) Élémens de botanique. Paris: Imprimerie Royale.
Wallace, A. (1901) Darwinism, 2nd ed. London: Macmillan. (1st ed. 1889)
Warburton, F. (1967) The purposes of classification. Systematic Zoology 16:241-245.
Zubin, D. & Köpcke, K.-M. (1986) Gender and folk taxonomy. In C. Craig (Ed.), Noun classes and categorization. Amsterdam: John Benjamins.

Footnotes

[1] The studies reported here were funded by NSF (SBR 93-19798, 94-22587) and France's Ministry of Research and Education (Contrat CNRS 92-C-0758), with student support from the University of Michigan's "Culture and Cognition" Program. They were co-directed with Douglas Medin. Participants in this project on biological knowledge across cultures include Alejandro López (Psychology, Max Planck), John Coley and Elizabeth Lynch (Psychology, Northwestern U.), Ximena Lois (Linguistics, CREA-École Polytechnique), Valentina Vapnarsky (Anthropology, Université de Paris X), Edward Smith and Paul Estin (Psychology, U. Michigan), and Brian Smith (Biology, U. Texas, Arlington). I thank Medin, Dan Sperber, Giyoo Hatano, Susan Carey, Gerd Gigerenzer and the anonymous referees for comments; thanks also to Estin and López for Figures.

[2] Thus, comparing constellations in the cosmologies of Ancient China, Greece and the Aztec Empire shows little commonality. By contrast, herbals like the Ancient Chinese ERH YA, Theophrastus's Peri Puton Istorias, and the Aztec Badianus Codex share important features, such as the classification of generic species into tree and herb life forms (Atran 1990:276).
[3] By contrast, a partitioning of artifacts (including those of organic origin, such as foods) is neither mutually exclusive nor composed of inherent natures: some mugs may or may not be cups; an avocado may be a fruit or vegetable depending on how it is served; a given object may be a bar stool or waste bin depending on the social context or perceptual orientation of its user; and so on.

[4] It makes no difference whether these groups are named. English speakers ambiguously use "animal" to refer to at least three distinct classes of living things: nonhuman animals, animals including humans, and mammals (the prototypical animals). The term "beast" seems to pick out nonhuman animals in English, but is seldom used today. "Plant" is ambiguously used to refer to the plant kingdom, or to members of that kingdom that are not trees.

[5] Life forms vary across cultures. Ancient Hebrew or modern Rangi (Tanzania) include herpetofauna (reptiles and amphibians) with insects, worms and other "creeping crawlers" (Kesby 1979), whereas Itzaj Maya and (until recently) most Western cultures include herpetofauna with mammals as "quadrupeds." Itzaj place phenomenally isolated mammals like the bat with birds, just as Rofaifo (New Guinea) place phenomenally isolated birds like cassowaries with mammals (Dwyer 1976). Whatever the content of life-form taxa, the life-form level, or rank, universally partitions the living world into broadly equivalent divisions.

[6] In the logical structure of folk taxonomy, outliers may be considered monotypic life forms with only one generic species (for a formalism, see the appendix in Atran 1995).

[7] Botanists and ethnobotanists tend to see preferred folk-biological groups as akin to scientific genera (Bartlett 1940, Berlin 1972, Greene 1983). Plant genera especially are often groups most easily recognized morphologically without technical aids (Linnaeus 1751). Zoologists and ethnozoologists tend to view them as more like scientific species, where reproductive and geographical isolation are more readily identified in terms of behavior (Simpson 1961, Diamond 1966, Bulmer 1970).

[8] In a comparative study of Itzaj Maya and rural Michigan college students, we found that the great majority of mammal taxa in both cultures correspond to scientific species, and most also correspond to monospecific genera: 30 of 40 (75%) basic Michigan mammal terms denote biological species, of which 21 (70%, or 53% of the total) are monospecific genera; 36 of 42 (86%) basic Itzaj mammal terms denote biological species, of which 25 (69%, or 60% of the total) are monospecific genera (Atran 1995, López et al. 1997). Studies of trees in both the Peten rainforest and Chicago area reveal a similar pattern (Atran 1993; Medin et al. 1997).

[9] Moving vertically within each graph corresponds to changing the premise while holding the conclusion category constant. This allows us to test another domain-general model of category-based reasoning: the Similarity-Coverage Model (Osherson et al. 1990). According to this model, the closer the premise category is to the conclusion category, the stronger the induction should be. Our results show only weak evidence for this general reasoning heuristic, which fails to account for the various "jumps" in inductive strength that indicate absolute or relative preference (Atran et al. in press).
Note also that we conducted separate experiments to control for the effects of linguistic transparency; for example, whether relations between generic species and life forms were marked (e.g., catfish - fish) or unmarked (e.g., bass - fish) had no effect on results (Coley, Medin & Atran in press).

[10] The existence of universal, domain-specific cognitions is not tied exclusively, or even necessarily, to cross-cultural pervasiveness. The social subordination of women, for example, appears in all known cultures (i.e., it is a cultural "universal" in the sense of Lévi-Strauss 1969). It could even be argued that this universal has some biological grounding. There is no reason, however, to attribute the varied ways people process this pervasive social phenomenon to a universal cognitive mechanism. Conversely, the ability to understand and develop mathematics may be rooted in some fairly specific cognitive mechanisms, with which humans are innately endowed (Gelman 1990). But if so, many cultures do not require that people use this ability. Nor is it occasioned by every environment.

[11] Each group was tested in its native language (Itzaj and English), and included a minimum of 6 men and 6 women on each task. The choice of groups of 12 or more people is based on pilot studies that indicate this is sufficient to establish a cultural consensus (Atran 1994). No statistically significant differences between men and women were found on the tasks reported. The method of successive pile sorts and taxonomic comparison was pioneered by Boster and his colleagues (Boster, Berlin & O'Neill 1986; Boster 1991).

[12] For each subject, we have a square symmetric data matrix, with the number of rows and columns equal to the number of generic species sorted. Subjects' taxonomic distance matrices were correlated with each other, yielding a pairwise subject-by-subject correlation matrix representing the degree to which each subject's taxonomy agreed with each other subject's taxonomy. Principal component factor analyses were then performed on the intersubject correlation matrix for each group of informants to determine whether or not there was a "cultural consensus" in informant responses. A cultural consensus is plausible if the factor analysis results in a single-factor solution. If a single dimension underlies patterns of agreement within a domain, then consensus can be assumed for that domain, and the dimension can be thought of as reflecting the degree to which each subject shares in the consensual knowledge (Romney, Weller & Batchelder 1986). Consensus is indicated by a strong single-factor solution in which: (1) the first latent root (eigenvalue) is large compared to the rest, (2) all scores on the first factor are positive, and (3) the first factor accounts for most of the variance. To the extent that some individuals agree more often with the consensus than others, they are considered more "culturally competent" with respect to the domain in question. An estimate of individual knowledge levels, or competencies, is given by each subject's first-factor score. This represents the degree to which that subject's responses agree with the consensus. That is, the pattern of correlations among informants should be based entirely on the extent to which each subject knows the common (culturally relative) "truth." The mean of all first-factor scores provides an overall measure of consensus.
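To make the consensus procedure concrete, here is a minimal sketch in Python (assuming numpy; the data layout and all names are illustrative, not the authors' code). Each subject contributes one square symmetric distance matrix, and the three quantities returned correspond to the three criteria just listed:

    # Minimal sketch of the consensus analysis in footnote 12 (illustrative only).
    # Input: one square, symmetric taxonomic distance matrix per subject.
    import numpy as np

    def consensus_check(distances):
        # distances: array of shape (n_subjects, n_species, n_species)
        n_subjects, n_species, _ = distances.shape
        iu = np.triu_indices(n_species, k=1)
        flat = distances[:, iu[0], iu[1]]          # unique species pairs only
        agreement = np.corrcoef(flat)              # subject-by-subject correlations
        eigvals, eigvecs = np.linalg.eigh(agreement)
        eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # largest factor first
        loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])       # first-factor scores
        if loadings.mean() < 0:                    # eigenvector sign is arbitrary
            loadings = -loadings
        return {
            "eigenvalue_ratio": eigvals[0] / eigvals[1],      # criterion 1
            "all_positive": bool((loadings > 0).all()),       # criterion 2
            "variance_explained": eigvals[0] / eigvals.sum(), # criterion 3
            "competence_scores": loadings,
        }

On this reading, a dominant first factor with uniformly positive loadings licenses treating the mean of the first-factor scores as the overall consensus measure.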
[13] Different types of "scientific taxonomy" correlate differently with folk taxonomy, with cladistic taxonomies (based on strict phylogenetic branching) generally being the least correlated and phenetic taxonomies (based on relations among observable characters) being the most. Evolutionary taxonomies represent a compromise of sorts between cladistics and phenetics.

[14] Apparent lack of taxonomically based diversity is not limited to Itzaj reasoning about mammals (they show the same pattern when reasoning about birds and palms, Atran in press), nor is it limited to nonwestern populations. In another series of studies exploring the impact of different kinds of expertise on categorization and reasoning about trees (Medin et al. 1997), we have found that parks and forestry maintenance workers responded significantly below chance on diversity items (Coley, Medin, Proffitt, Lynch & Atran in press). As with the Itzaj, justifications focused on ecological factors (e.g., distribution, susceptibility to disease) and associated causal reasoning. Another American group, consisting of taxonomists, sorted and reasoned in accordance with scientific classification. These results confirm the scientific reasoning patterns that were only inferred from the scientific classification in the mammal studies. Like American students on the mammal task, the taxonomists also had overwhelmingly positive responses on the diversity task. Differences in education did not appear to be significantly correlated with diversity or lack of diversity in the American populations (note also that López et al. 1992 found diversity with American ten-year-olds).

[15] The situation is arguably similar for naive physics, not only between cultures, but within our own culture. DiSessa (1988) speaks of a "knowledge in pieces" involving concept clusters that reinforce and help to interpret one another in order to guide people's uninstructed expectations and explanations about many situations of potential relevance to them. Although there is appreciable diversity of expectations and explanations, there are strong tendencies towards the convergence of concept clusters across individuals (and presumably across cultures). These are fairly robust, even for people with formal or scientific education, in part because there is substantial overlap between scientific (Newtonian) and commonsense physics. The causal clusters that are formed, however, reflect local family relationships rather than global coverage: "The impetus theory is, at best, about tosses and similar phenomena. It does not explain how people think about objects on tables, or balance scales, or orbits" (diSessa 1996:714).

[16] There is also the cryptic notion of "tacit theory" that originally came from Chomskian linguistics. Generative linguists rightly seem to consider this more of a throwaway notion than do some philosophers. Using "tacit theory" to assimilate universal grammar and universal taxonomy would wrongly entail assimilating a core module to an input module, and perhaps also to any complex biological algorithm (instinct) or automatic organizing process.

[17] Aristotle first proposed that both living and inert kinds had essential natures. Locke (1848/1689) dubbed these unknowable kinds "real kinds," claiming that their underlying natures could never be wholly fathomed by the mind. Across cultures, it is not clear that inert substances comprise a cognitive domain that is conceived in terms of underlying essences or natures.
Nor is it obvious what the basic elements might be, since the Greek earth, air, fire and water are not universal. The conception of "natural kind," which supposedly spans all sorts of lawful natural phenomena, may turn out not to be a psychologically real predicate of ordinary thinking (i.e., a "natural kind" of cognitive science). It may be simply an epistemic notion peculiar to a growth stage in Western science and philosophy of science.

From checker at panix.com Wed Dec 28 23:26:51 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 28 Dec 2005 18:26:51 -0500 (EST)
Subject: [Paleopsych] BBS: Individual Differences in Reasoning
Message-ID:

Individual Differences in Reasoning
http://www.bbsonline.org/documents/a/00/00/04/77/bbs00000477-00/bbs.stanovich.html

[This is the 8th of 8 BBS target articles I found. You're welcome to back up the tree and look for others.]

Below is the unedited draft of:

Keith E. Stanovich & Richard F. West (2000) Individual Differences in Reasoning: Implications for the Rationality Debate? Behavioral and Brain Sciences 22 (5): XXX-XXX.

This is the unedited draft of a BBS target article that has been accepted for publication (Copyright 1999: Cambridge University Press U.K./U.S. -- publication date provisional) and is currently being circulated for Open Peer Commentary. This preprint is for inspection only, to help prospective commentators decide whether or not they wish to prepare a formal commentary. Please do not prepare a commentary unless you have received the hard copy, invitation, instructions and deadline information.

_________________________________________________________________

Individual Differences in Reasoning: Implications for the Rationality Debate?

Keith E. Stanovich
Department of Human Development and Applied Psychology
University of Toronto
252 Bloor Street West
Toronto, ON
Canada M5S 1V6
kstanovich at oise.utoronto.ca

Richard F. West
School of Psychology
MSC 7401
James Madison University
Harrisonburg, VA 22807 USA
westrf at jmu.edu

_________________________________________________________________

Keith E. Stanovich is Professor of Human Development and Applied Psychology at the University of Toronto. He is the author of over 125 scientific articles in the areas of literacy and reasoning, including Who Is Rational? Studies of Individual Differences in Reasoning (Erlbaum, 1999). He is a Fellow of APA and APS and has received the Sylvia Scribner Award from the American Educational Research Association for contributions to research.

Richard F. West is a Professor in the School of Psychology at James Madison University, where he has been named a Madison Scholar. He received his Ph.D. in Psychology from the University of Michigan. The author of over 50 publications, his main scientific interests are the study of rational thought, reasoning, decision making, the cognitive consequences of literacy, and cognitive processes of reading.

_________________________________________________________________

Abstract

Much research in the last two decades has demonstrated that human responses deviate from the performance deemed normative according to various models of decision making and rational judgment (e.g., the basic axioms of utility theory). This gap between the normative and the descriptive can be interpreted as indicating systematic irrationalities in human cognition. However, four alternative interpretations preserve the assumption that human behavior and cognition are largely rational.
These explanations posit that the gap is due to (1) performance errors, (2) computational limitations, (3) the wrong norm being applied by the experimenter, and (4) a different construal of the task by the subject. In the debates about the viability of these alternative explanations, attention has been focused too narrowly on the modal response. In a series of experiments involving most of the classic tasks in the heuristics and biases literature, we have examined the implications of individual differences in performance for each of the four explanations of the normative/descriptive gap. Performance errors are a minor factor in the gap; computational limitations underlie non-normative responding on several tasks, particularly those that involve some type of cognitive decontextualization. Unexpected patterns of covariance can suggest when the wrong norm is being applied to a task or when an alternative construal of the task is called for.

Keywords: rationality, normative models, descriptive models, heuristics, biases, reasoning, individual differences

______________________________________________________________________

Individual Differences in Reasoning: Implications for the Rationality Debate?

1. Introduction

A substantial research literature--one comprising literally hundreds of empirical studies conducted over nearly three decades--has firmly established that people's responses often deviate from the performance considered normative on many reasoning tasks. For example, people assess probabilities incorrectly, they display confirmation bias, they test hypotheses inefficiently, they violate the axioms of utility theory, they do not properly calibrate degrees of belief, they overproject their own opinions onto others, they allow prior knowledge to become implicated in deductive reasoning, and they display numerous other information processing biases (for summaries of the large literature, see Baron, 1994, 1998; Evans, 1989; Evans & Over, 1996; Kahneman, Slovic, & Tversky, 1982; Newstead & Evans, 1995; Nickerson, 1998; Osherson, 1995; Piattelli-Palmarini, 1994; Plous, 1993; Reyna, Lloyd, & Brainerd, in press; Shafir, 1994; Shafir & Tversky, 1995). Indeed, demonstrating that descriptive accounts of human behavior diverged from normative models was a main theme of the so-called heuristics and biases literature of the 1970s and early 1980s (see Arkes & Hammond, 1986; Kahneman, Slovic, & Tversky, 1982).

The interpretation of the gap between descriptive models and normative models in the human reasoning and decision making literature has been the subject of contentious debate for almost two decades now (a substantial portion of that debate appearing in this journal; for summaries, see Baron, 1994; Cohen, 1981, 1983; Evans & Over, 1996; Gigerenzer, 1996a; Kahneman, 1981; Kahneman & Tversky, 1983, 1996; Koehler, 1996; Stein, 1996). The debate has arisen because some investigators wished to interpret the gap between the descriptive and the normative as indicating that human cognition was characterized by systematic irrationalities. Due to the emphasis that these theorists place on reforming human cognition, they were labelled the Meliorists by Stanovich (1999).
Disputing this contention were numerous investigators (termed the Panglossians, see Stanovich, 1999) who argued that there were other reasons why reasoning might not accord with normative theory (see Cohen, 1981 and Stein, 1996 for extensive discussions of the various possibilities)--reasons that prevent the ascription of irrationality to subjects. First, instances of reasoning might depart from normative standards due to performance errors--temporary lapses of attention, memory deactivation, and other sporadic information processing mishaps. Second, there may be stable and inherent computational limitations that prevent the normative response (Cherniak, 1986; Goldman, 1978; Harman, 1995; Oaksford & Chater, 1993, 1995, 1998; Stich, 1990). Third, in interpreting performance, we might be applying the wrong normative model to the task (Koehler, 1996). Alternatively, we may be applying the correct normative model to the problem as set, but the subject might have construed the problem differently and be providing the normatively appropriate answer to a different problem (Adler, 1984, 1991; Berkeley & Humphreys, 1982; Broome, 1990; Hilton, 1995; Schwarz, 1996).

However, in referring to the various alternative explanations (other than systematic irrationality) for the normative/descriptive gap, Rips (1994) warns that "a determined skeptic can usually explain away any instance of what seems at first to be a logical mistake" (p. 393). In an earlier criticism of Henle's (1978) Panglossian position, Johnson-Laird (1983) made the same point: "There are no criteria independent of controversy by which to make a fair assessment of whether an error violates logic. It is not clear what would count as crucial evidence, since it is always possible to provide an alternative explanation for an error." (p. 26). The most humorous version of this argument was made by Kahneman (1981) in his dig at the Panglossians who seem to have only two categories of errors, "pardonable errors by subjects and unpardonable ones by psychologists" (p. 340). Referring to the four classes of alternative explanation discussed above--performance errors, computational limitations, alternative problem construal, and incorrect norm application--Kahneman notes that Panglossians have "a handy kit of defenses that may be used if [subjects are] accused of errors: temporary insanity, a difficult childhood, entrapment, or judicial mistakes--one of them will surely work, and will restore the presumption of rationality" (p. 340).

These comments by Rips (1994), Johnson-Laird (1983), and Kahneman (1981) highlight the need for principled constraints on the alternative explanations of normative/descriptive discrepancies. In this target article we describe a research logic aimed at inferring such constraints from patterns of individual differences that are revealed across a wide range of tasks in the heuristics and biases literature. We argue here--using selected examples of empirical results (Stanovich, 1999; Stanovich & West, 1998a, 1998b, 1998c, 1998d, 1999)--that these individual differences and their patterns of covariance have implications for explanations of why human behavior often departs from normative models^1.

2. Performance Errors

Panglossian theorists who argue that discrepancies between actual responses and those dictated by normative models are not indicative of human irrationality (e.g., Cohen, 1981) sometimes attribute the discrepancies to performance errors.
Borrowing the idea of a competence/performance distinction from linguists (see Stein, 1996, pp. 8-9), these theorists view performance errors as the failure to apply a rule, strategy, or algorithm that is part of a person's competence because of a momentary and fairly random lapse in ancillary processes necessary to execute the strategy (lack of attention, temporary memory deactivation, distraction, etc.). Stein (1996) explains the idea of a performance error by referring to a "mere mistake"--a more colloquial notion that involves "a momentary lapse, a divergence from some typical behavior. This is in contrast to attributing a divergence from norm to reasoning in accordance with principles that diverge from the normative principles of reasoning. Behavior due to irrationality connotes a systematic divergence from the norm" (p. 8). Similarly, in the heuristics and biases literature, the term bias is reserved for systematic deviations from normative reasoning and does not refer to transitory processing errors ("a bias is a source of error which is systematic rather than random," Evans, 1984, p. 462). Another way to think of the performance error explanation is to conceive of it within the true score/measurement error framework of classical test theory. Mean or modal performance might be viewed as centered on the normative response--the response all people are trying to approximate. However, scores will vary around this central tendency due to random performance factors (error variance). It should be noted that Cohen (1981) and Stein (1996) sometimes encompass computational limitations within their notion of a performance error. In the present target article, the two are distinguished even though both are identified with the algorithmic level of analysis (see Anderson, 1990; Marr, 1982; and the discussion below on levels of analysis in cognitive theory) because they have different implications for covariance relationships across tasks. Here, performance errors represent algorithmic-level problems that are transitory in nature. Nontransitory problems at the algorithmic level that would be expected to recur on a readministration of the task are termed computational limitations. This notion of a performance error as a momentary attention, memory, or processing lapse that causes responses to appear nonnormative even when competence is fully normative has implications for patterns of individual differences across reasoning tasks. For example, the strongest possible form of this view is that all discrepancies from normative responses are due to performance errors. This strong form of the hypothesis has the implication that there should be virtually no correlations among nonnormative processing biases across tasks. If each departure from normative responding represents a momentary processing lapse due to distraction, carelessness, or temporary confusion, then there is no reason to expect covariance among biases across tasks (or covariance among items within tasks, for that matter) because error variances should be uncorrelated. In contrast, positive manifold (uniformly positive bivariate associations in a correlation matrix) among disparate tasks in the heuristics and biases literature--and among items within tasks--would call into question the notion that all variability in responding can be attributable to performance errors. 
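This covariance prediction can be made concrete with a small simulation--a minimal sketch in Python (numpy), with all parameters invented for illustration rather than taken from any of the studies discussed here:

    # Illustrative simulation of the performance-error argument (invented parameters).
    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_items = 500, 20

    def bias_score(trait):
        # Count of non-normative responses on a 20-item task. Each subject's
        # per-item departure probability mixes a stable disposition (trait)
        # with task-specific noise; a zero trait vector is the pure lapse model.
        noise = rng.normal(0.0, 1.0, n_subjects)
        p = 1.0 / (1.0 + np.exp(-(trait + noise - 1.5)))
        return rng.binomial(n_items, p)

    no_trait = np.zeros(n_subjects)              # performance errors only
    trait = rng.normal(0.0, 1.0, n_subjects)     # stable individual differences

    for label, t in (("lapses only", no_trait), ("stable disposition", trait)):
        r = np.corrcoef(bias_score(t), bias_score(t))[0, 1]
        print(label, round(r, 2))
    # Typical output: r near .00 for lapses only, clearly positive otherwise.

When departures are pure lapses (the zero-trait case), cross-task correlations of bias scores hover near zero; when a stable disposition contributes, the same machinery produces exactly the positive manifold at issue.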
This covariance logic was essentially Rips and Conrad's (1983) argument when they examined individual differences in deductive reasoning: "Subjects' absolute scores on the propositional tests correlated with their performance on certain other reasoning tests....If the differences in propositional reasoning were merely due to interference from other performance factors, it would be difficult to explain why they correlate with these tests" (pp. 282-283).

In fact, a parallel argument has been made in economics where, as in reasoning, models of perfect market rationality are protected from refutation by positing the existence of local market mistakes of a transitory nature (temporary information deficiency, insufficient attention due to small stakes, distractions leading to missed arbitrage opportunities, etc.). Advocates of perfect market rationality in economics admit that people make errors but defend their model of idealized competence by claiming that the errors are essentially random. The following defense of the rationality assumption in economics is typical in the way it defines performance errors as unsystematic: "In mainstream economics, to say that people are rational is not to assume that they never make mistakes, as critics usually suppose. It is merely to say that they do not make systematic mistakes--i.e., that they do not keep making the same mistake over and over again" (The Economist, December 12, 1998, p. 80). Not surprisingly, others have attempted to refute the view that the only mistakes in economic behavior are unpredictable performance errors by pointing to the systematic nature of some of the mistakes: "The problem is not just that we make random computational mistakes; rather it is that our judgmental errors are often systematic" (Frank, 1990, p. 54). Likewise, Thaler (1992) argues that "a defense in the same spirit as Friedman's is to admit that of course people make mistakes, but the mistakes are not a problem in explaining aggregate behavior as long as they tend to cancel out. Unfortunately, this line of defense is also weak because many of the departures from rational choice that have been observed are systematic" (pp. 4-5). Thus, in parallel to our application of an individual differences methodology to the tasks in the heuristics and biases literature, Thaler argues that variance and covariance patterns can potentially falsify some applications of the performance error argument in the field of economics.

Thus, as in economics, we distinguish systematic from unsystematic deviations from normative models. The latter we label performance errors and view them as inoculating against attributions of irrationality. Just as random, unsystematic errors of economic behavior do not impeach the model of perfect market rationality, transitory and random errors in thinking on a heuristics and biases problem do not impeach the Panglossian assumption of ideal rational competence. Systematic and repeatable failures in algorithmic-level functioning likewise do not impeach intentional-level rationality, but they are classified as computational limitations in our taxonomy and are discussed in Section 3.
Systematic mistakes not due to algorithmic-level failure do call into question whether the intentional-level description of behavior is consistent with the Panglossian assumption of perfect rationality--provided the normative model being applied is not inappropriate (see Section 4) or that the subject has not arrived at a different, intellectually defensible interpretation of the task (see Section 5).

In several studies, we have found very little evidence for the strong version of the performance error view. With virtually all of the tasks from the heuristics and biases literature that we have examined, there is considerable internal consistency. Further, at least for certain classes of task, there are significant cross-task correlations. For example, in two different studies (Stanovich & West, 1998c) we found correlations in the range of .25 to .40 (considerably higher when corrected for attenuation) among the following measures:

1. Nondeontic versions of Wason's (1966) selection task: The subject is shown four cards lying on a table showing two letters and two numbers (A, D, 3, 7). They are told that each card has a number on one side and a letter on the other and that the experimenter has the following rule (of the if P, then Q type) in mind with respect to the four cards: "If there is an A on one side then there is a 3 on the other". The subject is then told that he/she must turn over whichever cards are necessary to determine whether the experimenter's rule is true or false. Only a small number of subjects make the correct selections of the A card (P) and 7 card (not-Q) and, as a result, the task has generated a substantial literature (Evans, Newstead, & Byrne, 1993; Johnson-Laird, 1999; Newstead & Byrne, 1995).

2. A syllogistic reasoning task in which logical validity conflicted with the believability of the conclusion (see Evans, Barston, & Pollard, 1983). An example item is: All mammals walk. Whales are mammals. Conclusion: Whales walk.

3. Statistical reasoning problems of the type studied by the Nisbett group (e.g., Fong, Krantz, & Nisbett, 1986) and inspired by the finding that human judgment is overly influenced by vivid but unrepresentative personal and case evidence and under-influenced by more representative and diagnostic, but pallid, statistical evidence. The quintessential problem involves choosing between contradictory car purchase recommendations--one from a large-sample survey of car buyers and the other the heartfelt and emotional testimony of a single friend.

4. A covariation detection task modeled on the work of Wasserman, Dorner, and Kao (1990). Subjects evaluated data derived from a 2 x 2 contingency matrix.

5. A hypothesis testing task modeled on Tschirgi (1980) in which the score on the task was the number of times subjects attempted to test a hypothesis in a manner that did not unconfound variables.

6. A measure of outcome bias modeled on the work of Baron and Hershey (1988). This bias is demonstrated when subjects rate a decision with a positive outcome as superior to a decision with a negative outcome even when the information available to the decision maker was the same in both cases.

7. A measure of if/only thinking bias (Epstein, Lipson, Holstein, & Huh, 1992; Miller, Turnbull, & McFarland, 1990). If/only bias refers to the tendency for people to have differential responses to outcomes based on the differences in counterfactual alternative outcomes that might have occurred. The bias is demonstrated when subjects rate a decision leading to a negative outcome as worse than a control condition when the former makes it easier to imagine a positive outcome occurring.

8. An argument evaluation task (Stanovich & West, 1997) that tapped reasoning skills of the type studied in the informal reasoning literature (Baron, 1995; Klaczynski, Gordon, & Fauth, 1997; Perkins, Farady, & Bushey, 1991). Importantly, it was designed so that to do well on it one had to adhere to a stricture not to implicate prior belief in the evaluation of the argument.

3. Computational Limitations

Patterns of individual differences have implications that extend beyond testing the view that discrepancies between descriptive models and normative models arise entirely from performance errors. For example, patterns of individual differences also have implications for prescriptive models of rationality. Prescriptive models specify how reasoning should proceed given the limitations of the human cognitive apparatus and the situational constraints (e.g., time pressure) under which the decision maker operates (Baron, 1985). Thus, normative models might not always be prescriptive for a given individual and situation. Judgments about the rationality of actions and beliefs must take into account the resource-limited nature of the human cognitive apparatus (Cherniak, 1986; Goldman, 1978; Harman, 1995; Oaksford & Chater, 1993, 1995, 1998; Stich, 1990). More colloquially, Stich (1990) has argued that "it seems simply perverse to judge that subjects are doing a bad job of reasoning because they are not using a strategy that requires a brain the size of a blimp" (p. 27).

Following Dennett (1987) and the taxonomy of Anderson (1990; see also, Marr, 1982; Newell, 1982), we distinguish the algorithmic/design level from the rational/intentional level of analysis in cognitive science (the first term in each pair is that preferred by Anderson, the second that preferred by Dennett). The latter provides a specification of the goals of the system's computations (what the system is attempting to compute and why). At this level, we are concerned with the goals of the system, beliefs relevant to those goals, and the choice of action that is rational given the system's goals and beliefs (Anderson, 1990; Bratman, Israel, & Pollack, 1991; Dennett, 1987; Newell, 1982, 1990; Pollock, 1995). However, even if all humans were optimally rational at the intentional level of analysis, there may still be computational limitations at the algorithmic level (e.g., Cherniak, 1986; Goldman, 1978; Oaksford & Chater, 1993, 1995). We would therefore still expect individual differences in actual performance (despite equal rational-level competence) due to differences at the algorithmic level.

Using such a framework, we view the magnitude of the correlation between performance on a reasoning task and cognitive capacity as an empirical clue about the importance of algorithmic limitations in creating discrepancies between descriptive and normative models. A strong correlation suggests important algorithmic-level limitations that might make the normative response not prescriptive for those of lower cognitive capacity (Panglossian theorists drawn to this alternative explanation of normative/descriptive gaps were termed Apologists by Stanovich, 1999).
In contrast, the absence of a correlation between the normative response and cognitive capacity suggests no computational limitation and thus no reason why the normative response should not be considered prescriptive (see Baron, 1985).

In our studies, we have operationalized cognitive capacity in terms of well-known cognitive ability (intelligence) and academic aptitude tasks^2 but have most often used the total score on the Scholastic Aptitude Test^3,4. All are known to load highly on psychometric g (Carpenter, Just, & Shell, 1990; Carroll, 1993; Matarazzo, 1972), and such measures have been linked to neurophysiological and information processing indicators of efficient cognitive computation (Caryl, 1994; Deary, 1995; Deary & Stough, 1996; Detterman, 1994; Fry & Hale, 1996; Hunt, 1987; Stankov & Dunn, 1993; Vernon, 1991, 1993). Furthermore, measures of general intelligence have been shown to be linked to virtually all of the candidate subprocesses of mentality that have been posited as determinants of cognitive capacity (Carroll, 1993). For example, working memory is the quintessential component of cognitive capacity (in theories of computability, computational power often depends on memory for the results of intermediate computations). Consistent with this interpretation, Bara, Bucciarelli, and Johnson-Laird (1995) have found that "as working memory improves--for whatever reason--it enables deductive reasoning to improve too" (p. 185). But it has been shown that, from a psychometric perspective, variation in working memory is almost entirely captured by measures of general intelligence (Kyllonen, 1996; Kyllonen & Christal, 1990). Measures of general cognitive ability such as those utilized in our research are direct marker variables for Spearman's (1904, 1927) positive manifold--that performance on all reasoning tasks tends to be correlated. Below, we will illustrate how we use this positive manifold to illuminate reasons for the normative/descriptive gap.

Table 1 indicates the magnitude of the correlation between one such measure--Scholastic Aptitude Test total scores--and the eight different reasoning tasks studied by Stanovich and West (1998c, Experiments 1 and 2) and mentioned in the previous section. In Experiment 1, syllogistic reasoning in the face of interfering content displayed the highest correlation (.470) and the other three correlations were roughly equal in magnitude (.347 to .394). All were statistically significant (p < .001). The remaining correlations in the table are the results from a replication and extension experiment. Three of the four tasks from the previous experiment were carried over (all but the selection task) and displayed correlations similar in magnitude to those obtained in the first experiment. The correlations involving the four new tasks introduced in Experiment 2 were also all statistically significant. The sign on the hypothesis testing, outcome bias, and if/only thinking tasks was negative because high scores on these tasks reflect susceptibility to non-normative cognitive biases. The correlations on the four new tasks were generally lower (range .172 to .239) than the correlations involving the other tasks (.371 to .410). The scores on all of the tasks in Experiment 2 were standardized and summed to yield a composite score. The composite's correlation with SAT scores was .547.
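Both operations just mentioned--forming the composite, and the correction for attenuation noted in Section 2--are standard psychometrics. A minimal sketch follows (Python; the data shapes and the reliability figures are invented for illustration). The correction is Spearman's classical formula, which divides the observed correlation by the square root of the product of the two measures' reliabilities:

    # Sketch of composite scoring and the classical correction for attenuation.
    # Reliabilities and data shapes are invented for illustration.
    import numpy as np

    def composite(task_scores):
        # task_scores: (n_subjects, n_tasks), with bias measures already
        # sign-flipped so that higher always means more normative.
        z = (task_scores - task_scores.mean(axis=0)) / task_scores.std(axis=0)
        return z.sum(axis=1)

    def disattenuate(r_observed, reliability_x, reliability_y):
        # Spearman's correction: estimated true-score correlation.
        return r_observed / np.sqrt(reliability_x * reliability_y)

    # e.g., an observed r of .35, task reliability .60, SAT reliability .90:
    print(round(disattenuate(0.35, 0.60, 0.90), 2))   # -> 0.48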
It thus appears that to a moderate extent, discrepancies between actual performance and normative models can be accounted for by variation in computational limitations at the algorithmic level--at least with respect to the tasks investigated in these particular experiments.

Table 1
Correlations Between the Reasoning Tasks and Scholastic Aptitude Test Total Scores in the Stanovich and West (1998c) Studies

Experiment 1
  Syllogisms                  .470**
  Selection task              .394**
  Statistical reasoning       .347**
  Argument evaluation task    .358**

Experiment 2
  Syllogisms                  .410**
  Statistical reasoning       .376**
  Argument evaluation task    .371**
  Covariation detection       .239**
  Hypothesis testing bias    -.223**
  Outcome bias               -.172**
  If/Only thinking           -.208**
  Composite score             .547**

** = p < .001, all two-tailed
Ns = 178 to 184 in Experiment 1 and 527 to 529 in Experiment 2

However, there are some tasks in the heuristics and biases literature which lack any association at all with cognitive ability. The so-called false consensus effect in the opinion prediction paradigm (Krueger & Clement, 1994; Krueger & Zeiger, 1993) displays complete dissociation with cognitive ability (Stanovich, 1999; Stanovich & West, 1998c). Likewise, the overconfidence effect in the knowledge calibration paradigm (e.g., Lichtenstein, Fischhoff, & Phillips, 1982) displays a negligible correlation with cognitive ability (Stanovich, 1999; Stanovich & West, 1998c).

Collectively, these results indicate that computational limitations seem far from absolute. That is, although computational limitations appear implicated to some extent in many of the tasks, the normative responses for all of them were computed by some university students who had modest cognitive abilities (e.g., below the mean in a university sample). Such results help to situate the relationship between prescriptive and normative models for the tasks in question because the boundaries of prescriptive recommendations for particular individuals might be explored by examining the distribution of the cognitive capacities of individuals who gave the normative response on a particular task. For most of these tasks, only a small number of the students with the very lowest cognitive ability in this sample would have prescriptive models for any of these tasks that deviated substantially from the normative model for computational reasons. Such findings might also be taken to suggest that other factors account for the remaining variation--a prediction that will be confirmed when work on styles of epistemic regulation is examined in section 7. Of course, the deviation between the normative and prescriptive model due to computational limitations will certainly be larger in unselected or nonuniversity populations. This point also serves to reinforce the caveat that the correlations observed in Table 1 were undoubtedly attenuated due to restriction of range in the sample. Nevertheless, if the normative/prescriptive gap is indeed modest, then there may well be true individual differences at the intentional level--that is, true individual differences in rational thought.

All of the camps in the dispute about human rationality recognize that positing computational limitations as an explanation for differences between normative and descriptive models is a legitimate strategy. Meliorists agree on the importance of assessing such limitations. Likewise, Panglossians will, when it is absolutely necessary, turn themselves into Apologists to rescue subjects from the charge of irrationality.
Thus, they too acknowledge the importance of assessing computational limitations. In the next section, however, we examine an alternative explanation of the normative/descriptive gap that is much more controversial--the notion that incorrect normative models have been applied to certain tasks in the heuristics and biases literature.

4. Applying the Wrong Normative Model

The possibility of incorrect norm application arises because psychologists must appeal to the normative models of other disciplines (statistics, logic, etc.) in order to interpret the responses on various tasks, and these models must be applied to a particular problem or situation. Matching a problem to a normative model is rarely an automatic or clear-cut procedure. The complexities involved in matching problems to norms make possible the argument that the gap between the descriptive and normative occurs because psychologists are applying the wrong normative model to the situation. It is a potent strategy for the Panglossian theorist to use against the advocate of Meliorism, and such claims have become quite common in critiques of the heuristics and biases literature:

"many critics have insisted that in fact it is Kahneman & Tversky, not their subjects, who have failed to grasp the logic of the problem" (Margolis, 1987, p. 158).

"if a 'fallacy' is involved, it is probably more attributable to the researchers than to the subjects" (Messer & Griggs, 1993, p. 195).

"When ordinary people reject the answers given by normative theories, they may do so out of ignorance and lack of expertise, or they may be signaling the fact that the normative theory is inadequate" (Lopes, 1981, p. 344).

"in the examples of alleged base rate fallacy considered by Kahneman and Tversky, they, and not their experimental subjects, commit the fallacies" (Levi, 1983, p. 502).

"what Wason and his successors judged to be the wrong response is in fact correct" (Wetherick, 1993, p. 107).

"Perhaps the only people who suffer any illusion in relation to cognitive illusions are cognitive psychologists" (Ayton & Hardman, 1997, p. 45).

These quotations reflect the numerous ongoing critiques of the heuristics and biases literature in which it is argued that the wrong normative standards have been applied to performance. For example, Lopes (1982) has argued that the literature on the inability of human subjects to generate random sequences (e.g., Wagenaar, 1972) has adopted a narrow concept of randomness that does not acknowledge broader conceptions that are debated in the philosophy and mathematics literature. Birnbaum (1983) has demonstrated that conceptualizing the well-known taxicab base-rate problem (see Bar-Hillel, 1980; Tversky & Kahneman, 1982) within a signal-detection framework can lead to different estimates than those assumed to be normatively correct under the less flexible Bayesian model that is usually applied. Gigerenzer (1991a, 1991b, 1993; Gigerenzer et al., 1991) has argued that the overconfidence effect in knowledge calibration experiments (Lichtenstein, Fischhoff, & Phillips, 1982) and the conjunction effect in probability judgment (Tversky & Kahneman, 1983) have been mistakenly classified as cognitive biases because of the application of an inappropriate normative model of probability assessment (i.e., requests for single-event subjective judgments when under some conceptions of probability such judgments are not subject to the rules of a probability calculus).
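For concreteness, the Bayesian analysis that Birnbaum's signal-detection treatment challenges runs as follows for the taxicab problem, using the familiar textbook numbers (85% of the city's cabs are Green, 15% Blue, and the witness identifies cab colors correctly 80% of the time):

    # The usual Bayesian treatment of the taxicab problem (textbook numbers).
    p_blue, p_green = 0.15, 0.85          # base rates of cab colors
    p_say_blue_given_blue = 0.80          # witness accuracy
    p_say_blue_given_green = 0.20         # witness error rate

    # P(Blue | witness says "Blue"), by Bayes' theorem:
    posterior = (p_say_blue_given_blue * p_blue) / (
        p_say_blue_given_blue * p_blue + p_say_blue_given_green * p_green)
    print(round(posterior, 2))            # 0.41: Green remains more likely

Subjects typically answer near .80, tracking the witness and neglecting the base rate; Birnbaum's point is that a signal-detection model of the witness can make estimates other than .41 defensible.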
Dawes (1989, 1990) and Hoch (1987) have argued that social psychologists have too hastily applied an overly simplified normative model in labeling performance in opinion prediction experiments as displaying a so-called false consensus (see also Krueger & Clement, 1994; Krueger & Zeiger, 1993).

4.1 From the Descriptive to the Normative in Reasoning and Decision Making

The cases just mentioned provide examples of how the existence of deviations between normative models and actual human reasoning has been called into question by casting doubt on the appropriateness of the normative models used to evaluate performance. Stein (1996, p. 239) terms this the "reject-the-norm" strategy. It is noteworthy that this strategy is used exclusively by the Panglossian camp in the rationality debate, although this connection is not a necessary one. Specifically, the reject-the-norm-application strategy is exclusively used to eliminate gaps between descriptive models of performance and normative models. When this type of critique is employed, the normative model that is suggested as a substitute for the one traditionally used in the heuristics and biases literature is one that coincides perfectly with the descriptive model of the subjects' performance--thus preserving a view of human rationality as ideal. It is rarely noted that the strategy could be used in just the opposite way--to create gaps between the normative and descriptive. Situations where the modal response coincides with the standard normative model could be critiqued, and alternative models could be suggested that would result in a new normative/descriptive gap. But this is never done. The Panglossian camp, often highly critical of empirical psychologists ("Kahneman and Tversky...and not their experimental subjects, commit the fallacies" Levi, 1983, p. 502), is never critical of psychologists who design reasoning tasks in instances where the modal subject gives the response the experimenters deem correct. Ironically, in these cases, according to the Panglossians, the same psychologists seem never to err in their task designs and interpretations.

The fact that the use of the reject-the-norm-application strategy is entirely contingent on the existence or nonexistence of a normative/descriptive gap suggests that the strategy is empirically, not conceptually, triggered (normative applications are never rejected for purely conceptual reasons when they coincide with the modal human response). What this means is that in an important sense the norms being endorsed by the Panglossian camp are conditioned (if not indexed entirely) by descriptive facts about human behavior. The debate itself is, reflexively, evidence that the descriptive models of actual behavior condition expert notions of the normative. That is, there would have been no debate (or at least much less of one) had people behaved in accord with the then-accepted norms.

Gigerenzer (1991b) is clear about his adherence to an empirically driven reject-the-norm-application strategy: "Since its origins in the mid-seventeenth century....When there was a striking discrepancy between the judgment of reasonable men and what probability theory dictated--as with the famous St. Petersburg paradox--then the mathematicians went back to the blackboard and changed the equations (Daston, 1980).
Those good old days have gone....If, in studies on social cognition, researchers find a discrepancy between human judgment and what probability theory seems to dictate, the blame is now put on the human mind, not the statistical model" (p. 109).

One way of framing the current debate between the Panglossians and Meliorists is to observe that the Panglossians wish for a return of the "good old days" where the normative was derived from the intuitions of the untutored layperson ("an appeal to people's intuitions is indispensable," Cohen, 1981, p. 318); whereas the Meliorists (with their greater emphasis on the culturally constructed nature of norms) view the mode of operation during the "good old days" as a contingent fact of history--the product of a period when few aspects of epistemic and pragmatic rationality had been codified and preserved for general diffusion through education.

Thus, the Panglossian reject-the-norm-application view can in essence be seen as a conscious application of the naturalistic fallacy (deriving ought from is). For example, Cohen (1981), like Gigerenzer, feels that the normative is indexed to the descriptive in the sense that a competence model of actual behavior can simply be interpreted as the normative model. Stein (1996) notes that proponents of this position believe that the normative can simply be "read off" from a model of competence because "whatever human reasoning competence turns out to be, the principles embodied in it are the normative principles of reasoning" (p. 231). Although both endorse this linking of the normative to the descriptive, Gigerenzer (1991b) and Cohen (1981) do so for somewhat different reasons. For Cohen (1981), it follows from his endorsement of narrow reflective equilibrium as the sine qua non of normative justification. Gigerenzer's (1991b) endorsement is related to his position in the "cognitive ecologist" camp (to use Piattelli-Palmarini's term; 1994, p. 183) with its emphasis on the ability of evolutionary mechanisms to achieve an optimal Brunswikian tuning of the organism to the local environment (Brase, Cosmides, & Tooby, 1998; Cosmides & Tooby, 1994, 1996; Oaksford & Chater, 1994, 1998; Pinker, 1997). That Gigerenzer and Cohen concur here--even though they have somewhat different positions on normative justification--simply shows how widespread is the acceptance of the principle that descriptive facts about human behavior condition our notions about the appropriateness of the normative models used to evaluate behavior.

In fact, stated in such broad form, this principle is not restricted to the Panglossian position. For example, in decision science, there is a long tradition of acknowledging descriptive influences when deciding which normative model to apply to a particular situation. Slovic (1995) refers to this "deep interplay between descriptive phenomena and normative principles" (p. 370). Larrick, Nisbett, and Morgan (1993) have reminded us that "there is also a tradition of justifying, and amending, normative models in response to empirical considerations" (p. 332). March (1988) refers to this tradition when he discusses how actual human behavior has conditioned models of efficient problem solving in artificial intelligence and in the area of organizational decision making.
The assumptions underlying the naturalistic project in epistemology (e.g., Kornblith, 1985, 1993) have the same implication--that findings about how humans form and alter beliefs should have a bearing on which normative theories are correctly applied when evaluating the adequacy of belief acquisition. This position is in fact quite widespread:

"if people's (or animals') judgments do not match those predicted by a normative model, this may say more about the need for revising the theory to more closely describe subjects' cognitive processes than it says about the adequacy of those processes" (Alloy & Tabachnik, 1984, p. 140).

"We must look to what people do in order to gather materials for epistemic reconstruction and self-improvement" (Kyburg, 1991, p. 139).

"When ordinary people reject the answers given by normative theories, they may do so out of ignorance and lack of expertise, or they may be signaling the fact that the normative theory is inadequate" (Lopes, 1981, p. 344).

Of course, in this discussion we have conjoined disparate views that are actually arrayed on a continuum. The reject-the-norm advocates represent the extreme form of this view--they simply want to read off the normative from the descriptive: "the argument under consideration here rejects the standard picture of rationality and takes the reasoning experiments as giving insight not just into human reasoning competence but also into the normative principles of reasoning" (Stein, 1996, p. 233). In contrast, other theorists (e.g., March, 1988) simply want to subtly fine-tune and adjust normative applications based on descriptive facts about reasoning performance.

One thing that all of the various camps in the rationality dispute have in common is that each conditions its beliefs about the appropriate norm to apply based on the central tendency of the responses to a problem. They all seem to see that single aspect of performance as the only descriptive fact that is relevant to conditioning their views about the appropriate normative model to apply. For example, advocates of the reject-the-norm-application strategy for dealing with normative/descriptive discrepancies view the mean, or modal, response as a direct pointer to the appropriate normative model. One goal of the present research program is to expand the scope of the descriptive information used to condition our views about appropriate norms.

4.2 Putting Descriptive Facts to Work: The Understanding/Acceptance Assumption

How should we interpret situations where the majority of individuals respond in ways that depart from the normative model applied to the problem by reasoning experts? Thagard (1982) calls the two different interpretations the populist strategy and the elitist strategy: "The populist strategy, favored by Cohen (1981), is to emphasize the reflective equilibrium of the average person....The elitist strategy, favored by Stich and Nisbett (1980), is to emphasize the reflective equilibrium of experts" (p. 39). Thus, Thagard (1982) identifies the populist strategy with the Panglossian position and the elitist strategy with the Meliorist position. But there are few controversial tasks in the heuristics and biases literature where all untutored laypersons disagree with the experts. There are always some who agree. Thus, the issue is not the untutored average person versus experts (as suggested by Thagard's formulation), but experts plus some laypersons versus other untutored individuals.
Might the cognitive characteristics of those departing from expert opinion have implications for which normative model we deem appropriate? Larrick, Nisbett, and Morgan (1993) made just such an argument in their analysis of what justified the cost-benefit reasoning of microeconomics: "Intelligent people would be more likely to use cost-benefit reasoning. Because intelligence is generally regarded as being the set of psychological properties that makes for effectiveness across environments...intelligent people should be more likely to use the most effective reasoning strategies than should less intelligent people" (p. 333). Larrick et al. (1993) are alluding to the fact that we may want to condition our inferences about appropriate norms based not only on what response the majority of people make but also on what response the most cognitively competent subjects make. Slovic and Tversky (1974) made essentially this argument years ago, although it was couched in very different terms in their paper and thus was hard to discern. Slovic and Tversky (1974) argued that descriptive facts about argument endorsement should condition the inductive inferences of experts regarding appropriate normative principles. In response to the argument that there is "no valid way to distinguish between outright rejection of the axiom and failure to understand it" (p. 372), Slovic and Tversky observed that "the deeper the understanding of the axiom, the greater the readiness to accept it" (pp. 372-373). Slovic and Tversky (1974) argued that this understanding/acceptance congruence suggested that the gap between the descriptive and normative was due to an initial failure to fully process and/or understand the task. We might call Slovic and Tversky's argument the understanding/acceptance assumption--that more reflective and engaged reasoners are more likely to affirm the appropriate normative model for a particular situation. From their understanding/acceptance principle, it follows that if greater understanding resulted in more acceptance of the axiom, then the initial gap between the normative and descriptive would be attributed to factors that prevented problem understanding (for example, lack of ability or reflectiveness on the part of the subject). Such a finding would increase confidence in the normative appropriateness of the axioms and/or in their application to a particular problem. In contrast, if better understanding failed to result in greater acceptance of the axiom, then its normative status for that particular problem might be considered to be undermined. Using their understanding/acceptance principle, Slovic and Tversky (1974) examined the Allais (1953) problem and found little support for the applicability of the independence axiom of utility theory (the axiom stating that if the outcome in some state of the world is the same across options, then that state of the world should be ignored; Baron, 1993; Savage, 1954). When presented with arguments to explicate both the Allais (1953) and Savage (1954) positions, subjects found the Allais argument against independence at least as compelling as the Savage argument for it and did not tend to change their task behavior in the normative direction (see MacCrimmon, 1968, and MacCrimmon & Larsson, 1979, for more mixed results on the independence axiom using related paradigms).
Although Slovic and Tversky (1974) failed to find support for this particular normative application, they presented a principle that may be of general usefulness in theoretical debates about why human performance deviates from normative models. The central idea behind Slovic and Tversky's (1974) development of the understanding/acceptance assumption is that increased understanding should drive performance in the direction of the truly normative principle for the particular situation--so that the direction that performance moves in response to increased understanding provides an empirical clue as to what is the proper normative model to be applied. One might conceive of two generic strategies for applying the understanding/acceptance principle: variation in understanding can be created experimentally, or it can be studied by examining naturally occurring individual differences. Slovic and Tversky employed the former strategy by providing subjects with explicated arguments supporting the Allais or Savage normative interpretation (see also Doherty, Schiavo, Tweney, & Mynatt, 1981; Stanovich & West, 1999). Other methods of manipulating understanding have provided consistent evidence in favor of the normative principle of descriptive invariance (see Kahneman & Tversky, 1984). For example, it has been found that being forced to take more time or to provide a rationale for selections increases adherence to descriptive invariance (Larrick, Smith, & Yates, 1992; Miller & Fagley, 1991; Sieck & Yates, 1997; Takemura, 1992, 1993, 1994). Moshman and Geil (1998) found that group discussion facilitated performance on Wason's selection task. As an alternative to manipulating understanding, the understanding/acceptance principle can be transformed into an individual differences prediction. For example, the principle might be interpreted as indicating that more reflective, engaged, and intelligent reasoners are more likely to respond in accord with normative principles. Thus, it might be expected that those individuals with cognitive/personality characteristics more conducive to deeper understanding would be more accepting of the appropriate normative principles for a particular problem. This was the emphasis of Larrick et al. (1993) when they argued that more intelligent people should be more likely to use cost-benefit principles. Similarly, need for cognition--a dispositional variable reflecting the tendency toward thoughtful analysis and reflective thinking--has been associated with aspects of epistemic and practical rationality (Cacioppo, Petty, Feinstein, & Jarvis, 1996; Kardash & Scholes, 1996; Klaczynski et al., 1997; Smith & Levin, 1996; Verplanken, 1993). This particular application of the understanding/acceptance principle derives from the assumption that a normative/descriptive gap that is disproportionately created by subjects with a superficial understanding of the problem provides no warrant for amending the application of standard normative models.

4.3 Tacit Acceptance of the Understanding/Acceptance Principle as a Mechanism for Adjudicating Disputes About the Appropriate Normative Models to Apply

It is important to point out that many theorists on all sides of the rationality debate have acknowledged the force of the understanding/acceptance argument (without always labeling the argument as such or citing Slovic & Tversky, 1974).
For example, Gigerenzer and Goldstein (1996) lament the fact that Apologist theorists who emphasize Simon's (1956, 1957, 1983) concept of bounded rationality seemingly accept the normative models applied by the heuristics and biases theorists by their assumption that, if computational limitations were removed, individuals' responses would indeed be closer to the behavior those models prescribe. Lopes and Oden (1991) also wish to deny this tacit assumption in the literature on computational limitations: "discrepancies between data and model are typically attributed to people's limited capacity to process information....There is, however, no support for the view that people would choose in accord with normative prescriptions if they were provided with increased capacity" (pp. 208-209). In stressing the importance of the lack of evidence for the notion that people would "choose in accord with normative prescriptions if they were provided with increased capacity" (p. 209), Lopes and Oden (1991) acknowledge the force of the individual differences version of the understanding/acceptance principle--because examining variation in cognitive ability is just that: looking at what subjects who have "increased capacity" actually do with that increased capacity. In fact, critics of the heuristics and biases literature have repeatedly drawn on an individual differences version of the understanding/acceptance principle to bolster their critiques. For example, Cohen (1982) critiques the older "bookbag and poker chip" literature on Bayesian conservatism (Phillips & Edwards, 1966; Slovic, Fischhoff, & Lichtenstein, 1977) by noting that "if so-called 'conservatism' resulted from some inherent inadequacy in people's information-processing systems one might expect that, when individual differences in information-processing are measured on independently attested scales, some of them would correlate with degrees of 'conservatism.' In fact, no such correlation was found by Alker and Hermann (1971). And this is just what one would expect if 'conservatism' is not a defect, but a rather deeply rooted virtue of the system" (pp. 259-260). This is precisely how Alker and Hermann (1971) themselves argued in their paper: "Phillips et al. (1966) have proposed that conservatism is the result of intellectual deficiencies. If this is the case, variables such as rationality, verbal intelligence, and integrative complexity should have related to deviation from optimality--more rational, intelligent, and complex individuals should have shown less conservatism" (p. 40). Wetherick (1971, 1995) has been a critic of the standard interpretation of the four-card selection task (Wason, 1966) for over 25 years. As a Panglossian theorist, he has been at pains to defend the modal response chosen by roughly 50% of the subjects (the P and Q cards). As did Cohen (1982) and Lopes and Oden (1991), Wetherick (1971) points to the lack of associations with individual differences to bolster his critique of the standard interpretation of the task: "in Wason's experimental situation subjects do not choose the not-Q card nor do they stand and give three cheers for the Queen, neither fact is interesting in the absence of a plausible theory predicting that they should....If it could be shown that subjects who choose not-Q are more intelligent or obtain better degrees than those who do not this would make the problem worth investigation, but I have seen no evidence that this is the case" (Wetherick, 1971, p. 213).
Funder (1987), like Cohen (1982) and Wetherick (1971), uses a finding about individual differences to argue that a particular attribution bias is not necessarily produced by a process operating suboptimally. Block and Funder (1986) analyzed the role effect observed by Ross, Amabile, and Steinmetz (1977): that people rated questioners more knowledgeable than contestants in a quiz game. Although the role effect is usually viewed as an attributional error--people allegedly failed to consider the individual's role when estimating the knowledge displayed--Block and Funder (1986) demonstrated that subjects most susceptible to this attributional "error" were more socially competent, better adjusted, and more intelligent. Funder (1987) argued that "manifestation of this 'error,' far from being a symptom of social maladjustment, actually seems associated with a degree of competence" (p. 82) and that the so-called error is thus probably produced by a judgmental process that is generally efficacious. In short, the argument is that the signs of the correlations with the individual difference variables point in the direction of the response that is produced by processes that are ordinarily useful. Thus, Funder (1987), Lopes and Oden (1991), Wetherick (1971), and Cohen (1982) all make recourse to patterns of individual differences (or the lack of such patterns) to pump our intuitions (Dennett, 1980) in the direction of undermining the standard interpretations of the tasks under consideration. In other cases, however, examining individual differences may actually reinforce confidence in the appropriateness of the normative models applied to problems in the heuristics and biases literature.

4.4 The Understanding/Acceptance Principle and Spearman's Positive Manifold

With these arguments in mind, it is thus interesting to note that the direction of all of the correlations displayed in Table 1 is consistent with the standard normative models used by psychologists working in the heuristics and biases tradition. If the correlations are interpreted in terms of the understanding/acceptance principle (a principle which, as seen in section 4.3, is endorsed in various forms by a host of Panglossian critics of the heuristics and biases literature), their directionality is embarrassing for those reject-the-norm-application theorists who argue that the norms are being incorrectly applied. Surely we would want to avoid the conclusion that individuals with more computational power are systematically computing the nonnormative response. Such an outcome would be an absolute first in a psychometric field that is one hundred years and thousands of studies old (Brody, 1997; Carroll, 1993, 1997; Lubinski & Humphreys, 1997; Neisser et al., 1996; Sternberg & Kaufman, 1998). It would mean that Spearman's (1904, 1927) positive manifold for cognitive tasks--virtually unchallenged for one hundred years--had finally broken down. Obviously, parsimony dictates that positive manifold remains a fact of life for cognitive tasks and that the response originally thought to be normative actually is. In fact, it is probably helpful to articulate the understanding/acceptance principle somewhat more formally in terms of positive manifold--the fact that different measures of cognitive ability almost always correlate with each other (see Carroll, 1993, 1997).
The individual differences version of the understanding/acceptance principle puts positive manifold to use in areas of cognitive psychology where the nature of the appropriate normative model to apply is in dispute. The point is that scoring a vocabulary item on a cognitive ability test and scoring a probabilistic reasoning response on a task from the heuristics and biases literature are not the same. The correct response in the former task has a canonical interpretation agreed upon by all investigators, whereas the normative appropriateness of responses on tasks from the latter domain has been the subject of extremely contentious dispute (Cohen, 1981, 1982, 1986; Cosmides & Tooby, 1996; Einhorn & Hogarth, 1981; Gigerenzer, 1991a, 1993, 1996a; Kahneman & Tversky, 1996; Koehler, 1996; Stein, 1996). Positive manifold between the two classes of task would only be expected if the normative model being used for directional scoring of the tasks in the latter domain is correct^5. Likewise, given that positive manifold is the norm among cognitive tasks, a negative correlation (or, to a lesser extent, the lack of a correlation) between a probabilistic reasoning task and more standard cognitive ability measures might be taken as a signal that the wrong normative model is being applied to the former task or that there are alternative models that are equally appropriate. The latter point is relevant because the pattern of results in our studies has not always mirrored the positive manifold displayed in Table 1. We have previously mentioned the false-consensus effect and overconfidence effect as such examples, and further instances are discussed in the next section.

4.5 Noncausal Base Rates

The statistical reasoning problems utilized in the experiments discussed so far (those derived from Fong et al., 1986) involved causal aggregate information, analogous to the causal base rates discussed by Ajzen (1977) and Bar-Hillel (1980, 1990)--that is, base rates that had a causal relationship to the criterion behavior. Noncausal base-rate problems--those involving base rates with no obvious causal relationship to the criterion behavior--have had a much more controversial history in the research literature. They have been the subject of over a decade's worth of contentious dispute (Bar-Hillel, 1990; Birnbaum, 1983; Cohen, 1979, 1982, 1986; Cosmides & Tooby, 1996; Gigerenzer, 1991b, 1993, 1996a; Gigerenzer & Hoffrage, 1995; Kahneman & Tversky, 1996; Koehler, 1996; Kyburg, 1983; Levi, 1983; Macchi, 1995)--important components of which have been articulated in this journal (e.g., Cohen, 1981, 1983; Koehler, 1996; Krantz, 1981; Kyburg, 1983; Levi, 1983). In several experiments, we have examined some of the noncausal base-rate problems that are notorious for provoking philosophical dispute. One was an AIDS testing problem modeled on Casscells, Schoenberger, and Grayboys (1978): "Imagine that AIDS occurs in one in every 1000 people. Imagine also there is a test to diagnose the disease that always gives a positive result when a person has AIDS. Finally, imagine that the test has a false positive rate of 5 percent. This means that the test wrongly indicates that AIDS is present in 5 percent of the cases where the person does not have AIDS. Imagine that we choose a person randomly, administer the test, and that it yields a positive result (indicates that the person has AIDS).
What is the probability that the individual actually has AIDS, assuming that we know nothing else about the individual's personal or medical history?" The Bayesian posterior probability for this problem is slightly less than .02. In several analyses and replications (see Stanovich, 1999; Stanovich & West, 1998c) in which we have classified responses of less than 10% as Bayesian, responses of over 90% as indicating strong reliance on indicant information, and responses between 10% and 90% as intermediate, we have found that subjects giving the indicant response were higher in cognitive ability than those giving the Bayesian response^6. Additionally, when tested on causal base-rate problems (e.g., Fong et al., 1986), the greatest base-rate usage was displayed by the group highly reliant on the indicant information in the AIDS problem. The subjects giving the Bayesian answer on the AIDS problem were least reliant on the aggregate information in the causal statistical reasoning problems. A similar violation of the expectation of positive manifold was observed on the notorious cab problem (see Bar-Hillel, 1980; Lyon & Slovic, 1976; Tversky & Kahneman, 1982)--also the subject of almost two decades' worth of dispute: "A cab was involved in a hit-and-run accident at night. Two cab companies, the Green and the Blue, operate in the city in which the accident occurred. You are given the following facts: 85 percent of the cabs in the city are Green and 15 percent are Blue. A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each of the two colors 80 percent of the time. What is the probability that the cab involved in the accident was Blue?" Bayes' rule yields .41 as the posterior probability of the cab being blue. Thus, responses over 70% were classified as reliant on indicant information, responses between 30% and 70% as Bayesian, and responses less than 30% as reliant on base-rate information. Again, it was found that subjects giving the indicant response were higher in cognitive ability and need for cognition than those giving the Bayesian or base-rate response (Stanovich & West, 1998c, 1999). Finally, both the cabs problem and the AIDS problem were subjected to the second of Slovic and Tversky's (1974) methods of operationalizing the understanding/acceptance principle--presenting the subjects with arguments explicating the traditional normative interpretation (Stanovich & West, 1999). On neither problem was there a strong tendency for responses to move in the Bayesian direction subsequent to explication. The results from both of these problems indicate that the noncausal base-rate problems display patterns of individual differences quite unlike those shown on the causal aggregate problems. On the latter, subjects giving the statistical response (choosing the aggregate rather than the case or indicant information) scored consistently higher on measures of cognitive ability. This pattern did not hold for the AIDS and cab problems, where the significant differences were in the opposite direction--subjects strongly reliant on the indicant information scored higher on measures of cognitive ability and were more likely to give the Bayesian response on causal base-rate problems.
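Because the classification bands above hinge on the two posterior values, a minimal sketch (ours, not from the article; Python, with variable names of our choosing) can verify both figures with Bayes' rule:

    # Bayes' rule: P(H|D) = P(D|H)P(H) / [P(D|H)P(H) + P(D|~H)P(~H)]
    def posterior(prior, hit_rate, false_alarm_rate):
        """Posterior probability of the hypothesis given one positive datum."""
        numerator = hit_rate * prior
        denominator = numerator + false_alarm_rate * (1.0 - prior)
        return numerator / denominator

    # AIDS problem: base rate 1/1000, perfectly sensitive test, 5% false positives.
    print(posterior(0.001, 1.0, 0.05))   # ~0.0196 -- slightly less than .02

    # Cab problem: 15 percent of cabs are Blue; witness is correct 80% of the time.
    print(posterior(0.15, 0.80, 0.20))   # ~0.41 -- matching the figure in the text

Note how the low base rate in the AIDS problem drags the posterior far below the indicant value of 95%, whereas the cab problem's posterior lands between the base rate (15%) and the indicant value (80%).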
We examined the processing of noncausal base rates in another task with very different task requirements (see Stanovich, 1999; Stanovich & West, 1998d)--a selection task in which individuals were not forced to compute a Bayesian posterior, but instead simply had to indicate whether or not they thought the base rate was relevant to their decision. The task was taken from the work of Doherty and Mynatt (1990). Subjects were given the following instructions: "Imagine you are a doctor. A patient comes to you with a red rash on his fingers. What information would you want in order to diagnose whether the patient has the disease Digirosa? Below are four pieces of information that may or may not be relevant to the diagnosis. Please indicate all of the pieces of information that are necessary to make the diagnosis, but only those pieces of information that are necessary to do so." Subjects then chose from the alternatives listed in the order: % of people without Digirosa who have a red rash, % of people with Digirosa, % of people without Digirosa, and % of people with Digirosa who have a red rash. These alternatives represented the choices of P(D/~H), P(H), P(~H), and P(D/H), respectively. The normatively correct choice of P(H), P(D/H), and P(D/~H) was made by 13.4% of our sample. (P(~H) is redundant because it is simply 1 minus the base rate, P(H).) The most popular choice (made by 35.5% of the sample) was the two components of the likelihood ratio, P(D/H) and P(D/~H); 21.9% of the sample chose P(D/H) only; and 22.7% chose the base rate, P(H), and the numerator of the likelihood ratio, P(D/H)--ignoring the denominator of the likelihood ratio, P(D/~H). Collapsed across these combinations, almost all subjects (96.0%) viewed P(D/H) as relevant and very few (2.8%) viewed P(~H) as relevant. Overall, 54.3% of the subjects deemed that P(D/~H) was necessary information and 41.5% of the sample thought it was necessary to know the base rate, P(H). We examined the cognitive characteristics of the subjects who thought the base rate was relevant and found that they did not display higher SAT scores than those who did not choose the base rate. The pattern of individual differences was quite different for the denominator of the likelihood ratio, P(D/~H)--a component which is normatively uncontroversial. Subjects seeing this information as relevant had significantly higher SAT scores. Interestingly, in light of these patterns of individual differences showing lack of positive manifold when the tasks are scored in terms of the standard Bayesian approach, noncausal base-rate problems like the AIDS and cab problems have been the focus of intense debate in the literature (Cohen, 1979, 1981, 1982, 1986; Koehler, 1996; Kyburg, 1983; Levi, 1983). Several authors have argued that a rote application of the Bayesian formula to these problems is unwarranted because noncausal base rates of the AIDS-problem type lack relevance and reference-class specificity. Finally, our results might also suggest that the Bayesian subjects on the AIDS problem might not actually be arriving at their response through anything resembling Bayesian processing (whether or not they were operating in a frequentist mode; Gigerenzer & Hoffrage, 1995), because on causal aggregate statistical reasoning problems these subjects were less likely to rely on the aggregate information.

5. Alternative Task Construals

Theorists who resist interpreting the gap between normative and descriptive models as indicating human irrationality have one more strategy available in addition to those previously described.
In the context of empirical cognitive psychology, it is a commonplace argument, but it is one that continues to create enormous controversy and to bedevil efforts to compare human performance to normative standards. It is the argument that although the experimenter may well be applying the correct normative model to the problem as set, the subject might be construing the problem differently and be providing the normatively appropriate answer to a different problem--in short, that subjects have a different interpretation of the task (see, for example, Adler, 1984, 1991; Broome, 1990; Henle, 1962; Hilton, 1995; Levinson, 1995; Margolis, 1987; Schick, 1987, 1997; Schwarz, 1996). Such an argument is somewhat different from any of the critiques examined thus far. It is not the equivalent of positing that a performance error has been made, because performance errors (attention lapses, etc.)--being transitory and random--would not be expected to recur in exactly the same way in a readministration of the same task. If, in contrast, the subject has truly misunderstood the task, the misconstrual would be expected to recur on an identical readministration of the task. Correspondingly, this criticism is different from the argument that the task exceeds the computational capacity of the subject. The latter explanation locates the cause of the suboptimal performance within the subject. In contrast, the alternative task construal argument places the blame at least somewhat on the shoulders of the experimenter for failing to realize that there were task features that might lead subjects to frame the problem in a manner different from that intended^7. As with incorrect norm application, the alternative construal argument locates the problem with the experimenter. However, it differs in that the wrong-norm explanation assumes that the subject is interpreting the task as the experimenter intended--the experimenter is simply not using the right criteria to evaluate performance. In contrast, the alternative task construal argument allows that the experimenter may be applying the correct normative model to the problem the experimenter intends the subject to solve--but posits that the subject has construed the problem in some other way and is providing a normatively appropriate answer to a different problem. It seems that in order to comprehensively evaluate the rationality of human cognition it will be necessary to evaluate the appropriateness of various task construals. This is because--contrary to thin theories of means/ends rationality that avoid evaluating the subject's task construal (Elster, 1983; Nathanson, 1994)--it will be argued here that if we are going to have any normative standards at all, then we must also have standards for what are appropriate and inappropriate task construals. In the remainder of this section, we will sketch the arguments of philosophers and decision scientists who have made just this point. Then it will be argued that: 1) in order to tackle the difficult problem of evaluating task construals, criteria of wide reflective equilibrium come into play; 2) it will be necessary to use all descriptive information about human performance that could potentially affect expert wide reflective equilibrium; 3) included in the relevant descriptive facts are individual differences in task construal and their patterns of covariance. This argument will again make use of the understanding/acceptance principle of Slovic and Tversky (1974) discussed in Section 4.2.
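The retest logic that separates a performance error from a stable misconstrual can be made concrete with a toy simulation (ours, not the authors'; Python, with invented parameters): independent lapses rarely repeat, whereas a fixed alternative construal produces the same answer on every administration.

    import random

    # Toy retest model: a transitory lapse is independent across administrations,
    # whereas a stable misconstrual yields the same (wrong) answer twice.
    def retest_agreement(lapse_rate, misconstrual, n=10000, seed=1):
        rng = random.Random(seed)
        agree = 0
        for _ in range(n):
            if misconstrual:
                first, second = "wrong", "wrong"     # same construal, same answer
            else:
                first = "wrong" if rng.random() < lapse_rate else "right"
                second = "wrong" if rng.random() < lapse_rate else "right"
            agree += first == second
        return agree / n

    print(retest_agreement(0.2, misconstrual=False))  # ~0.68: lapses rarely repeat
    print(retest_agreement(0.2, misconstrual=True))   # 1.0: the error recurs exactly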
5.1 The Necessity of Principles of Rational Construal

It is now widely recognized that the evaluation of the normative appropriateness of a response to a particular task is always relative to a particular interpretation of the task. For example, Schick (1987) argues that "how rationality directs us to choose depends on which understandings are ours....[and that] the understandings people have bear on the question of what would be rational for them" (pp. 53, 58). Likewise, Tversky (1975) argued that "the question of whether utility theory is compatible with the data or not, therefore, depends critically on the interpretation of the consequences" (p. 171). However, others have pointed to the danger inherent in too permissively explaining away nonnormative responses by positing different construals of the problem. Normative theories will be drained of all of their evaluative force if we adopt an attitude that is too charitable toward alternative construals. Broome (1990) illustrates the problem by discussing the preference reversal phenomenon (Lichtenstein & Slovic, 1971; Slovic, 1995). In a choice between two gambles, A and B, a person chooses A over B. However, when pricing the gambles, the person puts a higher price on B. This violation of procedural invariance leads to what appears to be intransitivity. Presumably there is an amount of money, M, that the person would prefer to A; yet, given a choice between M and B, the person would choose B. Thus, we appear to have B > M, M > A, A > B. Broome (1990) points out that when choosing A over B the subject is choosing A and is simultaneously rejecting B. Evaluating A in the M versus A comparison is not the same. Here, when choosing A, the subject is not rejecting B. The A alternative here might be considered to be a different prospect (call it A'), and if it is so considered there is no intransitivity (B > M, M > A', A > B). Broome (1990) argues that whenever the basic axioms such as transitivity, independence, or descriptive or procedural invariance are breached, the same inoculating strategy could be invoked--that of individuating outcomes so finely that the violation disappears. Broome's (1990) point is that the thinner the categories we use to individuate outcomes, the harder it will be to attribute irrationality to a set of preferences if we evaluate rationality only in instrumental terms. He argues that we need, in addition to the formal principles of rationality, those that deal with content so as to enable us to evaluate the reasonableness of a particular individuation of outcomes. Broome (1990) acknowledges that "this procedure puts principles of rationality to work at a very early stage of decision theory. They are needed in fixing the set of alternative prospects that preferences can then be defined upon. The principles in question might be called 'rational principles of indifference'" (p. 140). Broome (1990) admits that "many people think there can be no principles of rationality apart from the formal ones. This goes along with the common view that rationality can only be instrumental....[however] if you acknowledge only formal principles of rationality, and deny that there are any principles of indifference, you will find yourself without any principles of rationality at all" (pp. 140-141). Broome cites Tversky (1975) as concurring in this view: "I believe that an adequate analysis of rational choice cannot accept the evaluation of the consequences as given, and examine only the consistency of preferences.
There is probably as much irrationality in our feelings, as expressed in the way we evaluate consequences, as there is in our choice of actions. An adequate normative analysis must deal with problems such as the legitimacy of regret in Allais' problem....I do not see how the normative appeal of the axioms could be discussed without a reference to a specific interpretation" (Tversky, 1975, p. 172). Others agree with the Broome/Tversky analysis (see Baron, 1993, 1994; Frisch, 1994; Schick, 1997). But while there is some support for Broome's generic argument, the contentious disputes about rational principles of indifference and rational construals of the tasks in the heuristics and biases literature (Adler, 1984, 1991; Berkeley & Humphreys, 1982; Cohen, 1981, 1986; Gigerenzer, 1993, 1996a; Hilton, 1995; Jepson, Krantz, & Nisbett, 1983; Kahneman & Tversky, 1983, 1996; Lopes, 1991; Nisbett, 1981; Schwarz, 1996) highlight the difficulties to be faced when attempting to evaluate specific problem construals. For example, Margolis (1987) agrees with Henle (1962) that the subjects' nonnormative responses will almost always be logical responses to some other problem representation. But unlike Henle (1962), Margolis (1987) argues that many of these alternative task construals are so bizarre--so far from what the very words in the instructions said--that they represent serious cognitive errors that deserve attention: "But in contrast to Henle and Cohen, the detailed conclusions I draw strengthen rather than invalidate the basic claim of the experimenters. For although subjects can be--in fact, I try to show, ordinarily are--giving reasonable responses to a different question, the different question can be wildly irrelevant to anything that plausibly could be construed as the meaning of the question asked. The locus of the illusion is shifted, but the force of the illusion is confirmed not invalidated or explained away" (p. 141).

5.2 Evaluating Principles of Rational Construal: The Understanding/Acceptance Assumption Revisited

Given current arguments that principles of rational construal are necessary for a full normative theory of human rationality (Broome, 1990; Einhorn & Hogarth, 1981; Jungermann, 1986; Schick, 1987, 1997; Shweder, 1987; Tversky, 1975), how are such principles to be derived? When searching for principles of rational task construal, the same mechanisms of justification used to assess principles of instrumental rationality will be available. Perhaps in some cases--instances where the problem structure maps the world in an unusually close and canonical way--problem construals could be directly evaluated by how well they serve the decision maker in achieving their goals (Baron, 1993, 1994). In such cases, it might be possible to prove the superiority or inferiority of certain construals by appeals to Dutch Book or money pump arguments (de Finetti, 1970/1990; Maher, 1993; Skyrms, 1986; Osherson, 1995; Resnik, 1987). Also available will be the expert wide reflective equilibrium view discussed by Stich and Nisbett (1980; see Stanovich, 1999; Stein, 1996). In contrast, Baron (1993, 1994) and Thagard (1982) argue that rather than any sort of reflective equilibrium, what is needed here are "arguments that an inferential system is optimal with respect to the criteria discussed" (Thagard, 1982, p. 40). But in the area of task construal, finding optimization of criteria may be unlikely--there will be few money pumps or Dutch Books to point the way.
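Because money pump arguments recur throughout this discussion, a toy illustration may help (ours, not from the article; Python, with invented items and dollar amounts): an agent whose pairwise preferences cycle, as in Broome's B > M, M > A, A > B pattern, can be charged a small fee for every "upgrade" and led around the cycle indefinitely.

    # A money pump against cyclic preferences, relabeled as a three-item cycle:
    # the agent prefers A to B, B to C, and C to A, and pays a fee to trade up.
    prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # A over B, B over C, C over A

    def run_pump(item, fee, wealth, rounds):
        upgrade = {"B": "A", "C": "B", "A": "C"}     # what the trader offers next
        for _ in range(rounds):
            offer = upgrade[item]
            if (offer, item) in prefers:             # agent strictly prefers the offer
                wealth -= fee                        # and so pays to trade up
                item = offer
        return wealth

    print(run_pump("B", fee=0.25, wealth=10.0, rounds=30))   # 2.5: steadily drained

The agent ends each circuit holding the item it started with, but poorer--which is why exploitability of this kind is taken as a decisive mark of irrationality where it can be demonstrated.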
If, in the area of task construal, there are few money pumps or Dutch Books to prove that a particular task interpretation has disastrous consequences, then the field will again be thrust back upon the debate that Thagard (1982) calls the argument between the populists and the elitists. But as argued before, this is really a misnomer. There are few controversial tasks in the heuristics and biases literature where all untutored laypersons interpret tasks differently from the experts who designed them. The issue is not the untutored average person versus experts, but experts plus some laypersons versus other untutored individuals. The cognitive characteristics of those departing from the expert construal might--for reasons parallel to those argued in section 4--have implications for how we evaluate particular task interpretations. It is argued here that Slovic and Tversky's (1974) assumption ("the deeper the understanding of the axiom, the greater the readiness to accept it," pp. 372-373) can again be used as a tool to condition the expert reflective equilibrium regarding principles of rational task construal. Framing effects are ideal vehicles for demonstrating how the understanding/acceptance principle might be utilized. First, it has already been shown that there are consistent individual differences across a variety of framing problems (Frisch, 1993). Second, framing problems have engendered much dispute regarding issues of appropriate task construal. The Disease Problem of Tversky and Kahneman (1981) has been the subject of much contention:

Problem 1. Imagine that the U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

If Program A is adopted, 200 people will be saved.

If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.

Which of the two programs would you favor, Program A or Program B?

Problem 2. Imagine that the U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

If Program C is adopted, 400 people will die.

If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.

Which of the two programs would you favor, Program C or Program D?

Many subjects select alternatives A and D in these two problems despite the fact that the two problems are redescriptions of each other and that Program A maps to Program C rather than D. This response pattern violates the assumption of descriptive invariance of utility theory. However, Berkeley and Humphreys (1982) argue that Programs A and C might not be descriptively invariant in subjects' interpretations. They argue that the wording of the outcome of Program A ("will be saved"), combined with the fact that its outcome is seemingly not described in as exhaustive a way as the consequences of Program B, suggests the possibility of future human agency that might enable the saving of more lives (see also Kuhberger, 1995).
The wording of the outcome of Program C ("will die") does not suggest the possibility of future human agency working to possibly save more lives (indeed, the possibility of losing a few more might be inferred by some people). Under such a construal of the problem, it is no longer non-normative to choose Programs A and D. Likewise, Macdonald (1986) argues that, regarding the "200 people will be saved" phrasing, "it is unnatural to predict an exact number of cases" (p. 24) and that "ordinary language reads 'or more' into the interpretation of the statement" (p. 24; see also Jou, Shanteau, & Harris, 1996). However, being forced to provide a rationale or to take more time reduces framing effects (e.g., Larrick et al., 1992; Sieck & Yates, 1997; Takemura, 1994), and people higher in need for cognition display reduced framing effects (Smith & Levin, 1996). Consistent with these findings, in our within-subjects study of framing effects on the Disease Problem (Stanovich & West, 1998b) we found that subjects giving a consistent response to both descriptions of the problem--actually the majority in our within-subjects experiment--were significantly higher in cognitive ability than those displaying a framing effect. Thus, the results of studies investigating the effects of giving a rationale, taking more time, associations with cognitive engagement, and associations with cognitive ability are all consistent in suggesting that the response dictated by the construal of the problem originally favored by Tversky and Kahneman (1981) should be considered the correct response, because it is endorsed even by untutored subjects as long as they are cognitively engaged with the problem, have enough time to process the information, and have the cognitive ability to fully process the information^8. Perhaps no finding in the heuristics and biases literature has been the subject of as much criticism as Tversky and Kahneman's (1983) claim to have demonstrated a conjunction fallacy in probabilistic reasoning. Most of the criticisms have focused on the issue of differential task construal, and several critics have argued that there are alternative construals of the tasks that are, if anything, more rational than that which Tversky and Kahneman (1983) regard as normative for examples such as the well-known Linda problem:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Please rank the following statements by their probability, using 1 for the most probable and 8 for the least probable.

a. Linda is a teacher in an elementary school
b. Linda works in a bookstore and takes Yoga classes
c. Linda is active in the feminist movement
d. Linda is a psychiatric social worker
e. Linda is a member of the League of Women Voters
f. Linda is a bank teller
g. Linda is an insurance salesperson
h. Linda is a bank teller and is active in the feminist movement

Because alternative h is the conjunction of alternatives c and f, the probability of h cannot be higher than that of either c or f, yet 85% of the subjects in Tversky and Kahneman's (1983) study rated alternative h as more probable than f. What concerns us here is the argument that there are subtle linguistic and pragmatic features of the problem that lead subjects to evaluate alternatives different from those listed.
For example, Hilton (1995) argues that, if the detailed information given about the target means that the experimenter knows a considerable amount about Linda, then it is reasonable to think that the phrase "Linda is a bank teller" does not contain the phrase "and is not active in the feminist movement" because the experimenter already knows this to be the case. If "Linda is a bank teller" is interpreted in this way, then rating h as more probable than f no longer represents a conjunction fallacy. Similarly, Morier and Borgida (1984) point out that the presence of the unusual conjunction "Linda is a bank teller and is active in the feminist movement" itself might prompt an interpretation of "Linda is a bank teller" as "Linda is a bank teller and is not active in the feminist movement". Actually, Tversky and Kahneman (1983) themselves had concerns about such an interpretation of the "Linda is a bank teller" alternative and ran a condition in which this alternative was rephrased as "Linda is a bank teller whether or not she is active in the feminist movement". They found that the conjunction fallacy was reduced from 85% of their sample to 57% when this alternative was used. Several other investigators have suggested that pragmatic inferences lead to seeming violations of the logic of probability theory in the Linda Problem^9 (see Adler, 1991; Dulany & Hilton, 1991; Levinson, 1995; Macdonald & Gilhooly, 1990; Politzer & Noveck, 1991; Slugoski & Wilson, 1998). These criticisms all share the implication that actually committing the conjunction fallacy is a rational response to an alternative construal of the different statements about Linda. Assuming that those committing the so-called conjunction fallacy are making the pragmatic interpretation and that those avoiding the fallacy are making the interpretation that the investigators intended, we examined whether the subjects making the pragmatic interpretation were disproportionately those of higher cognitive ability. Because this group is in fact the majority in most studies--and because the use of such pragmatic cues and background knowledge is often interpreted as reflecting adaptive information processing (e.g., Hilton, 1995)--it might be expected that these individuals would be those of higher cognitive ability. In our study (Stanovich & West, 1998b), we examined the performance of 150 subjects on the Linda Problem presented above. Consistent with the results of previous experiments on this problem (Tversky & Kahneman, 1983), 80.7% of our sample displayed the conjunction effect--they rated the feminist bank teller alternative as more probable than the bank teller alternative. The mean SAT score of the 121 subjects who committed the conjunction fallacy was 82 points lower than the mean score of the 29 who avoided the fallacy. This difference was highly significant, and it translated into an effect size of .746, which Rosenthal and Rosnow (1991, p. 446) classify as "large." Tversky and Kahneman (1983) and Reeves and Lockhart (1993) have demonstrated that the incidence of the conjunction fallacy can be decreased if the problem describes the event categories in some finite population or if the problem is presented in a frequentist manner (see also Fiedler, 1988; Gigerenzer, 1991b, 1993). We have replicated this well-known finding, but we have also found that frequentist representations of these problems markedly reduce--if not eliminate--cognitive ability differences (Stanovich & West, 1998b).
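Two of the quantities at issue in this passage can be checked with a short sketch (ours, not the authors'; Python; the Linda probabilities are invented for illustration): the conjunction rule itself, and the pooled standard deviation implied by the reported 82-point gap and effect size of .746.

    # The conjunction rule: P(A and B) <= min(P(A), P(B)) for any assignment.
    # Illustrative, invented numbers for the Linda problem:
    p_teller = 0.05                        # P(Linda is a bank teller)
    p_feminist_given_teller = 0.50         # P(feminist | bank teller)
    p_conjunction = p_teller * p_feminist_given_teller
    assert p_conjunction <= p_teller       # 0.025 <= 0.05, necessarily

    # Cohen's d = mean difference / pooled standard deviation, so the reported
    # SAT gap and effect size jointly imply the pooled SD:
    print(82 / 0.746)                      # ~110 SAT points

No matter how high the conditional probability of feminism given bank telling, the conjunction can never exceed the probability of the bank-teller statement alone--which is why rating h above f violates probability theory under the intended construal.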
Another problem that has spawned many arguments about alternative construals is Wason's (1966) selection task. Performance on abstract versions of the selection task is extremely low (see Evans, Newstead, & Byrne, 1993). Typically, fewer than 10% of subjects make the correct selections of the A card (P) and the 7 card (not-Q). The most common incorrect choices made by subjects are the A card and the 3 card (P and Q) or the selection of the A card only (P). The preponderance of P and Q responses has most often been attributed to a so-called matching bias that is automatically triggered by surface-level relevance cues (Evans, 1996; Evans & Lynch, 1973), but some investigators have championed an explanation based on an alternative task construal. For example, Oaksford and Chater (1994, 1996; see also Nickerson, 1996) argue that rather than interpreting the task as one of deductive reasoning (as the experimenter intends), many subjects interpret it as an inductive problem of probabilistic hypothesis testing. They show that the P and Q response is expected under a formal Bayesian analysis which assumes such an interpretation in addition to optimal data selection. We have examined individual differences in responding on a variety of abstract and deontic selection task problems (Stanovich & West, 1998a, 1998c). Typical results are displayed in Table 2. The table presents the mean SAT scores of subjects responding correctly (as traditionally interpreted--with the responses P and not-Q) on various versions of selection task problems. One was a commonly used nondeontic problem with content, the so-called Destination Problem (e.g., Manktelow & Evans, 1979). Replicating previous research, few subjects responded correctly on this problem. However, those who did had significantly higher SAT scores than those who did not, and the difference was quite large in magnitude (effect size of .815). Also presented in the table are two well-known problems (Dominowski, 1995; Griggs, 1983; Griggs & Cox, 1982, 1983; Newstead & Evans, 1995) with deontic rules (reasoning about rules used to guide human behavior--about what "ought to" or "must" be done; see Manktelow & Over, 1991)--the Drinking-Age Problem (If a person is drinking beer then the person must be over 21 years of age) and the Sears Problem (Any sale over $30 must be approved by the section manager, Mr. Jones). Both are known to facilitate performance, and this effect is clearly replicated in the data presented in Table 2. However, it is also clear that the differences in cognitive ability are much less in these two problems. The effect size is reduced from .815 to .347 in the case of the Drinking-Age Problem, and it fails to even reach statistical significance in the case of the Sears Problem (effect size of .088). The bottom half of the table indicates that exactly the same pattern was apparent when the P and not-Q responders were compared only with the P and Q responders on the Destination Problem--the latter being the response that is most consistent with an inductive construal of the problem (see Nickerson, 1996; Oaksford & Chater, 1994, 1996).
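To make the deductive construal concrete before turning to Table 2, here is a small enumeration (ours, not the authors'; Python). The rule wording is an assumption for illustration--a common abstract version reads "if a card has an A on one side, then it has a 3 on the other"--and the sketch asks, for each visible face, whether any hidden face could falsify the rule:

    # Deductive construal of the selection task: turn a card only if some hidden
    # face could falsify the rule. Rule wording assumed for illustration:
    # "If a card has an A on one side, then it has a 3 on the other."
    cards = {"A": "letter", "K": "letter", "3": "number", "7": "number"}
    possible_hidden = {"letter": ["3", "7"], "number": ["A", "K"]}

    def can_falsify(visible, kind):
        for hidden in possible_hidden[kind]:
            faces = {visible, hidden}
            if "A" in faces and "3" not in faces:   # an A paired with a non-3
                return True
        return False

    print([card for card, kind in cards.items() if can_falsify(card, kind)])
    # ['A', '7'] -- only the P and not-Q cards are potential falsifiers

The Q card (3) can never falsify the rule, whatever is on its back, which is why an exhaustive search for falsifying instances yields P and not-Q rather than the common P and Q selection.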
Table 2
Mean SAT Total Scores of Subjects Who Gave the Correct and Incorrect Responses to Three Different Selection Task Problems (Numbers in Parentheses are the Number of Subjects)

                          Incorrect    P & not-Q (Correct)   t value   Effect Size^a
Nondeontic Problem:
  Destination Problem     1187 (197)   1270 (17)             3.21***   .815
Deontic Problems:
  Drinking-Age Problem    1170 (72)    1206 (143)            2.39**    .347
  Sears Problem           1189 (87)    1198 (127)            0.63      .088

                          P & Q        P & not-Q             t value   Effect Size^a
Nondeontic Problem:
  Destination Problem     1195 (97)    1270 (17)             3.06***   .812

Note: df = 212 for the Destination and Sears Problems and 213 for the Drinking-Age Problem; df = 112 for the P & Q comparison on the Destination Problem.
* = p < .05, ** = p < .025, *** = p < .01, all two-tailed
^a Cohen's d

Thus, on the selection task, it appears that cognitive ability differences are strong in cases where there is a dispute about the proper construal of the task (in nondeontic tasks). In cases where there is little controversy about alternative construals--the deontic rules of the Drinking-Age and Sears problems--cognitive ability differences are markedly attenuated. This pattern--cognitive ability differences large on problems where there is contentious dispute regarding the appropriate construal and cognitive ability differences small when there is no dispute about task construal--is mirrored in our results on the conjunction effect and framing effect (Stanovich & West, 1998b).

6. Dual Process Theories and Alternative Task Construals

The sampling of results just presented (for other examples, see Stanovich, 1999) has demonstrated that the responses associated with alternative construals of a well-known framing problem (the Disease Problem), of the Linda Problem, and of the nondeontic selection task were consistently associated with lower cognitive ability. How might we interpret this consistent pattern displayed on three tasks from the heuristics and biases literature where alternative task construals have been championed? One possible interpretation of this pattern is in terms of two-process theories of reasoning (Epstein, 1994; Evans, 1984, 1996; Evans & Over, 1996; Sloman, 1996). A summary of the generic properties distinguished by several two-process views is presented in Table 3. Although the details and technical properties of these dual-process theories do not always match exactly, there are nevertheless clear family resemblances (for discussions, see Evans & Over, 1996; Gigerenzer & Regier, 1996; Sloman, 1996). In order to emphasize the prototypical view that is adopted here, the two systems have simply been generically labeled System 1 and System 2.
Table 3
The Terms for the Two Systems Used by a Variety of Theorists and the Properties of Dual-Process Theories of Reasoning

                            System 1                      System 2
Dual-Process Theories:
  Sloman (1996)             associative system            rule-based system
  Evans (1984, 1989)        heuristic processing          analytic processing
  Evans & Over (1996)       tacit thought processes       explicit thought processes
  Reber (1993)              implicit cognition            explicit learning
  Levinson (1995)           interactional intelligence    analytic intelligence
  Epstein (1994)            experiential system           rational system
  Pollock (1991)            quick & inflexible modules    intellection
  Hammond (1996)            intuitive cognition           analytical cognition
  Klein (1998)              recognition-primed decisions  rational choice strategy
  Johnson-Laird (1983)      implicit inferences           explicit inferences
Properties:
  associative                                   rule-based
  holistic                                      analytic
  automatic                                     controlled
  relatively undemanding of cognitive capacity  demanding of cognitive capacity
  relatively fast                               relatively slow
  acquisition by biology, exposure, and personal experience   acquisition by cultural and formal tuition
Task Construal:
  highly contextualized                         decontextualized
  personalized                                  depersonalized
  conversational and social                     asocial
Type of Intelligence Indexed:
  interactional (conversational implicature)    analytic (psychometric IQ)

The key differences in the properties of the two systems are listed next. System 1 is characterized as automatic, largely unconscious, and relatively undemanding of computational capacity. Thus, it conjoins properties of automaticity and heuristic processing as these constructs have been variously discussed in the literature. These properties characterize what Levinson (1995) has termed interactional intelligence--a system composed of the mechanisms that support a Gricean theory of communication that relies on intention-attribution. This system has as its goal the ability to model other minds in order to read intention and to make rapid interactional moves based on those modeled intentions. System 2 conjoins the various characteristics that have been viewed as typifying controlled processing. System 2 encompasses the processes of analytic intelligence that have traditionally been studied by information processing theorists trying to uncover the computational components underlying intelligence. For the purposes of the present discussion, the most important difference between the two systems is that they tend to lead to different types of task construals. Construals triggered by System 1 are highly contextualized, personalized, and socialized. They are driven by considerations of relevance and are aimed at inferring intentionality by the use of conversational implicature even in situations that are devoid of conversational features (see Margolis, 1987). The primacy of these mechanisms leads to what has been termed the fundamental computational bias in human cognition (Stanovich, 1999)--the tendency toward automatic contextualization of problems. In contrast, System 2's more controlled processes serve to decontextualize and depersonalize problems. This system is more adept at representing problems in terms of rules and underlying principles. It can deal with problems without social content and is not dominated by the goal of attributing intentionality or by the search for conversational relevance. Using the distinction between System 1 and System 2 processing, it is conjectured here that in order to observe large cognitive ability differences in a problem situation, the two systems must strongly cue different responses^10.
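The mechanics of this conjecture can be illustrated with a toy simulation (ours, not the data simulation reported in Stanovich & West, 1998a; Python, with all parameters invented): when the two systems conflict, only subjects whose System 2 overrides System 1 respond correctly, so the correct group averages much higher ability; when both systems cue the same response, the ability gap shrinks.

    import math, random

    def mean(xs):
        return sum(xs) / len(xs)

    # Toy model (all parameters invented): each subject has analytic ability z,
    # and System 2 overrides System 1 with probability increasing in z.
    def ability_gap(systems_conflict, n=20000, seed=1):
        rng = random.Random(seed)
        correct, incorrect = [], []
        for _ in range(n):
            z = rng.gauss(0.0, 1.0)                   # analytic ability
            override = rng.random() < 1 / (1 + math.exp(-2 * z))
            system1_ok = rng.random() < 0.8           # heuristic cue, unrelated to z
            ok = override if systems_conflict else (override or system1_ok)
            (correct if ok else incorrect).append(z)
        return mean(correct) - mean(incorrect)

    print(ability_gap(True))    # systems conflict: gap above one standard deviation
    print(ability_gap(False))   # systems agree: the gap is markedly diluted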
It is not enough simply that both systems are engaged. If both cue the same response (as in deontic selection task problems), then this could have the effect of severely diluting any differences in cognitive ability. One reason that this outcome is predicted is that it is assumed that individual differences in System 1 processes (interactional intelligence) bear little relation to individual differences in System 2 processes (analytic intelligence). This is a conjecture for which there is a modest amount of evidence. Reber (1993) has shown preconscious processes to have low variability and to show little relation to analytic intelligence (see Jones & Day, 1997; McGeorge, Crawford, & Kelly, 1997; Reber, Walkenfeld, & Hernstadt, 1991). In contrast, if the two systems cue opposite responses, rule-based System 2 will tend to differentially cue those of high analytic intelligence, and this tendency will not be diluted by System 1 (the associative system) drawing subjects of all ability levels to that same response, as happens when the two systems agree. For example, the Linda Problem maximizes the tendency for the two systems to prime different responses, and this problem produced a large difference in cognitive ability. Similarly, in nondeontic selection tasks there is ample opportunity for the two systems to cue different responses. A deductive interpretation conjoined with an exhaustive search for falsifying instances yields the response P and not-Q. This interpretation and processing style is likely associated with the rule-based System 2--individual differences in which underlie the psychometric concept of analytic intelligence. In contrast, within the heuristic-analytic framework of Evans (1984, 1989, 1996), the matching response of P and Q reflects the heuristic processing of System 1 (in Evans' theory, a linguistically-cued relevance response). In deontic problems, both deontic and rule-based logics are cuing construals of the problem that dictate the same response (P and not-Q). Whatever one's theory of responding in deontic tasks--preconscious relevance judgments, pragmatic schemas, or Darwinian algorithms (e.g., Cheng & Holyoak, 1989; Cosmides, 1989; Cummins, 1996; Evans, 1996)--the mechanisms triggering the correct response resemble heuristic or modular structures that fall within the domain of System 1. These structures are unlikely to be strongly associated with analytic intelligence (Cummins, 1996; Levinson, 1995; McGeorge, Crawford, & Kelly, 1997; Reber, 1993; Reber, Walkenfeld, & Hernstadt, 1991), and hence they operate to draw subjects of both high and low analytic intelligence to the same response dictated by the rule-based system--thus serving to dilute cognitive ability differences between correct and incorrect responders (see Stanovich & West, 1998a, for a data simulation).

6.1 Alternative Construals: Evolutionary Optimization Versus Normative Rationality

The sampling of experimental results reviewed here (see Stanovich, 1999, for further examples) has demonstrated that the response dictated by the construal of the inventors of the Linda Problem (Tversky & Kahneman, 1983), Disease Problem (Tversky & Kahneman, 1981), and selection task (Wason, 1966) is the response favored by subjects of high analytic intelligence. The alternative responses dictated by the construals favored by the critics of the heuristics and biases literature were the choices of the subjects of lower analytic intelligence.
6.1 Alternative Construals: Evolutionary Optimization Versus Normative Rationality

The sampling of experimental results reviewed here (see Stanovich, 1999, for further examples) has demonstrated that the response dictated by the construal of the inventors of the Linda Problem (Tversky & Kahneman, 1983), Disease Problem (Tversky & Kahneman, 1981), and selection task (Wason, 1966) is the response favored by subjects of high analytic intelligence. The alternative responses dictated by the construals favored by the critics of the heuristics and biases literature were the choices of the subjects of lower analytic intelligence. In this section we will explore the possibility that these alternative construals may have been triggered by heuristics that make evolutionary sense, but that subjects higher in a more flexible type of analytic intelligence (and those more cognitively engaged; see Smith & Levin, 1996) are more prone to follow normative rules that maximize personal utility. In a very restricted sense, such a pattern might be said to have relevance for the concept of rational task construal. The argument depends on the distinction between evolutionary adaptation and instrumental rationality (utility maximization given goals and beliefs). The key point is that for the latter (variously termed practical, pragmatic, or means/ends rationality), maximization is at the level of the individual person. Adaptive optimization in the former case is at the level of the genes. In Dawkins' (1976, 1982) terms, evolutionary adaptation concerns optimization processes relevant to the so-called replicators (the genes), whereas instrumental rationality concerns utility maximization for the so-called vehicle (or interactor, to use Hull's, 1982, term), which houses the genes. Anderson (1990, 1991) emphasizes this distinction in his treatment of adaptationist models in psychology. In his advocacy of such models, Anderson (1990, 1991) eschews Dennett's (1987) assumption of perfect rationality in the instrumental sense (hereafter termed normative rationality) for the somewhat different assumption of evolutionary optimization (i.e., evolution as a local fitness maximizer). Anderson (1990) accepts Stich's (1990; see also Cooper, 1989; Skyrms, 1996) argument that evolutionary adaptation (hereafter termed evolutionary rationality)^11 does not guarantee perfect human rationality in the normative sense: "Rationality in the adaptive sense, which is used here, is not rationality in the normative sense that is used in studies of decision making and social judgment....It is possible that humans are rational in the adaptive sense in the domains of cognition studied here but not in decision making and social judgment" (p. 31). Thus, Anderson (1991) acknowledges that there may be arguments for "optimizing money, the happiness of oneself and others, or any other goal. It is just that these goals do not produce optimization of the species" (pp. 510-511). As a result, a descriptive model of processing that is adaptively optimal could well deviate substantially from a normative model. This is because Anderson's (1990, 1991) adaptation assumption is that cognition is optimally adapted in an evolutionary sense--and this is not the same as positing that human cognitive activity will result in normatively appropriate responses. Such a view can encompass both the impressive record of descriptive accuracy enjoyed by a variety of adaptationist models (Anderson, 1990, 1991; Oaksford & Chater, 1994, 1996, 1998) as well as the fact that cognitive ability sometimes dissociates from the response deemed optimal on an adaptationist analysis (Stanovich & West, 1998a). As discussed above, Oaksford and Chater (1994) have had considerable success in modeling the nondeontic selection task as an inductive problem in which optimal data selection is assumed (see also Oaksford, Chater, Grainger, & Larkin, 1997). Their model predicts the modal response of P and Q and the corresponding dearth of P and not-Q choosers. Similarly, Anderson (1990, pp.
157-160) models the 2 x 2 contingency assessment experiment using a model of optimally adapted information processing and shows how it can predict the much-replicated finding that the D cell (cause absent and effect absent) is vastly underweighted (see also Friedrich, 1993; Klayman & Ha, 1987). Finally, a host of investigators (Adler, 1984, 1991; Dulany & Hilton, 1991; Hilton, 1995; Levinson, 1995) have stressed how a model of rational conversational implicature predicts that violating the conjunction rule in the Linda Problem reflects the adaptive properties of interactional intelligence. Yet in all three of these cases--despite the fact that the adaptationist models predict the modal response quite well--individual differences analyses demonstrate associations that also must be accounted for. Correct responders on the nondeontic selection task (P and not-Q choosers--not those choosing P and Q) are higher in cognitive ability. In the 2 x 2 covariation detection experiment, it is those subjects weighting cell D more equally (not those underweighting the cell in the way that the adaptationist model dictates) who are higher in cognitive ability and who tend to respond normatively on other tasks (Stanovich & West, 1998d); a worked example of this covariation arithmetic follows below. Finally, despite conversational implicatures indicating the opposite, individuals of higher cognitive ability disproportionately tend to adhere to the conjunction rule.
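The covariation-detection case can be made concrete. One standard normative index for the 2 x 2 table is delta-p: the probability of the effect given the cause minus the probability of the effect given its absence. The cell frequencies and the underweighting factor below are made up solely to show how discounting cell D distorts the estimate.

    # Illustrative covariation arithmetic with made-up cell frequencies.
    # A = cause present, effect present    B = cause present, effect absent
    # C = cause absent, effect present     D = cause absent, effect absent
    A, B, C, D = 20, 10, 15, 45

    delta_p = A / (A + B) - C / (C + D)
    print(f"normative delta-p: {delta_p:.3f}")            # 0.417

    # A subject who counts each cell D observation at one third of its face
    # value (an arbitrary illustration of underweighting) infers less
    # contingency than is actually present:
    w = 1 / 3
    biased = A / (A + B) - C / (C + D * w)
    print(f"delta-p with underweighted D: {biased:.3f}")  # 0.167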
These patterns make sense if it is assumed that the two systems of processing are optimized for different situations and different goals and that these data patterns reflect the greater probability that the analytic intelligence of System 2 will override the interactional intelligence of System 1 in individuals of higher cognitive ability. In summary, the biases introduced by System 1 heuristic processing may well be universal--because the computational biases inherent in this system are ubiquitous and shared by all humans. However, it does not necessarily follow that errors on tasks from the heuristics and biases literature will be universal (we have known for some time that they are not). This is because, for some individuals, System 2 processes operating in parallel (see Evans & Over, 1996) will have the requisite computational power (or a low enough threshold) to override the response primed by System 1. It is hypothesized that the features of System 1 are designed to closely track increases in the reproduction probability of genes. System 2, while also clearly an evolutionary product, is primarily a control system focused on the interests of the whole person. It is the primary maximizer of an individual's personal utility^12. Maximizing the latter will occasionally result in sacrificing genetic fitness (Barkow, 1989; Cooper, 1989; Skyrms, 1996). Because System 2 is more attuned to normative rationality than is System 1, System 2 will seek to fulfill the individual's goals in the minority of cases where those goals conflict with the responses triggered by System 1. It is proposed that just such conflicts are occurring in three of the tasks discussed previously (the Disease Problem, the Linda Problem, and the selection task). This conjecture is supported by the fact that evolutionary rationality has been conjoined with Gricean principles of conversational implicature by several theorists (Gigerenzer, 1996b; Hilton, 1995; Levinson, 1995) who emphasize the principle of "conversationally rational interpretation" (Hilton, 1995, p. 265). According to this view, the pragmatic heuristics are not simply inferior substitutes for computationally costly logical mechanisms that would work better. Instead, the heuristics are optimally designed to solve an evolutionary problem in another domain--attributing intentions to conspecifics and coordinating mutual intersubjectivity so as to optimally negotiate cooperative behavior (Cummins, 1996; Levinson, 1995; Skyrms, 1996). It must be stressed, though, that in the vast majority of mundane situations, the evolutionary rationality embodied in System 1 processes will also serve the goals of normative rationality. Our automatic System 1 processes for accurately navigating around objects in the natural world were adaptive in an evolutionary sense, and they likewise serve our personal goals as we carry out our lives in the modern world (that is, navigational abilities are an evolutionary adaptation that serves the instrumental goals of the vehicle as well). One way to view the difference between what we have termed here evolutionary and normative rationality is to note that they are not really two different types of rationality (see Oaksford & Chater, 1998, pp. 291-297) but are instead terms for characterizing optimization procedures operating at the subpersonal and personal levels, respectively. That there are two optimization procedures in operation here that could come into conflict is a consequence of the insight that the genes--as subpersonal replicators--can increase their fecundity and longevity in ways that do not necessarily serve the instrumental goals of the vehicles built by the genome (Cooper, 1989; Skyrms, 1996). Skyrms (1996) devotes an entire book on evolutionary game theory to showing that the idea that "natural selection will weed out irrationality" (p. x) is false, because optimization at the subpersonal replicator level is not coextensive with the optimization of the instrumental goals of the vehicle (i.e., normative rationality). Gigerenzer (1996b) provides an example by pointing out that neither rats nor humans maximize utility in probabilistic contingency experiments. Instead of responding by choosing the most probable alternative on every trial, subjects alternate in a manner that matches the probabilities of the stimulus alternatives. This behavior violates normative strictures on utility maximization, but Gigerenzer (1996b) demonstrates how probability matching could actually be an evolutionarily stable strategy (see Cooper, 1989, and Skyrms, 1996, for many such examples). Examples such as this led Skyrms (1996) to note that "when I contrast the results of the evolutionary account with those of rational decision theory, I am not criticizing the normative force of the latter. I am just emphasizing the fact that the different questions asked by the two traditions may have different answers" (p. xi). Skyrms' (1996) book articulates the environmental and population parameters under which "rational choice theory completely parts ways with evolutionary theory" (p. 106; see also Cooper, 1989).
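The probability-matching example admits a one-line expected-value check; the 70% base rate below is an arbitrary illustration, not a value from the studies cited.

    # Expected accuracy of maximizing vs. probability matching when one
    # alternative occurs on 70% of trials (an assumed, illustrative rate).
    p = 0.7
    maximizing = max(p, 1 - p)            # always choose the likelier option
    matching = p * p + (1 - p) * (1 - p)  # choose each option at its own rate
    print(f"maximizing: {maximizing:.2f}")  # 0.70
    print(f"matching:   {matching:.2f}")    # 0.58

Maximizing personal utility means always picking the more probable alternative, yet matching is what subjects (and rats) tend to do--the sense in which the behavior can be evolutionarily stable while still violating normative strictures.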
Cognitive mechanisms that were fitness enhancing might well thwart our goals as personal agents in an industrial society (see Baron, 1998) because the assumption that our cognitive mechanisms are adapted in the evolutionary sense (Pinker, 1997) does not entail normative rationality. Thus, situations where evolutionary and normative rationality dissociate might well put the two processing systems in partial conflict with each other. These conflicts may be rare, but the few occasions on which they occur might be important ones. This is because knowledge-based, technological societies often put a premium on abstraction and decontextualization, and they sometimes require that the fundamental computational bias of human cognition be overridden by System 2 processes.

6.2 The Fundamental Computational Bias and Task Interpretation

The fundamental computational bias, that "specific features of problem content, and their semantic associations, constitute the dominant influence on thought" (Evans et al., 1983, p. 295; Stanovich, 1999), is no doubt rational in the evolutionary sense. Selection pressure was probably in the direction of radical contextualization. An organism that could bring more relevant information to bear (not forgetting the frame problem) on the puzzles of life probably dealt with the world better than competitors and thus reproduced with greater frequency and contributed more of its genes to future generations. Evans and Over (1996) argue that an overemphasis on normative rationality has led us to overlook the adaptiveness of contextualization and the nonoptimality of always decoupling prior beliefs from problem situations ("beliefs that have served us well are not lightly to be abandoned," p. 114). Their argument here parallels the reasons that philosophy of science has moved beyond naive falsificationism (see Howson & Urbach, 1993). Scientists do not abandon a richly confirmed and well-integrated theory at the first bit of falsifying evidence, because abandoning the theory might actually decrease explanatory coherence (Thagard, 1992). Similarly, Evans and Over (1996) argue that beliefs that have served us well in the past should be hard to dislodge, and projecting them onto new information--because of their past efficacy--might actually help in assimilating the new information. Evans and Over (1996) note the mundane but telling fact that when scanning a room for a particular shape, our visual systems register color as well. They argue that we do not impute irrationality to our visual systems because they fail to screen out the information that is not focal. Our systems of recruiting prior knowledge and contextual information to solve problems with formal solutions are probably likewise adaptive in the evolutionary sense. However, Evans and Over (1996) do note that there is an important disanalogy here as well, because studies of belief bias in syllogistic reasoning have shown that "subjects can to some extent ignore belief and reason from a limited number of assumptions when instructed to do so" (p. 117). That is, in the case of reasoning--as opposed to the visual domain--some people do have the cognitive flexibility to decouple unneeded systems of knowledge and some do not. The studies reviewed here indicate that those who do have the requisite flexibility are somewhat higher in cognitive ability and in actively open-minded thinking (see Stanovich & West, 1997). These styles and skills are largely System 2, not System 1, processes. Thus, the heuristics triggering alternative task construals in the various problems considered here may well be the adaptive evolutionary products embodied in System 1, as Levinson (1995) and others argue. Nevertheless, many of our personal goals may have become detached from their evolutionary context (see Barkow, 1989).
As Morton (1997) aptly puts it: "We can and do find ways to benefit from the pleasures that our genes have arranged for us without doing anything to help the genes themselves. Contraception is probably the most obvious example, but there are many others. Our genes want us to be able to reason, but they have no interest in our enjoying chess" (p. 106). Thus, we seek "not evolution's end of reproductive success but evolution's means, love-making"; the point of this example is that some human psychological traits may, at least in our current environment, be fitness-reducing (see Barkow, 1989, p. 296). And if the latter are pleasurable, analytic intelligence achieves normative rationality by pursuing them--not the adaptive goals of our genes. This is what Larrick et al. (1993) argue when they speak of analytic intelligence as "the set of psychological properties that enables a person to achieve his or her goals effectively. On this view, intelligent people will be more likely to use rules of choice that are effective in reaching their goals than will less intelligent people" (p. 345). Thus, high analytic intelligence may lead to task construals that track normative rationality, whereas the alternative construals of subjects low in analytic intelligence (and hence more dominated by System 1 processing) might be more likely to track evolutionary rationality in situations that put the two types of rationality in conflict--as is conjectured to be the case with the problems discussed previously. If construals consistent with normative rationality are more likely to satisfy our current individual goals (Baron, 1993, 1994) than are construals determined by evolutionary rationality (which are construals determined by our genes' metaphorical goal--reproductive success), then it is in this very restricted sense that individual difference relationships such as those illustrated here tell us which construals are "best."

6.3 The Fundamental Computational Bias and the Ecology of the Modern World

A conflict between the decontextualizing requirements of normative rationality and the fundamental computational bias may be one of the main reasons that normative and evolutionary rationality dissociate. The fundamental computational bias is meant to be a global term that captures the pervasive bias toward the contextualization of all informational encounters. It conjoins the following processing tendencies:
(a) the tendency to adhere to Gricean conversational principles even in situations that lack many conversational features (Adler, 1984; Hilton, 1995);
(b) the tendency to contextualize a problem with as much prior knowledge as is easily accessible, even when the problem is formal and the only solution is a content-free rule (Evans, 1982, 1989; Evans, Barston, & Pollard, 1983);
(c) the tendency to see design and pattern in situations that are either undesigned, unpatterned, or random (Levinson, 1995);
(d) the tendency to reason enthymematically--to make assumptions not stated in a problem and then reason from those assumptions (Henle, 1962; Rescher, 1988);
(e) the tendency toward a narrative mode of thought (Bruner, 1986, 1990).
Taken together, these properties represent a cognitive tendency toward radical contextualization. The bias is termed fundamental because it is thought to stem largely from System 1, and that system is assumed to be primary in that it permeates virtually all of our thinking (e.g., Evans & Over, 1996).
If the properties of this system are not to be the dominant factors in our thinking, then they must be overridden by System 2 processes so that the latter can carry out one of their important functions: abstracting complex situations into canonical representations that are stripped of context. Thus, it is likely that one computational task of System 2 is to decouple (see Navon, 1989a, 1989b) contextual features automatically supplied by System 1 when they are potentially interfering. In short, one of the functions of System 2 is to serve as an override system (see Pollock, 1991) for some of the automatic and obligatory computational results provided by System 1. This override function might be needed in only a tiny minority of information-processing situations (in most cases, the two systems will interact in concert), but these situations may be unusually important ones. For example, numerous theorists have warned about a possible mismatch between the fundamental computational bias and the processing requirements of many tasks in a technological society containing many symbolic artifacts and often requiring skills of abstraction (Adler, 1984, 1991; Donaldson, 1978, 1993). Hilton (1995) warns that the default assumption that Gricean conversational principles are operative may be wrong for many technical settings because "many reasoning heuristics may have evolved because they are adaptive in contexts of social interaction. For example, the expectation that errors of interpretation will be quickly repaired may be correct when we are interacting with a human being but incorrect when managing a complex system such as an aircraft, a nuclear power plant, or an economy. The evolutionary adaptiveness of such an expectation to a conversational setting may explain why people are so bad at dealing with lagged feedback in other settings" (p. 267). Concerns about the real-world implications of the failure to engage in necessary cognitive abstraction (see Adler, 1984) were what led Luria (1976) to warn against minimizing the importance of decontextualizing thinking styles. In discussing the syllogism, he notes that "a considerable proportion of our intellectual operations involve such verbal and logical systems; they comprise the basic network of codes along which the connections in discursive human thought are channeled" (p. 101). Likewise, regarding the subtle distinctions on many decontextualized language tasks, Olson (1986) has argued that "the distinctions on which such questions are based are extremely important to many forms of intellectual activity in a literate society. It is easy to show that sensitivity to the subtleties of language are crucial to some undertakings. A person who does not clearly see the difference between an expression of intention and a promise or between a mistake and an accident, or between a falsehood and a lie, should avoid a legal career or, for that matter, a theological one" (p. 341). Objective measures of the requirements for cognitive abstraction have been increasing across most job categories in technological societies throughout the past several decades (Gottfredson, 1997). This is why measures of the ability to deal with abstraction remain the best employment predictor and the best earnings predictor in postindustrial societies (Brody, 1997; Gottfredson, 1997; Hunt, 1995).
Einhorn and Hogarth (1981) highlighted the importance of decontextualized environments in their discussion of the optimistic (Panglossian/Apologist) and pessimistic (Meliorist) views of the cognitive biases revealed in laboratory experimentation. They noted that "the most optimistic asserts that biases are limited to laboratory situations which are unrepresentative of the natural ecology" (p. 82), but they go on to caution that "in a rapidly changing world it is unclear what the relevant natural ecology will be. Thus, although the laboratory may be an unfamiliar environment, lack of ability to perform well in unfamiliar situations takes on added importance" (p. 82). There is a caution in this comment for critics of the abstract content of most laboratory tasks and standardized tests. The issue is that, ironically, the argument that the laboratory tasks and tests are not like "real life" is becoming less and less true. "Life," in fact, is becoming more like the tests! The cognitive ecologists have, nevertheless, contributed greatly in the area of remediation methods for our cognitive deficiencies (Brase et al., 1998; Cosmides & Tooby, 1996; Fiedler, 1988; Gigerenzer & Hoffrage, 1995; Sedlmeier, 1997). Their approach is, however, somewhat different from that of the Meliorists. The ecologists concentrate on shaping the environment (changing the stimuli presented to subjects) so that the same evolutionarily adapted mechanisms that fail the standard of normative rationality under one framing of the problem give the normative response under an alternative (e.g., frequentistic) version. Their emphasis on environmental alteration provides a much-needed counterpoint to the Meliorist emphasis on cognitive change. The Meliorists, with their emphasis on reforming human thinking, no doubt miss opportunities to shape the environment so that it fits the representations that our brains are best evolved to deal with. Investigators framing cognition within a Meliorist perspective are often blind to the fact that there may be remarkably efficient mechanisms available in the brain--if only it were provided with the right type of representation. On the other hand, it is not always the case that the world will let us deal with representations that are optimally suited to our evolutionarily designed cognitive mechanisms. For example, in a series of elegant experiments, Gigerenzer, Hoffrage, and Kleinbölting (1991) have shown how at least part of the overconfidence effect in knowledge calibration studies is due to the unrepresentative stimuli used in such experiments--stimuli that do not match the subjects' stored cue validities, which are optimally tuned to the environment. But there are many instances in real life where we are suddenly placed in environments where the cue validities have changed. Metacognitive awareness of such situations (a System 2 activity) and strategies for suppressing the incorrect confidence judgments that System 1 automatically generates in response to such cues will be crucial here. Every high school musician who aspires to a career in music has to recalibrate upon arriving at university and encountering large numbers of talented musicians for the first time. Musicians who persist in their old confidence judgments may not change majors when they should. Many real-life situations where accomplishment yields a new environment with even more stringent performance requirements share this logic.
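The recalibration problem reduces to a simple piece of arithmetic. The validities below are assumed values for illustration only: a cue that was valid 90% of the time in the environment where it was learned keeps generating high confidence after a move to an environment where it is valid only 60% of the time.

    # Illustrative miscalibration arithmetic (all values assumed).
    old_validity = 0.90   # cue validity in the environment where it was learned
    new_validity = 0.60   # cue validity after the environment changes
    confidence = old_validity  # System 1 confidence still tracks the old ecology
    print(f"confidence {confidence:.2f} vs. accuracy {new_validity:.2f} "
          f"-> overconfidence {confidence - new_validity:+.2f}")

Until a System 2 strategy intervenes to suppress the automatically generated judgment, the subject remains overconfident by 30 percentage points in the new environment.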
Each time we "ratchet up" in the competitive environment of a capitalist economy, we are in a situation just like the overconfidence knowledge calibration experiments with their unrepresentative materials (Frank & Cook, 1995). It is important to have learned System 2 strategies that will temper one's overconfidence in such situations (Koriat, Lichtenstein, & Fischhoff, 1980).

7. Individual Differences and the Normative/Descriptive Gap

In our research program, we have attempted to demonstrate that a consideration of individual differences in the heuristics and biases literature may have implications for debates about the cause of the gap between normative models and descriptive models of actual performance. Patterns of individual differences have implications for arguments that all such gaps reflect merely performance errors. Individual differences are also directly relevant to theories that algorithmic-level limitations prevent the computation of the normative response in a system that would otherwise compute it. The wrong-norm and alternative-construal explanations of the gap involve many additional complications but, at the very least, patterns of individual differences might serve as "intuition pumps" (Dennett, 1980) and alter our reflective equilibrium regarding the plausibility of such explanations (Stanovich, 1999). Different outcomes occurred across the wide range of tasks we have examined in our research program. Of course, all the tasks had some unreliable variance, and thus some responses that deviated from the response considered normative could easily be considered performance errors. But not all deviations could be so explained. Several tasks (e.g., syllogistic reasoning with interfering content, four-card selection task) were characterized by heavy computational loads that made the normative response not prescriptive for some subjects--but these were usually few in number^13. Finally, a few tasks yielded patterns of covariance that served to raise doubts about the normative models applied to them and/or the task construals assumed by the problem inventors (e.g., several noncausal base-rate items, false consensus effect). Although many normative/descriptive gaps could be reduced by these mechanisms, not all of the discrepancies could be explained by factors that do not bring human rationality into question. Algorithmic-level limitations were far from absolute. The magnitude of the associations with cognitive ability left much room for the possibility that the remaining reliable variance might indicate that there are systematic irrationalities in intentional-level psychology. A heretofore unmentioned component of our research program produced data consistent with this possibility. Specifically, it was not the case that, once capacity limitations had been controlled, the remaining variations from normative responding were unpredictable (which would have indicated that the residual variance consisted largely of performance errors). In several studies, we have shown that there was significant covariance among the scores from a variety of tasks in the heuristics and biases literature after they had been residualized on measures of cognitive ability (Stanovich, 1999). The residual variance (after partialling cognitive ability) was also systematically associated with questionnaire responses that were conceptualized as intentional-level styles relating to epistemic regulation (Sá, West, & Stanovich, 1999; Stanovich & West, 1997, 1998c).
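The residualizing logic just described can be sketched in a few lines. The simulated data below are made up: two task scores are given a common loading on cognitive ability and on a shared disposition ("style") factor, and the question is whether the scores still covary once ability is partialled out.

    # Sketch of residualizing task scores on cognitive ability (made-up data).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    ability = rng.normal(size=n)    # cognitive ability measure
    style = rng.normal(size=n)      # shared intentional-level style factor
    task1 = 0.5 * ability + 0.4 * style + rng.normal(size=n)
    task2 = 0.5 * ability + 0.4 * style + rng.normal(size=n)

    def residualize(y, x):
        """Residuals of y after least-squares regression on x."""
        slope, intercept = np.polyfit(x, y, 1)
        return y - (slope * x + intercept)

    r_raw = np.corrcoef(task1, task2)[0, 1]
    r_resid = np.corrcoef(residualize(task1, ability),
                          residualize(task2, ability))[0, 1]
    print(f"raw correlation:            {r_raw:.2f}")
    print(f"after partialling ability:  {r_resid:.2f}")

A reliably positive residual correlation is the signature pattern: it indicates systematic variance beyond what computational limitations and random performance errors can account for.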
Both of these findings are indications that the residual variance is systematic. They falsify models that attempt to explain the normative/descriptive gap entirely in terms of computational limitations and random performance errors. Instead, the findings support the notion that the normative/descriptive discrepancies that remain after computational limitations have been accounted for reflect a systematically suboptimal intentional-level psychology. One of the purposes of the present research program is to reverse the figure and ground in the rationality debate, which has tended to be dominated by the particular way that philosophers frame the competence/performance distinction. For example, Cohen (1982) argues that there really are only two factors affecting performance on rational thinking tasks: "normatively correct mechanisms on the one side, and adventitious causes of error on the other" (p. 252). Not surprisingly given such a conceptualization, the processes contributing to error ("adventitious causes") are of little interest to Cohen (1981, 1982). But from a psychological standpoint, there may be important implications in precisely the aspects of performance that have been backgrounded in this controversy ("adventitious causes"). For example, Johnson-Laird and Byrne (1993) articulate a view of rational thought that parses the competence/performance distinction quite differently from that of Cohen (1981, 1982, 1986) and that simultaneously leaves room for systematically varying cognitive styles to play a more important role in theories of rational thought. At the heart of the rational competence that Johnson-Laird and Byrne (1993) attribute to humans is not perfect rationality but instead just one meta-principle: people are programmed to accept inferences as valid provided that they have constructed no mental model of the premises that contradicts the inference. Inferences are categorized as false when a mental model is discovered that is contradictory. However, the search for contradictory models is "not governed by any systematic or comprehensive principles" (p. 178). The key point in Johnson-Laird and Byrne's (1993; see Johnson-Laird, 1999; Johnson-Laird & Byrne, 1991) account^14 is that once an individual constructs a mental model from the premises, draws a new conclusion from the model, and begins the search for an alternative model of the premises that contradicts the conclusion, the individual "lacks any systematic method to make this search for counter-examples" (p. 205; see Bucciarelli & Johnson-Laird, in press). Here is where Johnson-Laird and Byrne's (1993) model could be modified to allow for the influence of thinking styles in ways that the impeccable competence view of Cohen (1981, 1982) does not. In this passage, Johnson-Laird and Byrne seem to be arguing that there are no systematic control features of the search process. But styles of epistemic regulation (Sá et al., 1999; Stanovich & West, 1997) may in fact be reflecting just such control features. Individual differences in the extensiveness of the search for contradictory models could arise from a variety of cognitive factors that, although they may not be completely systematic, may be far from "adventitious" (see Johnson-Laird & Oatley, 1992; Oatley, 1992; Overton, 1985, 1990)--factors such as dispositions toward premature closure, cognitive confidence, reflectivity, dispositions toward confirmation bias, and ideational generativity.
Dennett (1988) argues that we use the intentional stance for humans and dogs but not for lecterns because for the latter "there is no predictive leverage gained by adopting the intentional stance" (p. 496). In the experiments just mentioned (Sá et al., 1999; Stanovich & West, 1997, 1998c), it has been shown that there is additional predictive leverage to be gained by relaxing the idealized rationality assumption of Dennett's (1987, 1988) intentional stance and by positing measurable and systematic variation in intentional-level psychologies. Knowledge about such individual differences in people's intentional-level psychologies can be used to predict variance in the normative/descriptive gap displayed on many reasoning tasks. Consistent with the Meliorist conclusion that there can be individual differences in human rationality, our results show that there is variability in reasoning that cannot be accommodated within a model of perfect rational competence operating in the presence of performance errors and computational limitations.

References

Adler, J. E. (1984). Abstraction is uncooperative. Journal for the Theory of Social Behaviour, 14, 165-181.
Adler, J. E. (1991). An optimist's pessimism: Conversation and conjunctions. In E. Eells & T. Maruszewski (Eds.), Probability and rationality: Studies on L. Jonathan Cohen's philosophy of science (pp. 251-282). Amsterdam: Editions Rodopi.
Ajzen, I. (1977). Intuitive theories of events and the effects of base-rate information on prediction. Journal of Personality and Social Psychology, 35, 303-314.
Alker, H., & Hermann, M. (1971). Are Bayesian decisions artificially intelligent? The effect of task and personality on conservatism in information processing. Journal of Personality and Social Psychology, 19, 31-41.
Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine. Econometrica, 21, 503-546.
Alloy, L. B., & Tabachnik, N. (1984). Assessment of covariation by humans and animals: The joint influence of prior expectations and current situational information. Psychological Review, 91, 112-149.
Anderson, J. R. (1990). The adaptive character of thought. Hillsdale, NJ: Erlbaum.
Anderson, J. R. (1991). Is human cognition adaptive? Behavioral and Brain Sciences, 14, 471-517.
Arkes, H., & Hammond, K. (Eds.). (1986). Judgment and decision making. Cambridge, England: Cambridge University Press.
Ayton, P., & Hardman, D. (1997). Are two rationalities better than one? Current Psychology of Cognition, 16, 39-51.
Bara, B. G., Bucciarelli, M., & Johnson-Laird, P. N. (1995). Development of syllogistic reasoning. American Journal of Psychology, 108, 157-193.
Bar-Hillel, M. (1980). The base-rate fallacy in probability judgments. Acta Psychologica, 44, 211-233.
Bar-Hillel, M. (1990). Back to base rates. In R. M. Hogarth (Ed.), Insights into decision making: A tribute to Hillel J. Einhorn (pp. 200-216). Chicago: University of Chicago Press.
Barkow, J. H. (1989). Darwin, sex, and status: Biological approaches to mind and culture. Toronto: University of Toronto Press.
Baron, J. (1985). Rationality and intelligence. Cambridge: Cambridge University Press.
Baron, J. (1993). Morality and rational choice. Dordrecht: Kluwer.
Baron, J. (1994). Nonconsequentialist decisions. Behavioral and Brain Sciences, 17, 1-42.
Baron, J. (1995). Myside bias in thinking about abortion. Thinking and Reasoning, 1, 221-235.
Baron, J. (1998). Judgment misguided: Intuition and error in public decision making. New York: Oxford University Press.
Baron, J., & Hershey, J. C. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology, 54, 569-579.
Bell, D., Raiffa, H., & Tversky, A. (Eds.). (1988). Decision making: Descriptive, normative, and prescriptive interactions. Cambridge: Cambridge University Press.
Berkeley, D., & Humphreys, P. (1982). Structuring decision problems and the "bias heuristic". Acta Psychologica, 50, 201-252.
Birnbaum, M. H. (1983). Base rates in Bayesian inference: Signal detection analysis of the cab problem. American Journal of Psychology, 96, 85-94.
Block, J., & Funder, D. C. (1986). Social roles and social perception: Individual differences in attribution and "error". Journal of Personality and Social Psychology, 51, 1200-1207.
Brase, G. L., Cosmides, L., & Tooby, J. (1998). Individuation, counting, and statistical inference: The role of frequency and whole-object representations in judgment under uncertainty. Journal of Experimental Psychology: General, 127, 3-21.
Bratman, M. E., Israel, D. J., & Pollack, M. E. (1991). Plans and resource-bounded practical reasoning. In R. Cummins & J. Pollock (Eds.), Philosophy and AI: Essays at the interface (pp. 7-22). Cambridge, MA: MIT Press.
Brody, N. (1997). Intelligence, schooling, and society. American Psychologist, 52, 1046-1050.
Broome, J. (1990). Should a rational agent maximize expected utility? In K. S. Cook & M. Levi (Eds.), The limits of rationality (pp. 132-145). Chicago: University of Chicago Press.
Bruner, J. (1986). Actual minds, possible worlds. Cambridge, MA: Harvard University Press.
Bruner, J. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.
Bucciarelli, M., & Johnson-Laird, P. N. (in press). Strategies in syllogistic reasoning. Cognitive Science.
Byrnes, J. P., & Overton, W. F. (1986). Reasoning about certainty and uncertainty in concrete, causal, and propositional contexts. Developmental Psychology, 22, 793-799.
Cacioppo, J. T., Petty, R. E., Feinstein, J., & Jarvis, W. (1996). Dispositional differences in cognitive motivation: The life and times of individuals varying in need for cognition. Psychological Bulletin, 119, 197-253.
Carpenter, P. A., Just, M. A., & Shell, P. (1990). What one intelligence test measures: A theoretical account of the processing in the Raven Progressive Matrices Test. Psychological Review, 97, 404-431.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge: Cambridge University Press.
Carroll, J. B. (1997). Psychometrics, intelligence, and public perception. Intelligence, 24, 25-52.
Caryl, P. G. (1994). Early event-related potentials correlate with inspection time and intelligence. Intelligence, 18, 15-46.
Casscells, W., Schoenberger, A., & Graboys, T. (1978). Interpretation by physicians of clinical laboratory results. New England Journal of Medicine, 299, 999-1001.
Ceci, S. J. (1996). On intelligence: A bioecological treatise on intellectual development (expanded ed.). Cambridge, MA: Harvard University Press.
Cheng, P. W., & Holyoak, K. J. (1989). On the natural selection of reasoning theories. Cognition, 33, 285-313.
Cherniak, C. (1986). Minimal rationality. Cambridge, MA: MIT Press.
Cohen, L. J. (1979). On the psychology of prediction: Whose is the fallacy? Cognition, 7, 385-407.
Cohen, L. J. (1981). Can human irrationality be experimentally demonstrated? Behavioral and Brain Sciences, 4, 317-370.
Cohen, L. J. (1982). Are people programmed to commit fallacies? Further thoughts about the interpretation of experimental data on probability judgment. Journal for the Theory of Social Behaviour, 12, 251-274.
Cohen, L. J. (1983). The controversy about irrationality. Behavioral and Brain Sciences, 6, 510-517.
Cohen, L. J. (1986). The dialogue of reason. Oxford: Oxford University Press.
Cooper, W. S. (1989). How evolutionary biology challenges the classical theory of rational choice. Biology and Philosophy, 4, 457-481.
Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition, 31, 187-276.
Cosmides, L., & Tooby, J. (1994). Beyond intuition and instinct blindness: Toward an evolutionarily rigorous cognitive science. Cognition, 50, 41-77.
Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58, 1-73.
Cummins, D. D. (1996). Evidence for the innateness of deontic reasoning. Mind & Language, 11, 160-190.
Daston, L. (1980). Probabilistic expectation and rationality in classical probability theory. Historia Mathematica, 7, 234-260.
Dawes, R. M. (1989). Statistical criteria for establishing a truly false consensus effect. Journal of Experimental Social Psychology, 25, 1-17.
Dawes, R. M. (1990). The potential nonfalsity of the false consensus effect. In R. M. Hogarth (Ed.), Insights into decision making (pp. 179-199). Chicago: University of Chicago Press.
Dawkins, R. (1976). The selfish gene (new ed., 1989). New York: Oxford University Press.
Dawkins, R. (1982). The extended phenotype. New York: Oxford University Press.
Deary, I. J. (1995). Auditory inspection time and intelligence: What is the direction of causation? Developmental Psychology, 31, 237-250.
Deary, I. J., & Stough, C. (1996). Intelligence and inspection time. American Psychologist, 51, 599-608.
de Finetti, B. (1970). Theory of probability (Vol. 1). New York: John Wiley (republished, 1990).
Dennett, D. (1980). The milk of human intentionality. Behavioral and Brain Sciences, 3, 428-430.
Dennett, D. (1987). The intentional stance. Cambridge, MA: MIT Press.
Dennett, D. C. (1988). Précis of "The intentional stance". Behavioral and Brain Sciences, 11, 493-544.
Detterman, D. K. (1994). Intelligence and the brain. In P. A. Vernon (Ed.), The neuropsychology of individual differences (pp. 35-57). San Diego, CA: Academic Press.
Doherty, M. E., Chadwick, R., Garavan, H., Barr, D., & Mynatt, C. R. (1996). On people's understanding of the diagnostic implications of probabilistic data. Memory & Cognition, 24, 644-654.
Doherty, M. E., & Mynatt, C. (1990). Inattention to P(H) and to P(D/~H): A converging operation. Acta Psychologica, 75, 1-11.
Doherty, M. E., Schiavo, M., Tweney, R., & Mynatt, C. (1981). The influence of feedback and diagnostic data on pseudodiagnosticity. Bulletin of the Psychonomic Society, 18, 191-194.
Dominowski, R. L. (1995). Content effects in Wason's selection task. In S. E. Newstead & J. St. B. T. Evans (Eds.), Perspectives on thinking and reasoning (pp. 41-65). Hove, England: Erlbaum.
Donaldson, M. (1978). Children's minds. London: Fontana Paperbacks.
Donaldson, M. (1993). Human minds: An exploration. New York: Viking Penguin.
Dulany, D. E., & Hilton, D. J. (1991). Conversational implicature, conscious representation, and the conjunction fallacy. Social Cognition, 9, 85-110.
The Economist. (1998, December 12). The benevolence of self-interest. p. 80.
Einhorn, H. J., & Hogarth, R. M. (1981). Behavioral decision theory: Processes of judgment and choice. Annual Review of Psychology, 32, 53-88.
Elster, J. (1983). Sour grapes: Studies in the subversion of rationality. Cambridge, England: Cambridge University Press.
Epstein, S. (1994). Integration of the cognitive and the psychodynamic unconscious. American Psychologist, 49, 709-724.
Epstein, S., Lipson, A., Holstein, C., & Huh, E. (1992). Irrational reactions to negative outcomes: Evidence for two conceptual systems. Journal of Personality and Social Psychology, 62, 328-339.
Evans, J. St. B. T. (1982). The psychology of deductive reasoning. London: Routledge.
Evans, J. St. B. T. (1984). Heuristic and analytic processes in reasoning. British Journal of Psychology, 75, 451-468.
Evans, J. St. B. T. (1989). Bias in human reasoning: Causes and consequences. London: Erlbaum Associates.
Evans, J. St. B. T. (1996). Deciding before you think: Relevance and reasoning in the selection task. British Journal of Psychology, 87, 223-240.
Evans, J. St. B. T., Barston, J., & Pollard, P. (1983). On the conflict between logic and belief in syllogistic reasoning. Memory & Cognition, 11, 295-306.
Evans, J. St. B. T., & Lynch, J. S. (1973). Matching bias in the selection task. British Journal of Psychology, 64, 391-397.
Evans, J. St. B. T., Newstead, S. E., & Byrne, R. M. J. (1993). Human reasoning: The psychology of deduction. Hove, England: Erlbaum.
Evans, J. St. B. T., & Over, D. E. (1996). Rationality and reasoning. Hove, England: Psychology Press.
Fiedler, K. (1988). The dependence of the conjunction fallacy on subtle linguistic factors. Psychological Research, 50, 123-129.
Fong, G. T., Krantz, D. H., & Nisbett, R. E. (1986). The effects of statistical training on thinking about everyday problems. Cognitive Psychology, 18, 253-292.
Frank, R. H. (1990). Rethinking rational choice. In R. Friedland & A. Robertson (Eds.), Beyond the marketplace (pp. 53-87). New York: Aldine de Gruyter.
Friedrich, J. (1993). Primary error detection and minimization (PEDMIN) strategies in social cognition: A reinterpretation of confirmation bias phenomena. Psychological Review, 100, 298-319.
Frisch, D. (1993). Reasons for framing effects. Organizational Behavior and Human Decision Processes, 54, 399-429.
Frisch, D. (1994). Consequentialism and utility theory. Behavioral and Brain Sciences, 17, 16.
Fry, A. F., & Hale, S. (1996). Processing speed, working memory, and fluid intelligence. Psychological Science, 7, 237-241.
Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgment. Psychological Bulletin, 101, 75-90.
Gigerenzer, G. (1991a). From tools to theories: A heuristic of discovery in cognitive psychology. Psychological Review, 98, 254-267.
Gigerenzer, G. (1991b). How to make cognitive illusions disappear: Beyond "heuristics and biases". European Review of Social Psychology, 2, 83-115.
Gigerenzer, G. (1993). The bounded rationality of probabilistic mental models. In K. Manktelow & D. Over (Eds.), Rationality: Psychological and philosophical perspectives (pp. 284-313). London: Routledge.
Gigerenzer, G. (1996a). On narrow norms and vague heuristics: A reply to Kahneman and Tversky (1996). Psychological Review, 103, 592-596.
Gigerenzer, G. (1996b). Rationality: Why social context matters. In P. B. Baltes & U. Staudinger (Eds.), Interactive minds: Life-span perspectives on the social foundation of cognition (pp. 319-346). Cambridge: Cambridge University Press.
Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103, 650-669.
Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684-704.
Gigerenzer, G., Hoffrage, U., & Kleinbölting, H. (1991). Probabilistic mental models: A Brunswikian theory of confidence. Psychological Review, 98, 506-528.
Gigerenzer, G., & Regier, T. (1996). How do we tell an association from a rule? Comment on Sloman (1996). Psychological Bulletin, 119, 23-26.
Goldman, A. I. (1978). Epistemics: The regulative theory of cognition. Journal of Philosophy, 75, 509-523.
Gottfredson, L. S. (1997). Why g matters: The complexity of everyday life. Intelligence, 24, 79-132.
Greenfield, P. M. (1997). You can't take it with you: Why ability assessments don't cross cultures. American Psychologist, 52, 1115-1124.
Griggs, R. A. (1983). The role of problem content in the selection task and in the THOG problem. In J. St. B. T. Evans (Ed.), Thinking and reasoning: Psychological approaches (pp. 16-43). London: Routledge & Kegan Paul.
Griggs, R. A., & Cox, J. R. (1982). The elusive thematic-materials effect in Wason's selection task. British Journal of Psychology, 73, 407-420.
Griggs, R. A., & Cox, J. R. (1983). The effects of problem content and negation on Wason's selection task. Quarterly Journal of Experimental Psychology, 35, 519-533.
Hammond, K. R. (1996). Human judgment and social policy. New York: Oxford University Press.
Harman, G. (1995). Rationality. In E. E. Smith & D. N. Osherson (Eds.), Thinking (Vol. 3, pp. 175-211). Cambridge, MA: MIT Press.
Henle, M. (1962). On the relation between logic and thinking. Psychological Review, 69, 366-378.
Henle, M. (1978). Foreword. In R. Revlin & R. Mayer (Eds.), Human reasoning (pp. xiii-xviii). New York: John Wiley.
Hilton, D. J. (1995). The social context of reasoning: Conversational inference and rational judgment. Psychological Bulletin, 118, 248-271.
Hoch, S. J. (1987). Perceived consensus and predictive accuracy: The pros and cons of projection. Journal of Personality and Social Psychology, 53, 221-234.
Hoch, S. J., & Tschirgi, J. E. (1985). Logical knowledge and cue redundancy in deductive reasoning. Memory & Cognition, 13, 453-462.
Howson, C., & Urbach, P. (1993). Scientific reasoning: The Bayesian approach (2nd ed.). Chicago: Open Court.
Hull, D. L. (1982). The naked meme. In H. C. Plotkin (Ed.), Learning, development, and culture: Essays in evolutionary epistemology (pp. 273-327). Chichester, England: John Wiley.
Hunt, E. (1987). The next word on verbal ability. In P. A. Vernon (Ed.), Speed of information-processing and intelligence (pp. 347-392). Norwood, NJ: Ablex.
Hunt, E. (1995). Will we be smart enough? A cognitive analysis of the coming workforce. New York: Russell Sage Foundation.
Hunt, E. (1997). Nature vs. nurture: The feeling of vuja de. In R. J. Sternberg & E. L. Grigorenko (Eds.), Intelligence, heredity, and environment (pp. 531-551). Cambridge: Cambridge University Press.
Jacobs, J. E., & Potenza, M. (1991). The use of judgment heuristics to make social and object decisions: A developmental perspective. Child Development, 62, 166-178.
Jepson, C., Krantz, D., & Nisbett, R. (1983). Inductive reasoning: Competence or skill? Behavioral and Brain Sciences, 6, 494-501.
Johnson-Laird, P. N. (1983). Mental models. Cambridge, MA: Harvard University Press.
Johnson-Laird, P. N. (1999). Deductive reasoning. Annual Review of Psychology, 50, 109-135.
Johnson-Laird, P. N., & Byrne, R. M. J. (1991). Deduction. Hillsdale, NJ: Erlbaum.
Johnson-Laird, P. N., & Byrne, R. M. J. (1993). Models and deductive rationality. In K. Manktelow & D. Over (Eds.), Rationality: Psychological and philosophical perspectives (pp. 177-210). London: Routledge.
Johnson-Laird, P., & Oatley, K. (1992). Basic emotions, rationality, and folk theory. Cognition and Emotion, 6, 201-223.
Jones, K., & Day, J. D. (1997). Discrimination of two aspects of cognitive-social intelligence from academic intelligence. Journal of Educational Psychology, 89, 486-497.
Jou, J., Shanteau, J., & Harris, R. J. (1996). An information processing view of framing effects: The role of causal schemas in decision making. Memory & Cognition, 24, 1-15.
Jungermann, H. (1986). The two camps on rationality. In H. R. Arkes & K. R. Hammond (Eds.), Judgment and decision making (pp. 627-641). Cambridge: Cambridge University Press.
Kahneman, D. (1981). Who shall be the arbiter of our intuitions? Behavioral and Brain Sciences, 4, 339-340.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.
Kahneman, D., & Tversky, A. (1982). On the study of statistical intuitions. Cognition, 11, 123-141.
Kahneman, D., & Tversky, A. (1983). Can irrationality be intelligently discussed? Behavioral and Brain Sciences, 6, 509-510.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39, 341-350.
Kahneman, D., & Tversky, A. (1996). On the reality of cognitive illusions. Psychological Review, 103, 582-591.
Kardash, C. M., & Scholes, R. J. (1996). Effects of pre-existing beliefs, epistemological beliefs, and need for cognition on interpretation of controversial issues. Journal of Educational Psychology, 88, 260-271.
Klaczynski, P. A., Gordon, D. H., & Fauth, J. (1997). Goal-oriented critical reasoning and individual differences in critical reasoning biases. Journal of Educational Psychology, 89, 470-485.
Klahr, D., Fay, A. L., & Dunbar, K. (1993). Heuristics for scientific experimentation: A developmental study. Cognitive Psychology, 25, 111-146.
Klayman, J., & Ha, Y. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94, 211-228.
Klein, G. (1998). Sources of power: How people make decisions. Cambridge, MA: MIT Press.
Koehler, J. J. (1996). The base rate fallacy reconsidered: Descriptive, normative and methodological challenges. Behavioral and Brain Sciences, 19, 1-53.
Kornblith, H. (Ed.). (1985). Naturalizing epistemology. Cambridge, MA: MIT Press.
Kornblith, H. (1993). Inductive inference and its natural ground. Cambridge, MA: MIT Press.
Krantz, D. H. (1981). Improvements in human reasoning and an error in L. J. Cohen's. Behavioral and Brain Sciences, 4, 340-341.
Krueger, J., & Clement, R. (1994). The truly false consensus effect: An ineradicable and egocentric bias in social perception. Journal of Personality and Social Psychology, 65, 596-610.
Krueger, J., & Zeiger, J. (1993). Social categorization and the truly false consensus effect. Journal of Personality and Social Psychology, 65, 670-680.
Kühberger, A. (1995). The framing of decisions: A new look at old problems. Organizational Behavior and Human Decision Processes, 62, 230-240.
Kyburg, H. E. (1983). Rational belief. Behavioral and Brain Sciences, 6, 231-273.
Kyburg, H. E. (1991). Normative and descriptive ideals. In R. Cummins & J. Pollock (Eds.), Philosophy and AI: Essays at the interface (pp. 129-139). Cambridge, MA: MIT Press.
Kyllonen, P. C. (1996). Is working memory capacity Spearman's g? In I. Dennis & P. Tapsfield (Eds.), Human abilities: Their nature and measurement (pp. 49-76). Mahwah, NJ: Erlbaum.
Kyllonen, P. C., & Christal, R. E. (1990). Reasoning ability is (little more than) working memory capacity?! Intelligence, 14, 389-433.
Larrick, R. P., Nisbett, R. E., & Morgan, J. N. (1993). Who uses the cost-benefit rules of choice? Implications for the normative status of microeconomic theory. Organizational Behavior and Human Decision Processes, 56, 331-347.
Larrick, R. P., Smith, E. E., & Yates, J. F. (1992, November). Reflecting on the reflection effect: Disrupting the effects of framing through thought. Paper presented at the meeting of the Society for Judgment and Decision Making, St. Louis, MO.
Levi, I. (1983). Who commits the base rate fallacy? Behavioral and Brain Sciences, 6, 502-506.
Levinson, S. C. (1995). Interactional biases in human thinking. In E. Goody (Ed.), Social intelligence and interaction (pp. 221-260). Cambridge: Cambridge University Press.
Liberman, N., & Klar, Y. (1996). Hypothesis testing in Wason's selection task: Social exchange cheating detection or task understanding. Cognition, 58, 127-156.
Lichtenstein, S., Fischhoff, B., & Phillips, L. (1982). Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 306-334). Cambridge: Cambridge University Press.
Lichtenstein, S., & Slovic, P. (1971). Reversal of preferences between bids and choices in gambling decisions. Journal of Experimental Psychology, 89, 46-55.
Lopes, L. L. (1981). Performing competently. Behavioral and Brain Sciences, 4, 343-344.
Lopes, L. L. (1982). Doing the impossible: A note on induction and the experience of randomness. Journal of Experimental Psychology: Learning, Memory, and Cognition, 8, 626-636.
Lopes, L. L. (1991). The rhetoric of irrationality. Theory & Psychology, 1, 65-82.
Lopes, L. L., & Oden, G. C. (1991). The rationality of intelligence. In E. Eells & T. Maruszewski (Eds.), Probability and rationality: Studies on L. Jonathan Cohen's philosophy of science (pp. 199-223). Amsterdam: Editions Rodopi.
Lubinski, D., & Humphreys, L. G. (1997). Incorporating general intelligence into epidemiology and the social sciences. Intelligence, 24, 159-201.
Luria, A. R. (1976). Cognitive development: Its cultural and social foundations. Cambridge, MA: Harvard University Press.
Lyon, D., & Slovic, P. (1976). Dominance of accuracy information and neglect of base rates in probability estimation. Acta Psychologica, 40, 287-298.
Macchi, L. (1995). Pragmatic aspects of the base-rate fallacy. Quarterly Journal of Experimental Psychology, 48A, 188-207.
MacCrimmon, K. R. (1968). Descriptive and normative implications of the decision-theory postulates. In K. Borch & J. Mossin (Eds.), Risk and uncertainty (pp. 3-32). London: Macmillan.
MacCrimmon, K. R., & Larsson, S. (1979). Utility theory: Axioms versus "paradoxes". In M. Allais & O. Hagen (Eds.), Expected utility hypotheses and the Allais paradox (pp. 333-409). Dordrecht: D. Reidel.
Macdonald, R. (1986). Credible conceptions and implausible probabilities. British Journal of Mathematical and Statistical Psychology, 39, 15-27.
Macdonald, R. R., & Gilhooly, K. J. (1990). More about Linda or conjunctions in context. European Journal of Cognitive Psychology, 2, 57-70.
Maher, P. (1993). Betting on theories. Cambridge: Cambridge University Press.
Manktelow, K. I., & Evans, J. St. B. T. (1979). Facilitation of reasoning by realism: Effect or non-effect? British Journal of Psychology, 70, 477-488.
Manktelow, K. I., & Over, D. E. (1991). Social roles and utilities in reasoning with deontic conditionals. Cognition, 39, 85-105.
March, J. G. (1988). Bounded rationality, ambiguity, and the engineering of choice. In D. Bell, H. Raiffa, & A. Tversky (Eds.), Decision making: Descriptive, normative, and prescriptive interactions (pp. 33-57). Cambridge: Cambridge University Press.
Margolis, H. (1987). Patterns, thinking, and cognition. Chicago: University of Chicago Press.
Markovits, H., & Vachon, R. (1989). Reasoning with contrary-to-fact propositions. Journal of Experimental Child Psychology, 47, 398-412.
Marr, D. (1982). Vision. San Francisco: W. H. Freeman.
Matarazzo, J. D. (1972). Wechsler's measurement and appraisal of adult intelligence (5th ed.). Baltimore: Williams & Wilkins.
McGeorge, P., Crawford, J., & Kelly, S. (1997). The relationships between psychometric intelligence and learning in an explicit and an implicit task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 239-245.
Messer, W. S., & Griggs, R. A. (1993). Another look at Linda. Bulletin of the Psychonomic Society, 31, 193-196.
Miller, D. T., Turnbull, W., & McFarland, C. (1990). Counterfactual thinking and social perception: Thinking about what might have been. In M. P. Zanna (Ed.), Advances in experimental social psychology (pp. 305-331). San Diego: Academic Press.
Miller, P. M., & Fagley, N. S. (1991). The effects of framing, problem variations, and providing rationale on choice. Personality and Social Psychology Bulletin, 17, 517-522.
Morier, D. M., & Borgida, E. (1984). The conjunction fallacy: A task specific phenomenon? Personality and Social Psychology Bulletin, 10, 243-252.
Morton, O. (1997, November 3). Doing what comes naturally: A new school of psychology finds reasons for your foolish heart. The New Yorker, 73, 102-107.
Moshman, D., & Franks, B. (1986). Development of the concept of inferential validity. Child Development, 57, 153-165.
Moshman, D., & Geil, M. (1998). Collaborative reasoning: Evidence for collective rationality. Thinking and Reasoning, 4, 231-248.
Mynatt, C. R., Tweney, R. D., & Doherty, M. E. (1983). Can philosophy resolve empirical issues? Behavioral and Brain Sciences, 6, 506-507.
Nathanson, S. (1994). The ideal of rationality. Chicago: Open Court.
Navon, D. (1989a). The importance of being visible: On the role of attention in a mind viewed as an anarchic intelligence system: I. Basic tenets. European Journal of Cognitive Psychology, 1, 191-213.
Navon, D. (1989b). The importance of being visible: On the role of attention in a mind viewed as an anarchic intelligence system: II. Application to the field of attention. European Journal of Cognitive Psychology, 1, 215-238.
Neisser, U., Boodoo, G., Bouchard, T., Boykin, A. W., Brody, N., Ceci, S. J., Halpern, D., Loehlin, J., Perloff, R., Sternberg, R., & Urbina, S. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51, 77-101.
Newell, A. (1982). The knowledge level. Artificial Intelligence, 18, 87-127.
Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.
Newstead, S. E., & Evans, J. St. B. T. (Eds.). (1995). Perspectives on thinking and reasoning. Hove, England: Erlbaum.
Nickerson, R. S. (1996). Hempel's paradox and Wason's selection task: Logical and psychological puzzles of confirmation. Thinking and Reasoning, 2, 1-31.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175-220.
Nisbett, R. E. (1981). Lay arbitration of rules of inference. Behavioral and Brain Sciences, 4, 349-350.
Oaksford, M., & Chater, N. (1993). Reasoning theories and bounded rationality. In K. Manktelow & D. Over (Eds.), Rationality: Psychological and philosophical perspectives (pp. 31-60). London: Routledge.
Oaksford, M., & Chater, N. (1994). A rational analysis of the selection task as optimal data selection. Psychological Review, 101, 608-631.
Oaksford, M., & Chater, N. (1995). Theories of reasoning and the computational explanation of everyday inference. Thinking and Reasoning, 1, 121-152.
Oaksford, M., & Chater, N. (1996). Rational explanation of the selection task. Psychological Review, 103, 381-391.
Oaksford, M., & Chater, N. (1998). Rationality in an uncertain world. Hove, England: Psychology Press.
Oaksford, M., Chater, N., Grainger, B., & Larkin, J. (1997). Optimal data selection in the reduced array selection task (RAST). Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 441-458.
Oatley, K. (1992). Best laid schemes: The psychology of emotions. Cambridge: Cambridge University Press.
O'Brien, D. P. (1995). Finding logic in human reasoning requires looking in the right places. In S. E. Newstead & J. St. B. T. Evans (Eds.), Perspectives on thinking and reasoning (pp. 189-216). Hove, England: Erlbaum.
Osherson, D. N. (1995). Probability judgment. In E. E. Smith & D. N. Osherson (Eds.), Thinking (Vol. 3) (pp. 35-75). Cambridge, MA: The MIT Press.
Overton, W. F. (1985). Scientific methodologies and the competence-moderator performance issue. In E. D. Neimark, R. DeLisi, & J. L. Newman (Eds.), Moderators of competence (pp. 15-41). Hillsdale, NJ: Erlbaum.
Overton, W. F. (1990). Competence and procedures: Constraints on the development of logical reasoning. In W. F. Overton (Ed.), Reasoning, necessity, and logic (pp. 1-32). Hillsdale, NJ: Erlbaum.
Perkins, D. N., Farady, M., & Bushey, B. (1991). Everyday reasoning and the roots of intelligence. In J. Voss, D. Perkins, & J. Segal (Eds.), Informal reasoning and education (pp. 83-105). Hillsdale, NJ: Erlbaum.
Phillips, L. D., & Edwards, W. (1966). Conservatism in a simple probability inference task. Journal of Experimental Psychology, 72, 346-354.
Phillips, L. D., Hays, W. L., & Edwards, W. (1966). Conservatism in complex probabilistic inference. IEEE Transactions on Human Factors in Electronics, 7, 7-18.
Piattelli-Palmarini, M. (1994). Inevitable illusions: How mistakes of reason rule our minds. New York: John Wiley.
Pinker, S. (1997). How the mind works. New York: Norton.
Plous, S. (1993). The psychology of judgment and decision making. New York: McGraw-Hill.
Politzer, G., & Noveck, I. A. (1991). Are conjunction rule violations the result of conversational rule violations? Journal of Psycholinguistic Research, 20, 83-103.
Pollock, J. L. (1991). OSCAR: A general theory of rationality. In J. Cummins & J. L. Pollock (Eds.), Philosophy and AI: Essays at the interface (pp. 189-213). Cambridge, MA: MIT Press.
Pollock, J. L. (1995). Cognitive carpentry: A blueprint for how to build a person. Cambridge, MA: MIT Press.
Reber, A. S. (1993). Implicit learning and tacit knowledge. New York: Oxford University Press.
Reber, A. S., Walkenfeld, F. F., & Hernstadt, R. (1991). Implicit and explicit learning: Individual differences and IQ. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 888-896.
Reeves, T., & Lockhart, R. S. (1993). Distributional versus singular approaches to probability and errors in probabilistic reasoning. Journal of Experimental Psychology: General, 122, 207-226.
Rescher, N. (1988). Rationality: A philosophical inquiry into the nature and rationale of reason. Oxford: Oxford University Press.
Resnik, M. D. (1987). Choices: An introduction to decision theory. Minneapolis: University of Minnesota Press.
Reyna, V. F., Lloyd, F. J., & Brainerd, C. J. (in press). Memory, development, and rationality: An integrative theory of judgment and decision making. In D. Schneider & J. Shanteau (Eds.), Emerging perspectives on decision research. New York: Cambridge University Press.
Rips, L. J. (1994). The psychology of proof. Cambridge, MA: MIT Press.
Rips, L. J., & Conrad, F. G. (1983). Individual differences in deduction. Cognition and Brain Theory, 6, 259-285.
Roberts, M. J. (1993). Human reasoning: Deduction rules or mental models, or both? Quarterly Journal of Experimental Psychology, 46A, 569-589.
Rosenthal, R., & Rosnow, R. L. (1991). Essentials of behavioral research: Methods and data analysis (2nd ed.). New York: McGraw-Hill.
Ross, L., Amabile, T., & Steinmetz, J. (1977). Social roles, social control, and biases in the social perception process. Journal of Personality and Social Psychology, 35, 485-494.
Sá, W., West, R. F., & Stanovich, K. E. (1999). The domain specificity and generality of belief bias: Searching for a generalizable critical thinking skill. Journal of Educational Psychology, 91, 497-510.
Savage, L. J. (1954). The foundations of statistics. New York: Wiley.
Schick, F. (1987). Rationality: A third dimension. Economics and Philosophy, 3, 49-66.
Schick, F. (1997). Making choices: A recasting of decision theory. Cambridge: Cambridge University Press.
Schwarz, N. (1996). Cognition and communication: Judgmental biases, research methods, and the logic of conversation. Mahwah, NJ: Erlbaum.
Scribner, S., & Cole, M. (1981). The psychology of literacy. Cambridge, MA: Harvard University Press.
Shafir, E. (1994). Uncertainty and the difficulty of thinking through disjunctions. Cognition, 50, 403-430.
Shafir, E., & Tversky, A. (1995). Decision making. In E. E. Smith & D. N. Osherson (Eds.), Thinking (Vol. 3) (pp. 77-100). Cambridge, MA: The MIT Press.
Shanks, D. R. (1995). Is human learning rational? Quarterly Journal of Experimental Psychology, 48A, 257-279.
Shweder, R. A. (1987). Comments on Plott and on Kahneman, Knetsch, and Thaler. In R. M. Hogarth & M. W. Reder (Eds.), Rational choice: The contrast between economics and psychology (pp. 161-170). Chicago: University of Chicago Press.
Sieck, W., & Yates, J. F. (1997). Exposition effects on decision making: Choice and confidence in choice. Organizational Behavior and Human Decision Processes, 70, 207-219.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129-138.
Simon, H. A. (1957). Models of man. New York: Wiley.
Simon, H. A. (1983). Reason in human affairs. Stanford, CA: Stanford University Press.
Skyrms, B. (1986). Choice & chance: An introduction to inductive logic (3rd ed.). Belmont, CA: Wadsworth.
Skyrms, B. (1996). The evolution of the social contract. Cambridge: Cambridge University Press.
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119, 3-22.
Slovic, P. (1995). The construction of preference. American Psychologist, 50, 364-371.
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1977). Behavioral decision theory. Annual Review of Psychology, 28, 1-39.
Slovic, P., & Tversky, A. (1974). Who accepts Savage's axiom? Behavioral Science, 19, 368-373.
Slugoski, B. R., & Wilson, A. E. (1998). Contribution of conversation skills to the production of judgmental errors. European Journal of Social Psychology, 28, 575-601.
Smith, S. M., & Levin, I. P. (1996). Need for cognition and choice framing effects. Journal of Behavioral Decision Making, 9, 283-290.
Snyderman, M., & Rothman, S. (1990). The IQ controversy: The media and public policy. New Brunswick, NJ: Transaction Publishers.
Spearman, C. (1904). General intelligence, objectively determined and measured. American Journal of Psychology, 15, 201-293.
Spearman, C. (1927). The abilities of man. London: Macmillan.
Stankov, L., & Dunn, S. (1993). Physical substrata of mental energy: Brain capacity and efficiency of cerebral metabolism. Learning and Individual Differences, 5, 241-257.
Stanovich, K. E. (1999). Who is rational? Studies of individual differences in reasoning. Mahwah, NJ: Erlbaum.
Stanovich, K. E., & West, R. F. (1997). Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology, 89, 342-357.
Stanovich, K. E., & West, R. F. (1998a). Cognitive ability and variation in selection task performance. Thinking and Reasoning, 4, 193-230.
Stanovich, K. E., & West, R. F. (1998b). Individual differences in framing and conjunction effects. Thinking and Reasoning, 4, 289-317.
Stanovich, K. E., & West, R. F. (1998c). Individual differences in rational thought. Journal of Experimental Psychology: General, 127, 161-188.
Stanovich, K. E., & West, R. F. (1998d). Who uses base rates and P(D/~H)? An analysis of individual differences. Memory & Cognition, 26, 161-179.
Stanovich, K. E., & West, R. F. (1999). Discrepancies between normative and descriptive models of decision making and the understanding/acceptance principle. Cognitive Psychology, 38, 349-385.
Stein, E. (1996). Without good reason: The rationality debate in philosophy and cognitive science. Oxford: Oxford University Press.
Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of human intelligence. Cambridge: Cambridge University Press.
Sternberg, R. J. (1997). The concept of intelligence and its role in lifelong learning and success. American Psychologist, 52, 1030-1037.
Sternberg, R. J., & Gardner, M. K. (1982). A componential interpretation of the general factor in human intelligence. In H. J. Eysenck (Ed.), A model for intelligence (pp. 231-254). Berlin: Springer-Verlag.
Sternberg, R. J., & Kaufman, J. C. (1998). Human abilities. Annual Review of Psychology, 49, 479-502.
Stich, S. P. (1990). The fragmentation of reason. Cambridge, MA: MIT Press.
Stich, S. P., & Nisbett, R. E. (1980). Justification and the psychology of human reasoning. Philosophy of Science, 47, 188-202.
Takemura, K. (1992). Effect of decision time on framing of decision: A case of risky choice behavior. Psychologia, 35, 180-185.
Takemura, K. (1993). The effect of decision frame and decision justification on risky choice. Japanese Psychological Research, 35, 36-40.
Takemura, K. (1994). Influence of elaboration on the framing of decision. Journal of Psychology, 128, 33-39.
Thagard, P. (1982). From the descriptive to the normative in philosophy and logic. Philosophy of Science, 49, 24-42.
Thagard, P. (1992). Conceptual revolutions. Princeton, NJ: Princeton University Press.
Thaler, R. H. (1992). The winner's curse: Paradoxes and anomalies of economic life. New York: Free Press.
Tschirgi, J. E. (1980). Sensible reasoning: A hypothesis about hypotheses. Child Development, 51, 1-10.
Tversky, A. (1975). A critique of expected utility theory: Descriptive and normative considerations. Erkenntnis, 9, 163-173.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453-458.
Tversky, A., & Kahneman, D. (1982). Evidential impact of base rates. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 153-160). Cambridge: Cambridge University Press.
Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90, 293-315.
Vernon, P. A. (1991). The use of biological measures to estimate behavioral intelligence. Educational Psychologist, 25, 293-304.
Vernon, P. A. (1993). Biological approaches to the study of human intelligence. Norwood, NJ: Ablex.
Verplanken, B. (1993). Need for cognition and external information search: Responses to time pressure during decision-making. Journal of Research in Personality, 27, 238-252.
Wagenaar, W. A. (1972). Generation of random sequences by human subjects: A critical survey of the literature. Psychological Bulletin, 77, 65-72.
Wason, P. C. (1966). Reasoning. In B. Foss (Ed.), New horizons in psychology (pp. 135-151). Harmondsworth, England: Penguin.
Wasserman, E. A., Dorner, W. W., & Kao, S. F. (1990). Contributions of specific cell information to judgments of interevent contingency. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 509-521.
Wetherick, N. E. (1971). Representativeness in a reasoning problem: A reply to Shapiro. Bulletin of the British Psychological Society, 24, 213-214.
Wetherick, N. E. (1993). Human rationality. In K. Manktelow & D. Over (Eds.), Rationality: Psychological and philosophical perspectives (pp. 83-109). London: Routledge.
Wetherick, N. E. (1995). Reasoning and rationality: A critique of some experimental paradigms. Theory & Psychology, 5, 429-448.
Yates, J. F., Lee, J., & Shinotsuka, H. (1996). Beliefs about overconfidence, including its cross-national variation. Organizational Behavior and Human Decision Processes, 65, 138-147.

_________________________________________________________________

Footnotes

^1 Individual differences on tasks in the heuristics and biases literature have been examined previously by investigators such as Hoch and Tschirgi (1985), Jepson, Krantz, and Nisbett (1983), Rips and Conrad (1983), Slugoski and Wilson (1998), and Yates, Lee, and Shinotsuka (1996). Our focus here is the examination of individual differences through a particular metatheoretical lens--as providing principled constraints on alternative explanations for the normative/descriptive gap.

^2 All of the work cited here was conducted within Western cultures which matched the context of the tests. Of course, we recognize the inapplicability of such measures as indicators of cognitive ability in cultures other than those within which the tests were derived (Ceci, 1996; Greenfield, 1997; Scribner & Cole, 1981).
Nevertheless, it is conceded by even those supporting more contextualist views of intelligence (e.g., Sternberg, 1985; Sternberg & Gardner, 1982) that measures of general intelligence do identify individuals with superior reasoning ability--reasoning ability that is then applied to problems that may have a good degree of cultural specificity (see Sternberg, 1997; Sternberg & Kaufman, 1998).

^3 The Scholastic Aptitude Test is a three-hour paper-and-pencil exam used for university admissions testing. The verbal section of the SAT test contains four types of items: antonyms, reading comprehension, verbal analogies, and sentence completion items in which the examinee chooses words or phrases to fill in a blank or blanks in a sentence. The mathematical section contains "varied items chiefly requiring quantitative reasoning and inductive ability" (Carroll, 1993, p. 705).

^4 We note that the practice of analyzing a single score from such ability measures does not imply the denial of the existence of second-order factors in a hierarchical model of intelligence. However, theorists from a variety of persuasions (Carroll, 1993, 1997; Hunt, 1997; Snyderman & Rothman, 1990; Sternberg & Gardner, 1982; Sternberg & Kaufman, 1998) acknowledge that the second-order factors are correlated. Thus, such second-order factors are not properly interpreted as separate faculties (despite the popularity of such colloquial interpretations of so-called "multiple intelligences"). In the most comprehensive survey of intelligence researchers, Snyderman and Rothman (1990) found that by a margin of 58% to 13%, the surveyed experts endorsed a model of "a general intelligence factor with subsidiary group factors" over a "separate faculties" model. Throughout this target article we utilize a single score which loads highly on the general factor, but analyses which separated out group factors (Stratum II in Carroll's widely accepted model based on his analysis of 460 data sets, see Carroll, 1993) would reveal convergent trends.

^5 Positive correlations with developmental maturity (e.g., Byrnes & Overton, 1986; Jacobs & Potenza, 1991; Klahr, Fay, & Dunbar, 1993; Markovits & Vachon, 1989; Moshman & Franks, 1986) would seem to have the same implication.

^6 However, we have found (Stanovich & West, 1999) that the patterns of individual differences reversed somewhat when the potentially confusing term "false positive rate" was removed from the problem (see Cosmides & Tooby, 1996 for work on the effect of this factor). It is thus possible that this term was contributing to an incorrect construal of the problem (see Section 5).

^7 However, sometimes alternative construals might be computational escape hatches (Stanovich, 1999). That is, an alternative construal might be hiding an inability to compute the normative model. Thus, for example, in the selection task, perhaps some people represent the task as an inductive problem of optimal data sampling in the manner that Oaksford and Chater (1994, 1996) have outlined because of the difficulty of solving the problem if interpreted deductively. As O'Brien (1995) demonstrates, the abstract selection task is a very hard problem for a mental logic without direct access to the truth table for the material conditional. Likewise, Johnson-Laird and Byrne (1991) have shown that tasks requiring the generation of counter-examples are difficult unless the subject is primed to do so.

^8 The results with respect to the framing problems studied by Frisch (1993) do not always go in this direction.
See Stanovich and West (1998b) for examples of framing problems where the more cognitively able subjects are not less likely to display framing effects.

^9 Kahneman and Tversky (1982) themselves (pp. 132-135) were among the first to discuss the issue of conversational implicatures in the tasks employed in the heuristics and biases research program.

^10 Of course, another way that cognitive ability differences might be observed is if the task engages only System 2. For the present discussion, this is an uninteresting case.

^11 It should be noted that the distinction between normative and evolutionary rationality used here is different from the distinction between rationality[1] and rationality[2] utilized by Evans and Over (1996). They define rationality[1] as reasoning and acting "in a way that is generally reliable and efficient for achieving one's goals" (p. 8). Rationality[2] concerns reasoning and acting "when one has a reason for what one does sanctioned by a normative theory" (p. 8). Because normative theories concern goals at the personal level, not the genetic level, both of the rationalities defined by Evans and Over (1996) fall within what has been termed here normative rationality. Both concern goals at the personal level. Evans and Over (1996) wish to distinguish the explicit (i.e., conscious) following of a normative rule (rationality[2]) from the largely unconscious processes "that do much to help them achieve their ordinary goals" (p. 9). Their distinction is between two sets of algorithmic mechanisms that can both serve normative rationality. The distinction we draw is in terms of levels of optimization (at the level of the replicator itself--the gene--or the level of the vehicle); whereas theirs is in terms of the mechanism used to pursue personal goals (mechanisms of conscious, reason-based rule following versus tacit heuristics). It should also be noted that, for the purposes of our discussion here, the term evolutionary rationality has less confusing connotations than the term 'adaptive rationality' discussed by Oaksford and Chater (1998). The latter could potentially blur precisely the distinction stressed here--that between behavior resulting from adaptations in service of the genes and behavior serving the organism's current goals.

^12 Evidence for this assumption comes from voluminous data indicating that analytic intelligence is related to the very type of outcomes that normative rationality would be expected to maximize. For example, the System 2 processes that collectively comprise the construct of cognitive ability are moderately and reliably correlated with job success and with the avoidance of harmful behaviors (Brody, 1997; Lubinski & Humphreys, 1997; Gottfredson, 1997).

^13 Even on tasks with clear computational limitations, some subjects from the lowest strata of cognitive ability solved the problem. Conversely, on virtually all the problems, some university subjects of the highest cognitive ability failed to give the normative response. Fully 55.6% of the university subjects who were at or above the 75th percentile in cognitive ability in our sample committed the conjunction fallacy on the Linda problem. Fully 82.4% of the same group failed to solve a nondeontic selection task problem.

^14 A reviewer has pointed out that the discussion here is not necessarily tied to the mental models approach. The notion of searching for counter-examples under the guidance of some sort of control process is at the core of any implementation of logic.
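[A worked illustration of the normative rules at issue in footnotes 6 and 13 may help; the numbers below are those of the classic medical-diagnosis problem used in this literature (base rate 1/1000, false-positive rate 5%, perfect sensitivity assumed for simplicity), not figures reported in the target article itself. Writing H for "has the disease" and D for "tests positive", Bayes' theorem gives

\[
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \lnot H)\,P(\lnot H)}
            = \frac{1.0 \times 0.001}{1.0 \times 0.001 + 0.05 \times 0.999} \approx 0.02,
\]

i.e., about a 2% chance of disease given a positive test, which is why neglecting P(D/~H) is so consequential. The conjunction fallacy of footnote 13 likewise violates a one-line rule: for any statements A and B, P(A and B) <= P(A).]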
From checker at panix.com Thu Dec 29 02:31:10 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 28 Dec 2005 21:31:10 -0500 (EST)
Subject: [Paleopsych] Thomas Jansen, ed.: Reflections on European Identity
Message-ID:

Reflections on European Identity
Edited by Thomas Jansen
EUROPEAN COMMISSION FORWARD STUDIES UNIT WORKING PAPER, 1999

The contents of this publication do not necessarily reflect the opinion or position of the European Commission.

Table of contents

Preface, by Jean-Claude Thébault
The dimensions of the historical and cultural core of a European identity, by Heinrich Schneider
Consciousness of European identity after 1945, by Gilbert Trausch
European Identity and/or the Identity of the European Union, by Thomas Jansen
A contribution from political psychology, by Tom Bryder
What is it? Why do we need it? Where do we find it?, by Korthals Altes
European identity and political experience, by Mario Soares
How to define the European identity today and in the future?, by Ingmar Karlsson
European identity - A perspective from a Norwegian European, or a European Norwegian, by Truls Frogner
European identity - an anthropological approach, by Maryon McDonald
European identity and citizenship, by Massimo La Torre
From poetic citizenship to European citizenship, by Claire Lejeune
L'identité européenne comme engagement transnational dans la société (European identity as a transnational commitment in society), by Rüdiger Stephan
Security and a common area, by Adriano Moreira
Neither Reich nor Nation - another future for the European Union, by Roger De Weck
What does it mean to be a European? Preliminary conclusions, by Jérôme Vignon
Annex: A dialogue on unemployment between Truls Frogner and his Neighbour
List of contributors

Preface

The texts that have been gathered in the following pages were written or delivered during the "Carrefour Européen des sciences et de la culture"
which was held in 1996 in Coimbra. This event had been organised by the Forward Studies Unit in cooperation with the ancient University of Coimbra, whose academic excellence made this small Portuguese town so famous.

The Carrefours Européens aim to provide a forum where personalities from the world of science or culture can discuss and exchange their views with Commission officials. Participants come from different European countries to propound ideas on issues that are particularly important for the future of our continent. Each of them brings different experience and sensibilities and thus contributes to the openness and the richness of the reflection.

The debates that took place in Coimbra focused on understanding how the European identity expresses itself. Their richness is reflected in the following texts, which are at long last submitted to our readers in the deep conviction that neither their relevance nor their topicality has been lost.

A characteristic of European identity is that it facilitates, fosters and stimulates variety in modes of expression, form, content and approach. And it is clear that this same principle can be applied to the definition of this identity itself: several paths may lead to the recognition and the assertion of a European identity which is itself made of a plurality of ethnic, religious, cultural, national, or local identities. Each of the discussions that took place in Coimbra reflected, in its own way, this approach.

Both the University's rector and Marcelino Oreja Aguirre (the Commissioner in charge of communication, information, culture and institutional questions at that time (1995-1999)) highlighted three constituent poles of European identity. First, Europe is steeped in humanism and all the values that make up its heritage today. The second is European diversity: even if the construction of the Community seems to be a harmonisation process, this harmonisation is just a necessary step towards the realisation of a European market-place which should allow the underlying diversity to flourish. Diversity is truly Europe's richness. Finally, universalism is a European value and an obligation. At a time when Europe is sometimes tempted by the idea of becoming a "fortress Europe", this founding principle has to be constantly remembered and revived.

The debates gave further opportunities to put forward some key issues linked to identity, memory or nation. Thus, identity appears as two-sided: on the one hand memory and heritage, and on the other hand voluntarism and a project to be achieved. Contrary to what is usually thought, identity seems to be constantly evolving and changeable. All these reflections ended in a discussion on the theme of "Europe and its role in the World", and of its contribution to the promotion of peace and progress.

Marcelino Oreja had expressed the initial interest in a meeting such as this and had encouraged the Forward Studies Unit to organise it. The Commissioner's active participation contributed greatly to the intellectual and human success of the event. We now offer our readers these collected thoughts, for which we most warmly thank the participants, with the wish that they will cast light on a question that reaches right to the heart of the European political project.
Jean-Claude Thébault
Forward Studies Unit Director

-----------------

The dimensions of the historical and cultural core of a European identity
Heinrich Schneider

Preliminary remarks

The topic "dimensions of the historical and cultural core of a European identity" may appear to be a historical and theoretical one. However, it is political in its nature. It stands in the context of a political discussion. Obviously, it is a contribution to the assessment of new political projects of the European Community: on the one hand, a discussion of the role of the cultural heritage and the historical traditions of Europe in the formation of a political identity which will and should necessarily arise if the projects of "deepening" are to be successful; and, on the other hand, a discussion about the question: what is the significance of the common cultural and historical roots of those nations which belong to Europe, in view of the "widening" of the Community?

Historical reflections, theoretical reasoning, and scholars' analyses can help with the orientation of opinion and decision-making processes, but they cannot replace decisions about political goals. What we are really dealing with is the political identity of a European Union. What it should be has to be decided politically.

Problems of clarifying the terms: what constitutes identity?

Every now and then, politicians have talked about "European identity", but mostly without ever trying to explain its meaning!1 The term "identity" is used in the context of discussions on European identity as psychologists, sociologists, and students of civilisation apply it, not in the sense in which philosophers deal with the concept "identity" in logic or metaphysics.

Primarily, one talks about the identity, or the formation of identity, of an individual. Can we construct a concept of collective identity just as well? Perhaps as an analogy. But we must be careful in doing so. For all that: one does speak of the identity of social groups, and there is also the concept of the identity of larger social or historical units, for instance nations. However, we cannot possibly construct the concept of "European identity" in the same fashion as we perceive the group identity of Boy Scouts, or national identity. These models are not adequate, and thus we have to search for a more general definition.

Anyone in search of her or his identity will pose the question: "Who am I?". With regard to collective identity the questions are: "Who are we? Where do we come from? Where do we go? What do we expect? What awaits us?"2 But these questions really serve to clarify another, more fundamental one: why and how can we (or must we) talk in the first person plural? There are two common answers; one of them sounds as follows: "Because we want it that way!" The other one refers to certain things that we have in common: a common history, common views about our present situation, common projects for our future and the tasks that are facing us there...

1 Cf., for instance, the "Document on European Identity", adopted by the Ministers of Foreign Affairs of the member states of the European Community in Copenhagen, 14 December 1973.
In the lingo of sociologists, this means: it is the common "definition of a situation" which serves as a mutual link and creates solidarity.3 Identity is thus founded on "spiritual ties"; it can be grasped in a "core of shared meanings",4 in consensually sharing a common universe of symbols and relevancies.5 We do not only speak a common language; we also agree about the things that must be talked about, as well as the things that are important without words. This sharing of common values is not hanging somewhere in mid-air over our actual everyday life. Normally there are common societal conditions of life as well. Therefore, we also have to deal with the "sociological dimension" of the European common cause.

Our common "world of meanings" ("knowing about life") is one thing that we need in order to find our collective identity. Another one is delimitation as an element of identity. Knowing about myself also implies that I distinguish myself from others; identity is always based on negations, as Niklas Luhmann shows.6 Collective identity as well needs the distinction between "Us" and "Them". Nothing leads more effectively to the formation of group identity than a common enemy, according to those who do research on small groups. An analysis of nationalism shows that national identity is mostly defined through relating to "counter-identities".7

A third element is needed to constitute collective identity in the full sense of the word: the ability to act and to be responsible for one's action. Personal identity includes the capacity of independent action. Collective identity calls for, and implies, authorisation, which enables the collectivity to conduct collective action.8

2 This, by the way, is how Ernst Bloch begins his book "Das Prinzip Hoffnung", Vol. 1, Berlin 1954, p. 13.
3 In this context, the present situation also has a historical depth-dimension, and there is a perspective into the future.
4 Cf. Talcott Parsons, Politics and Social Structure, New York 1969, p. 292ff. This concept of collective identity is in accordance with Parsons's concept of individual identity being "the core system of meanings of an individual personality"; cf. Talcott Parsons, The Position of Identity in the General Theory of Action, in: Chad Gordon and Kenneth J. Gergen (eds.), The Self in Social Interaction, New York 1968, p. 14.
5 Cf. Peter L. Berger and Thomas Luckmann, The Social Construction of Reality. A treatise in the sociology of knowledge, Garden City and New York: Doubleday 1967.
6 "Alle Identität konstituiert sich über Negationen" ("all identity is constituted through negations"); cf. Niklas Luhmann, Sinn als Grundbegriff der Soziologie, in: Jürgen Habermas and Niklas Luhmann, Theorie der Gesellschaft oder Sozialtechnologie, Frankfurt am Main: Suhrkamp 1971, p. 60.
7 Cf. Orest Ranum, Counter-identities of Western European Nations in the Early-Modern Period. Definitions and Points of Departure, in: Peter Boerner (ed.).
8 Cf. Burkart Holzner and Roland Robertson, Identity and Authority. A Problem Analysis of Processes of Identification and Authorisation, in: Roland Robertson and Burkart Holzner (eds.), Identity and Authority, Oxford: Blackwell 1980, pp. 5ff., 10f., 18f., 22ff.

Aristotle already knew that, by the way: the identity of a pólis is primarily a constitutional identity, the "politeía", through which a community becomes a political subject, so to speak. It is founded on the "koinonía" of knowing about right and wrong (the "díkaion") as well as about what is beneficial or not (the "sýmpheron").
It rests on the solidarity ("philía") of people, and its political manifestation is a general consensus, "homónoia" as "philía politiké".9 Therefore, collective identity in the full sense of the concept implies a political dimension: collective identity formation tends towards the establishment of a polity.

Only against the background of this differentiation between the requirements and dimensions of collective identity does it make sense to pose more exact and detailed questions in order to find out what European identity is, what it can be, and what the possible impact of historical, cultural, and sociological components looks like. Some theses and problems have to be introduced, and some aspects that matter in our context are to be pointed out.

The primacy of politics

The first task we have to deal with is to find out whether "the European Community will be able to build up a 'European identity'", namely, under the present "new circumstances, now that the 'old' historical frontiers of the continent are reappearing". This language sounds clear enough; but the matter itself is rather complicated.

The "reappearance" of the "old historical frontiers of the continent" -- do we know what we are talking about? To quote Oskar Köhler: "Neither in a geographical sense nor in a historical view is there a 'static' definition of Europe".10 A lot has been said about the validity of that formula, "Europe goes from the Atlantic to the Urals". But Willem van Eekelen, the Secretary General of the Western European Union, has recently stated that "the whole of Europe..." ("Gesamteuropa") reaches "...from Vladivostok to San Francisco", and he is not the only one to say that.11 Statements of this kind do sound as if inspired by the experience of the CSCE process. But the most famous German XIXth-century historiographer of European politics, Leopold von Ranke, had already pointed out that America belongs to Europe; "indeed do New York and Lima concern us much more than Kiev and Smolensk"12 -- and we must bear in mind that Ranke, of course, saw the Russian Empire as part of the European system. Other authors took the same attitude; there is, for example, the definition of the European system of states as "the connection and interdependence of all European states and empires ... including the independent states that have arisen from the colonies of Europeans in America".13

9 Aristotle, Politics, book I chap. 2 and book III chap. 3.
10 This is the introductory sentence of Oskar Köhler's article "Europa", in: Josef Höfer and Karl Rahner (eds.), Lexikon für Theologie und Kirche, 2nd ed., 2nd printing, Freiburg/Br. 1986, col. 1187.
11 Ambassador Henri Froment-Meurice joined him in sharing this opinion, cf. "Europa im Aufbruch. Auf dem Wege zu einer neuen Friedensordnung", Protokoll des 91. Bergedorfer Gesprächskreises 199 , p. 29 and p. 34.
12 Leopold von Ranke, Geschichte der germanischen und romanischen Völker (1824), p. XXXIX; cf. Heinz Gollwitzer, Europabild und Europagedanke, München: Beck 1951, p. 279.
13 Karl Heinrich Pölitz, Die Staatswissenschaften im Lichte unserer Zeit (1824); cf. Gollwitzer, op. cit., p. 443.

On the other hand, there are much narrower definitions. When Winston Churchill gave his famous speech at the University of Zurich in 1946, in which he called for the creation of a kind of United States of Europe, he entertained no doubt that Great Britain must naturally be a friend and supporter of this new political entity, but of course not a member.
And the author of a well-known book about "The Limits and Divisions of European History" stated that the eastern border of the European community, in earlier times as well as today, has usually been the western frontier of Russia.14 This, of course, refers to modern times; in the Middle Ages, Europe's eastern borderlines were located much further westward. Where do we find those "reappearing 'old' frontiers of the continent"?

The controversy on how Europe is to be defined geographically is nowadays hardly touched by the question whether America ought to be included in the European identity; however, there is dissent whether Europe coincides with the occidental part of the continent, that is, whether the border between Latin and Byzantine civilisation can serve to delimit it, or should do so.

Now we have a whole series of problems. It cannot be denied that the schism between "East" and "West Rome" appears to be a symbol for a cultural demarcation. In the West, there was the struggle for supremacy between political and religious authorities, and in the dead corner between both of them the freedoms of the estates and urban autonomy could develop. As a consequence, "civil society" had more of a chance to spread out than in the East, where church government was integrated into the Empire, thus perpetuating ecclesiastical rule in the political order, or Caesaropapism. This had further outcomes; but there had also been other preconditions contributing to the different course of social and societal history, like the small-scale geography and the harbour-rich landscapes of many of the regions of Western Europe,15 as against the massive geographical structures of the East, and others.

- Surely, there was the great schism; but there was also suffering that arose from a common consciousness of a fundamental unity -- up to the Ecumenical Movement of our days.
- Even in the days of Peter the Great, Russians reached out for Europe. Were the East European Westerners of his days and of later times erring in their illusions? Can we deny that cultural and political identities are open to historical change, and that there have already been processes of "widening" of the extent and range of European civilisation?
- And, with respect to social and mental differences between different parts of what has been called our continent, is it not a constituent feature of the cultural uniqueness of Europe that opposites meet here, time and again, turning the task of ever-renewed conciliation into the principle of productive dynamic development?

14 Quoted from the German edition: Oskar Halecki, Europa. Grenzen und Gliederung seiner Geschichte, Darmstadt: Gentner 1957, p. 79.
15 Hans-Georg Gadamer speaks of "einer einzigen großen Hafenlandschaft, die für die Entdeckungsfahrten zu neuen Weiten förmlich aufgetan war" ("a single great harbour landscape that lay virtually open for the voyages of discovery towards new distances"); cf. Hans-Georg Gadamer, Das Erbe Europas, Frankfurt am Main: Suhrkamp 1989, p. 40.

I do not want to say that such "old frontiers" like those between the Latin and the Byzantine tradition are irrelevant. But how far Europe will reach tomorrow, or the day after tomorrow, or in the next century and later, cannot be looked up in a historical atlas of Antiquity, of the Middle Ages, of the XIXth century, or of the Cold War period in our century either.
Besides, the supreme representatives of the CSCE participating states adopted, in 1990, the Paris "Charter for a New Europe", and we can read in this charter that the new Europe extends as far as the reality of human rights and democracy, rule of law and pluralism, economic freedom, social justice, and the commitment to peace reaches on European soil. We all know, and have only recently again become painfully aware, that there is that discrepancy between what is and what should be, between what we want to do and what we achieve. But should it not be our common cause to realise and safeguard these principles of a European political order for all nations whose representatives have stood up for them? Can we deny this solidarity to those who wish to subscribe to this common European order -- wherever they may live in Europe? And is it possible to denounce the declaration of thirty-four heads of state and government in favour of a new "united democratic Europe" as a mere emptiness, a proclamation that cannot be other than untrue -- in view of the fact that even Albania has now joined these 34?

Certainly, there may be reasons for a narrower concept of the uniting efforts that have to be carried on during the years ahead of us -- such as political prudence may suggest. In Western Europe, governments and peoples might ask themselves whether the chances of organising European security within European borders may not be better if one declines responsibility for certain regions. It can be argued that the political and structural requirements for a certain kind of economic or political integration may indeed call for a restriction to certain areas, in order to be optimal. And there are many more such questions and considerations. Just one of these questions is the one we are dealing with: what would be the most favourable historical and cultural conditions for including parts of Europe in the Union-to-be, soon or later?

But, in my opinion, it would be unjustifiable to try to avoid all these reflections, not to discuss their ramifications, and to shun -- or to disguise -- political decisions by pointing out old historical and cultural borders. And indeed, if one were to stress that cleavage between ancient Latin and Byzantine culture, then the motherland of European political thought, Greece, ought not to have been accepted into the Community -- and the definite stand the Community has recently taken in favour of Yugoslavian unity would have been absurd... Hence, the primacy of politics should not be denied.

Options of the European Community

When speaking about the "reappearance of old frontiers" in Europe, some other aspects come to mind. What is really new in the European situation is the disappearance of "less old" frontiers. What allows the states of Central and Eastern Europe to "return to Europe" -- as they call it -- is in the first place the fact that the fatal barriers, the wall in Berlin, the barbed-wire obstacles and iron curtains, have been removed, and that people have been successful in overcoming totalitarian systems.

But along with the end of East-West polarisation, with the termination of the antagonisms of political organisation, some other "old frontiers" and controversies have reappeared. We face again the situation about which Karl Jaspers said, some decades ago, that Europe has got to make a choice between "Balkanisation" and "Helvetisation".
"Balkanisation" means a tangle of conflicts and hostilities, whereas Helvetisation" points to the attainment of a political identity across a multitude of national heritages and languages. The beginnings of the formation of the European Community, restricted to the six founding nations of the Coal and Steel Community and later the EEC had been initiated as such a process of "Helvetisation"., as a first step towards a confederation with an identity of its own. However, this policy was determined by some quite specific options. At first, things were started with a small community of states that intended integration ; but it was clear that this community could not identify itself with "Europe". The Community of the "European Six" was regarded by that organisation which considered itself as maid-servant to a union of European states--i.e. the Council of Europe--as a case of establishing "specialised authorities" for specific functional areas. In Strasbourg they thought that all such endeavours should always take place "within the frame of the Council of Europe" and thus being securely bound to the "proper European policy"(as the Council had conceived it). And yet this Council of Europe was in itself limited to only a part of the European states. As a representative of a European identity, it was some "pars pro toto", and the Community of the Six was some "pars partis". This changed in the course of time. In the Treaty of Rome, "the foundations of an ever closer union among the European peoples" (and not only those peoples that are directly involved) are mentioned. And in the Single European Act, the parliament of the Twelve is called the instrument of expression for the endeavours of "the European peoples" ; as simply as that. This implies that the political identity of the Community is to be further developed to become the political identity of Europe as a whole. If this is wanted, one cannot deny any European nation the right to participate in that political identity. Now, if today some 81 percent of the Hungarians, 79 percent of the citizens of the CSFR, and still 68 percent of the Poles have a positive attitude about the creation of the United States of Europe", and affirm that their own nation belongs to this future policy16, than the Community of the Twelve will have to reconsider what is to be done about the Community's own identity. Another decision of the "founding fathers" has been quite important. What the Community was all about originally, was to form an administrative union to manage the common coal and steel production as well as the distribution, notwithstanding the idea to use this union as a lever to promote political integration by creating interdependence of interests. Later, a widening was achieved in more than one dimension : the Community was extended to nine at first and then step by step to twelve member states. And the area of functions and policy fields was expanded, comprising now the whole of national economies and more and more common tasks up to a common foreign and security policy. The reason for this widening of functions and interests lies in the interdependence of policy areas. There is hardly a problem area which is not to be treated on the EC level. On the other hand, the states have not given up their spheres of responsibility, and they are still thinking (or 16 Cf. "Mehrheit im Osten f?r Vereinigte Staaten von Europa", in : Die Presse (23 April 1991), p. 22. dreaming) of their complete autonomy and "sovereignty". 
Thus, they try to keep under control what is happening. As a result, political processes on Community, national and "mixed" levels intertwine. Complex procedures of mediation and grey zones of responsibility evolve. There was talk of "traps of policy entanglement"17 and of "Eurosclerosis". The reforms initiated by Jacques Delors were aimed at breaking up these entanglements and the sclerosis of the Community. Since the EC is supposed to gain more freedom of action rather than simply retain its status, this will hopefully end in a strengthening of its political identity.

This becomes particularly clear in view of the goal to form a European Union of a federal character. Under this perspective, the Community can no longer be regarded as a system merely for co-ordinating the problem management of its member states, who so far try to push their own interests rather than jointly bear the common consequences of their interdependence. A federal union cannot be achieved without an established supranational authority to determine a common policy. And this does not only raise the problem of democratic legitimacy but also the question of political identity.

Thus, it is not surprising that the question of the political identity of the Community, and in particular of the European-Union-to-be, is posed anew. In the first place, the upheavals in Eastern Europe raise the problem of how the Community intends to define its own purpose with regard to the identity of the whole of Europe -- even more than, for example, the intentions of EFTA states to join the Community. Slogans like "centre of gravitation" or "anchor of stability" are no adequate answers to that. And secondly, "deepening" -- strengthening the polity character of the Community, transforming it into a "European Union" -- also implies the necessity to clarify identity problems.

In search of a definition of European identity

We have to find out what Europe has in common, historically and culturally, in order to define, to articulate and to strengthen its identity. If we are to do that, we should remember what the fundamental dimensions of a possible European identity are, according to the conceptual and theoretical explications I tried to give in the first part of this contribution:

- the "spiritual ties" as they are manifested in a common "world of meanings" (a "universe of symbols and relevancies"), as they allow a consensual "definition of the situation" to be achieved, including the three dimensions of a shared "today", "past", and "future";
- the "delimitation", knowing what is special about "our thing" as compared to other people's things ("nostra res agitur" -- not some "res alienorum");
- the ability to act and bear responsibility through authorisation and, thus, institutionalisation (which means, in consequence, polity building).

17 Cf. Fritz W. Scharpf, Die Politikverflechtungs-Falle. Europäische Integration und deutscher Föderalismus im Vergleich, in: Politische Vierteljahresschrift, vol. 26 (1985), p. 323ff.

What is primarily called for is obviously a "political identity" in the concise sense of the term -- a capacity which makes it possible to institutionalise common action, and a quality which provides an adequately wide and massive basis of consensus and loyalty. It may well be that remembering common historical and cultural roots, and activating consciousness of them, helps to strengthen this basis.
Yet one wonders why this historical dimension must be shoved into the foreground when the real issue is what Aristotle calls "homónoia" and what in our context might be called "European spirit" or "consciousness of a European common cause". To translate this into educational terms: can we, should we, make our efforts to form a European consciousness only by looking at the past, at our common history? Would it not be equally important to recall what Europe means today and will mean in the future?

A hypothetical answer is at hand: the matter is seen in the same way as it was seen in the last century, when national identity had to be formed. The formation of a national consciousness, however, came about under remarkably different circumstances.18

When the nation decided to take over the power of government -- as in the typical case of France -- the main thing was to create the political will and to keep it alive (in the "plébiscite de tous les jours", to cite the famous formula Ernest Renan found). "Res publica" was to replace "res regis". The case was different if an ethnic or national group wanted to emancipate itself from a supra-national or foreign regime (as in the case of "secessionist nationalism"), or if peoples who were convinced that they belong together wanted to break up the barriers between constituent states (as in the case of "integrational nationalism").

Whereas, in the first of the three typical cases, the state that shall be taken over by the nation does already exist, in both of the latter cases a state shall be created which does not yet exist. The representatives of the people's political will need a "metapolitical" justification. It must be explained why this state should exist. This explanation refers to the existence of a "cultural nation" that now wants and deserves to constitute itself politically.

Usually, the "meta-political" justification is given with a reference to history: in the past we, or our ancestors, descended from one family or tribe; or we grew together as a spiritual community; and we shared a common fate even in earlier times. Or even this: history has uncovered a common metaphysical substance which unites us in national identity -- Herder's doctrine of the "Volksgeist". In political reality, this idea serves efforts of make-believe in the service of a political will.19 It derives from religious doctrines and concepts which are given a new interpretation by being transferred into socio-political thinking.

18 The following remarks make reference to Theodor Schieder's triple typology of nation-state building in Europe, namely (I) the process of assumption of power over an existing state by the "nation", (II) the process of secession or separation of a "nation" from a multinational empire or state, and (III) the unification of -- up to then independent -- states whose peoples regard themselves as parts of one single "nation". This triadic typology is, in my opinion, more revealing than Friedrich Meinecke's famous distinction between "Staatsnation" and "Kulturnation"; but Schieder's idea is able to explain Meinecke's comparison. Cf. Theodor Schieder, Typologie und Erscheinungsformen des Nationalstaates in Europa, in: Historische Zeitschrift, vol. 202 (1966), p. 58ff.
19 Cf. Raymond Grew, The Constitution of National Identity, in: Boerner (ed.), op. cit., p. 31ff.

To give an outstanding example: the originally theological concept of the "corpus mysticum", that is, the
community of the faithful who find their identity in Christ's "pneuma", in which they eucharistically and spiritually participate, is transferred onto the nation, whose members are spiritually bound together by their participation in some metaphysical substance, which Herder called "Volksgeist". It is only later that such notions lose their "mystical" (or mythological, or pseudo-theological) character, so that the nation then (and, we might say, only then) becomes a "community by common culture and disposition through having shared a common fate".20

If today a political unification is to be attempted, for instance a European Union, and if we all, perhaps without much reflection, still see the paradigm for the creation of a political identity in the way nation states were formed, then we must suspect that the idea of a "cultural Europe", which would have the same function as the idea of a "cultural nation", will here be conjured up.

I do not want to say that one might dismiss the idea of a European cultural identity and the quest for its historical roots as nothing but ideology, as a mere construction to serve a political purpose, as, for example, Geoffrey Barraclough did.21 Indeed, there is a "fundamentum in re": there is a European spiritual and cultural identity; it would lead too far astray if I were to quote the witnesses for that -- from Ernst Robert Curtius to Denis de Rougemont, from Arnold Toynbee to Hendrik Brugmans.22 But reminding ourselves of the names of such authoritative scholars does not dispense us from the effort to identify at least some substantial contributions to what we might call "European spirit". What is meant to be represented by these centres of experience and of thought? And what has been further developed from the achievements those keywords refer to?

It is difficult to answer such questions, for several reasons. One of them is the fact that the "fundamentum in re" of European spiritual and cultural identity is characterised by an agreement to disagree, a "concordantia discors", as Jacob Burckhardt called it, a common cause with sometimes lots of antagonism. Yet there are achievements and experiences imprinted in a common memory that constitute common understandings and stand in the background of such political declarations as the "Charter of Paris", conjuring so emphatically an identity of spirit and will.

There are problems both of principle and of method which have to be faced if one tries to reconstruct and to explain them: that of the "hermeneutic circle" and of the inevitably subjective and specific perspective, as well as that of the criteria for an adequate selection of sources, etc. We cannot deal with these problems here in extenso. So we just turn to the "authorities", to the specialists of information.

There is plenty of general agreement about the most important and significant issues -- maybe not perfect, but considerable consensus. After all, the historical and cultural identity of Europe has been an interesting topic for a long time, and many have taken part in this discussion. At least, there is an agreement about the most important historical eras, what their message is today and what should be kept alive in the

20 Otto Bauer regards the nation as a "Kultur- und Charaktergemeinschaft" (a community of culture and character), based on common historical experiences ("Erleben und Erleiden des Schicksals", the living and suffering of a common fate); cf. Otto Bauer, Die Nationalitätenfrage und die Sozialdemokratie, Wien 1924.
21 Cf. Geoffrey Barraclough, Die Einheit Europas als Gedanke und Tat, Göttingen: Vandenhoeck & Ruprecht 1964.
"collective memory" of Europeans.

22 See the contribution by Hendrik Brugmans in this volume.

In this context, phenomena, issues, and essentials like the following are named23 :
- Extra-European and "pre-European" achievements that were significant stimulators of European culture, i.e. the impact of ancient Egypt on pre-classical and classical Antiquity and, above all, the tradition of the Old Testament.
- Classical Hellas : the Greek tradition of the "polis", the "civilisation" of social life and the Greek understanding of politics, which was to have such a deep influence all over Europe ; the "discovery of the mind" ; the idea of "paideia" and thus of humanness ; the evolution of philosophy--the beginnings of the critical cognition of reality, that is, the Pre-Socratic thinkers, the classical philosophers Plato and Aristotle--and the creation of the various genres of European literature.
- Rome as Republic and Empire : the idea of the "res publica", Roman law, the "virtutes", the Roman answer to Greek philosophy (Cicero, for example).
- Christendom as a creative power in Europe : the surpassing of reality through God's salvatory work ; the idea of the "corpus mysticum" ; the several types of Christian attitudes in the mundane world ; the relativity of secular power ; the construction (or discovery) of the concept of "person" in christological thought and dispute ; the interrelation of religious orientation and secular order, of political power and church authority--with a view to the different developments in the Latin and Byzantine empires and their consequences for the forming of their societies--and the importance of Christian social doctrine.
- The laying of the foundations of "Occidental culture" after the "Völkerwanderung", the role of Benedictine monasticism, the "Regnum Europae" of Charlemagne.
- The "Second Awakening of Europe" (Albert Mirgeler) in the Middle Ages ; the controversy between "regnum" and "sacerdotium" ; the struggle for "Libertas Ecclesiae" ; the intellectual disputes over the recognition of authorities (the establishment of the "studium" as an institution, the rise of scholastic philosophy and of the universities) ; and the rediscovery of the "inner mind" (mysticism). The inclusion of Middle and Eastern Europe in Western European culture.
- The dawn of modern times : Schism, the growth of towns and municipal self-government ; Renaissance and Reformation ; the striving for religious freedom ; the building-up of the territorial state ; the development of a bourgeois economy ; the construction of a European state system and the growth of its dynamics of power ; the expansion of Europe into other continents.
- The Enlightenment, the emancipation of the middle classes, the great revolutions in England, America, and France, and their intellectual foundations : human rights, basic freedoms, civil society, and representative government.
- The political ideas and movements of the 19th century : liberal and democratic progressism, conservatism, socialism, and imperialism ; idealistic and materialistic philosophies as well as the new critics of civilisation, society, and the inner life (Marx, Nietzsche, Freud) ; finally, the movements for emancipation in the dynastic empires.
- The age of world wars, totalitarianism, and the efforts to overcome it.

23 The list of phenomena, issues and essentials is in particular influenced by the author's subjective view. But as it is intended to be nothing more than an impulse for discussion, it can do without references--which would have to be very extensive--to the corresponding literature.
Once more, there are many questions with respect to such an outline. Do we recognise in this landscape summits of the first, second, and other orders ? Are there essentials that are either continuously effective or slowly rising in an evolutionary process ? Maybe with regard to the concept of man (personality, the call to freedom and solidarity) ; further, in view of the productive collision of involvement and distance, of mundane responsibilities and transcendental calling, of harmony and antagonism ; and also in the ranking of the individual before the cause, in the development of attitudes of "critical loyalty" and broken affirmation, in the combination of tolerance with firmness of conscience, and so on... But is it possible at all to present more than subjective opinions or convictions as far as such questions are concerned ? Furthermore : Is it possible to draw a precise and adequate picture of the relations between transnational developments, structures and movements on the one hand, and the particular contributions of nations, ethnic or religious groups, and regions on the other ? Does there exist, in this sense, a truly "European" "historical image", reflecting indeed the contribution of all the nations and groups that make up the Community of Europe, and will this image continue to be understood (at least by the more sensible contemporary minds) as a common cultural obligation ? I think nobody would be able to present a definite answer to such and similar questions. The meaning of the European heritage and of the living European spirit can only be actualised and made effective through a permanent effort of intellectual realisation of its components and elements. This effort must take place in the form of a dialogue and discourse through which we expose ourselves to what affects us and calls upon us, in order to widen and deepen our understanding and to activate motivational strength.

Integration : colonisation of the world we live in as subversion of identity ?

Posing once again the question of the meaning, the function and the importance of a "meta-political" identity of Europe today and tomorrow, we now do so from a different perspective. For matters might be taken too easily by simply identifying common heritages and then leaving the business to the mediators of a European consciousness, say teachers, textbook authors, or journalists. Is all that we have recalled perhaps only a heritage losing its formative power, as some contemporary theoreticians want us to believe ? Jürgen Habermas has asked whether "complex societies" can form "a reasonable identity" at all.24 He says that this is only possible in a process of communication taking place under the conditions of an "ideal form of life", free of any domination. All other "knowledge" about identity would be unreasonable and could only be a mystification of conditions with which one need not identify.

24 Jürgen Habermas, Können komplexe Gesellschaften eine vernünftige Identität ausbilden ?, in : Jürgen Habermas, Zur Rekonstruktion des Historischen Materialismus, 3rd ed., Frankfurt am Main : Suhrkamp 1982, p. 144ff.

Niklas Luhmann disputes Habermas's question. The "intersubjectivity of cognition, experience, and action created by symbolic interpretation and value systems" is, in his opinion, not apt to integrate modern societies.
It cannot satisfy the "requirements for the control of highly differentiated societal sub-systems".25 The idea that political order has anything to do with spiritual sharing, that politics receives meaning from the conception of a common cultural heritage, is, in his eyes, a totally outmoded notion (a case of "false consciousness"). Habermas insists that a humane life must be governed by "communicative reason". But he diagnoses a fatal discrepancy between the demand for a reasonable identity and those trends in modern development which he assumes are manifest especially in the process of European integration. Along with the increasing rationalisation of social life, the integration of societies is more and more carried on "through the systemic interaction of specified functions".26 The control of social processes works through "speechless media of communication", through exchange mechanisms like money in the economy and through mechanisms of power in the sphere of politics. And while these control systems were for a long time embedded in a normative framework according to the "Old-European" tradition of a common weal, where there was communication about necessary and appropriate actions in terms of common sense and philosophy, it then came to a "mediatisation" and, finally, to the "colonisation of the life-world".27 Those spheres in which individual and collective identity may find themselves and may realise themselves are now occupied and exploited by the politico-economic control organisations, which use power and monetary incentives in order to keep societal life going. Morality and culture are being robbed of their substance and, thus, cultural identity becomes obsolete. Seen through such glasses, European integration as it has been in process for the past forty years would appear as a gigantic and typical example of the deliberate promotion and acceleration of just such a development : the take-over of power by a rationally functioning macro-organisation that combines governmental and economic interests to control interdependencies. Habermas would be able to formulate his diagnosis--primarily made about the modern state--more precisely with respect to the EC system : the utilisation and instrumentalisation of conceptions of cultural identity and of public political discussion in order to legitimise what will be done anyway through calculated interest and power bargaining ; the substitution of democratic decision-making by relations between welfare administrations and their clients ; the transformation of the rule of law into an instrument for organising interest-controlled systems of regulation ; and finally, the "make-believe of communicative relations" in the form of rituals in which "the system is draped as the life-world".28 This might appear as a caricature, and Habermas has indeed met with decided protest. His thesis of the reduction of politics to systems control is shrewd, but it rests on rather fundamentalist premises. If it has been brought to attention here, it is primarily because our discussion may well need a thorn in the flesh, so that we do not take things too easily on the subject of cultural identity and the building of a polity out of the EC system.

25 Niklas Luhmann, quoted by Habermas, op. cit.
26 Jürgen Habermas, Theorie des kommunikativen Handelns, vol. II, Frankfurt am Main : Suhrkamp 1981, p. 175.
27 Ibid., p. 240, p. 470f.
28 Ibid., p. 472, p. 476, p. 536ff., p. 567.
But there is still another reason for taking such theses and discussions into consideration. In spite of all exaggeration, a very meaningful question arises, one that makes our special topic particularly relevant : How is it possible, while the European Community is developing, to secure the political identity through which alone the "meta-political" components and dimensions of identity obtain their full significance as well as their motivational relevance ? It looks as if political actors and political scientists have asked us to identify the historical and cultural potential, so that we may produce and promote European consciousness, because they expect some contribution to the progress of political community-building and polity-formation for the benefit of a European Union which is to be deepened and widened. But a complementary perspective exists, too. In the framework of European integration, it is necessary to strengthen the structures and processes for the articulation of a truly political self-understanding and for a process of conceiving and comprehending the tasks with which Europe is confronted. Only if these processes take place will our "spiritual and cultural properties" play a significant role in our joint endeavours to solve problems and to meet the challenges of our time and of the days to come. Therefore, we need efforts to create a political identity for a uniting Europe--if for no other reason, then at least in order to counter trends which tend to make the content and substance of our meta-political traditions politically irrelevant. The reality of politics and policies is more than a complex system of functionalist management of socio-economic interdependencies and power relations. It is also a field of communication and interaction between human beings, groups, communities, regions, and nations about what is important, what is meaningful, and what should be done and pursued. Through this process of communication and interaction, a common identity is formed. This is also true in the field of European co-operation and integration. In the humanistic tradition of our European civilisation, it has been passed on from the philosophers of the Greek "polis" to the outstanding thinkers of our time that politics always means two things : to make possible what is necessary (Paul Valéry), and to find agreement on what is real (Hugo von Hofmannsthal). Both will help to create, to keep alive, and to give expression to a European identity.

Consciousness of European identity after 1945
Gilbert Trausch

The question of Europe's identity can be looked at from many angles within the perspective of this Forum - that of post-1945 Europe and, even more specifically, that of the European Community. Sociologists, political scientists and philosophers have all made interesting contributions - highly theoretical, as can be expected, given the academic disciplines in which they work. A theoretical approach is particularly apt for the question of European identity because, in the final analysis, Europe is a 'construction of the mind' (J. B. Duroselle). However, we must not stifle the voice of history. This is a discipline that is kept in check by two rigorous parameters - time and space. What is true for one region is not necessarily true for another, and what is acceptable at one time is not always acceptable at another. I mention this because historians construct facts from documents of all kinds.
The constant need to bear this in mind sometimes clips their wings and stops them getting carried away. Marc Bloch called them 'those nasty little facts which ruin the best hypotheses'. A historical approach to European identity after 1945 inevitably brings us to the conditions in which the European Community was born. No reasonable person would deny that the sense of a shared identity was and still is a major stimulus in the quest for a closer union. However, the disturbing fact remains that European integration only became a reality after 1945, with the creation of the OEEC, the Council of Europe, the Brussels Treaty Organisation and, above all, the European Communities (from 1950). Robert Schuman's appeal on 9 May 1950 in Paris was translated into action, while Aristide Briand's in Geneva on 7 September 1929 fell on deaf ears. Both were French Foreign Ministers and therefore influential men, and both addressed their appeals to German politicians at the highest level who were very open to Europe, Gustav Stresemann and Konrad Adenauer. So why did Europe take off in 1950 and not in 1929? The philosopher Jean-Marie Domenach hints at an answer when he says that the European Community was born not of Charlemagne but of European nihilism. He uses Charlemagne to symbolise Europe's identity. Many historians think that we can speak of Europe from the time of Charlemagne, who is referred to in certain documents of that time as 'Pater Europae'. But for Domenach, the jolt which finally induced the Europeans to unite more closely was the havoc wreaked by the two great totalitarian systems of the 20th century: Marxism-Leninism and National Socialism. The Gulag and Auschwitz were seen as the last warnings before the final catastrophe. The figures are clear and chilling. First World War: 10 million dead; Second World War: 55 million dead (including 45 million Europeans). If this geometrical progression were to continue, the next step would be an apocalyptic Third World War. In other words, the European Community emerged in response to the challenge posed by two ideologies which were born in Europe from a shared cultural heritage. How can Europeans be united? Basically, there are only two possible approaches: political and economic. And where should we start? This was a question that already exercised Aristide Briand. When, in 1929, he called for the creation of a United States of Europe, he proposed to start with economic unification. One year later, in a memorandum submitted to 26 European governments for their opinion, he shifted his stance and backed a political approach, the reason being the Wall Street Crash, which had changed the situation. Briand thus played it by ear, without a precise idea of the path to be taken or the objective to be attained. In this he differed from Jean Monnet, who had clearer ideas on both the end and the means. The same questions arose after 1945. Although it was clear that the two approaches should be kept separate, it was felt that there was no reason why progress should not be made on both fronts simultaneously. This is what the Europeans did in the years 1947-49 with the OEEC and the Council of Europe. The result was hardly encouraging, even though the two organisations did manage to group together almost all the states of Western Europe, because they were confined to the framework of simple cooperation between countries, without any transfer of sovereignty.
An attempt to move forward on the economic front - negotiations for an economic union between two countries (France and Italy) or five countries (with Benelux) under the name of Finebel - was to fail (1948-50). In the spring of 1950, Jean Monnet realised that the political path was closed, because the European countries remained strongly attached to their political sovereignty. Having learnt his lesson from the failure of Finebel, and not impressed by Adenauer's proposal for a Franco-German economic union (23 March 1950), Monnet opted for the economic approach, but on a smaller scale: a common market in coal and steel. This option had a number of consequences. Jean Monnet expected that this first 'pool' (coal and steel) would lead to others (agriculture, energy, transport) and hence, gradually, to a genuine common market. This prediction was to come true, but only after forty years or so, which is probably longer than Monnet reckoned. Monnet also believed that this economic approach would eventually be followed by political unification. In this respect, events proved his hopes wrong. The attachment to national sovereignty in the world of politics (security and foreign policy) has turned out to be more tenacious than anticipated in 1950. By launching the process of European integration through the economy, Jean Monnet - no doubt unwittingly - defined its identity for several decades. The European Community which, with its fifteen countries, is starting to represent Europe as a whole, is perceived essentially as an economic entity. However, men (and women), being creatures of flesh and blood, do not easily identify with economic indicators, quotas and compensatory amounts. The failure of all attempts to create a common foreign and security policy (the European Defence Community and the planned European Political Community, 1951-54; the Fouchet Plan, 1961-62) and the less than binding nature of the Maastricht Treaty provisions explain why the European Union continues to be perceived by ordinary people as an economic machine. It is difficult, in these circumstances, to see it as the expression of a common destiny. Jean Monnet's proposal for a coal and steel community, put forward by Robert Schuman, was a response to a multi-faceted challenge. Like everyone else, he was aware that Europe could not continue to tear itself apart, or it would end up disappearing completely. Also, Europe's difficulties over the last hundred years had always started in the form of a Franco-German conflict, so it was here that action needed to be taken: to make war between France and Germany 'not merely unthinkable but physically impossible' (declaration of 9 May 1950). This is why the French appeal of 9 May was addressed first and foremost to Germany. The two world wars were to some extent Franco-German wars, at least when they started, and can thus be seen from a similar angle to the 1870 war. This explains the determination of many Europeans to reconcile the French and the Germans and bring them closer together. Jean Monnet understood more clearly than others that Europe's future depended on France and Germany. Like it or not, the European Community has been built around France and Germany. If monetary union comes to fruition in the next few years, it will happen again around these two countries. Jean Monnet's game plan - to make the Franco-German axis the motor of Europe - could not be achieved unless Germany played along too, in other words unless it aligned itself with the western political model for good.
It had to be kept from the 'temptation to swing between West and East' (Jean Monnet, 16 September 1950) and therefore had to be solidly attached to a host organisation. Neither the OEEC nor the Council of Europe, with their loose structures, could take on this role, but the ECSC fitted the bill. The European Community, along with other organisations such as NATO and the WEU, thus became a way of resolving the German question.

The effects of the Cold War

The appeal of 9 May 1950 was also a response to the challenge of the Cold War, which created a new situation in which Europe was not so much a player as an object manipulated by non-European players (the USA and the USSR). Jean Monnet had no difficulty in accepting the Atlantic Alliance, which was essential in order to ensure Western Europe's security. However, he felt that it had helped to fossilise mindsets and create a 'rigidity of thought'. Thus 'any proposal, any action is interpreted by public opinion as contributing to the Cold War' (note of 1 May 1950). Monnet believed that a Community as he conceived it could break out of the Cold War mould, which was not the case for the Atlantic Alliance. He thought that the ECSC could incorporate West Germany without raising the question of rearming it, which he still felt (at the beginning of May 1950) would provoke the Russians. The Korean War (25 June 1950) was responsible for overturning this kind of thinking. German rearmament was put on the agenda. Very rapidly, the ECSC became the model for a European Defence Community. In fact, throughout the first phase of European integration, from the OEEC through the ECSC to the EEC, Western Europe was subjected to a whole set of Cold War-related pressures which had a direct impact on the integration process. There was American pressure, which could be described as positive in that it encouraged the Europeans to unite. American diplomacy pushed the Europeans to come closer together economically and politically, though it was understood that a united Europe must remain open to American influences and products. The pressure was also positive in the sense that it did not impose any specific solution on the Europeans. In the case of the OEEC, for example, the United States would have preferred a more integrated solution than the one finally chosen on Britain's initiative. Similarly, the first British application to join the EEC (1961) owed a great deal to American encouragement. The same cannot be said for pressure from the USSR. It felt it was not in its interest for the Europeans to unite in opposition to it. Its policy thus aimed to divide the Europeans and to separate Europe from the United States. Thanks to its impressive military apparatus, which its acquisition of atomic weapons in 1949 rendered credible, it was able to put pressure on Europe - indeed virtually blackmail it. In the Cold War climate which set in from spring 1947, the Europeans lived in fear of the USSR, a fear to which Paul-Henri Spaak gave full rein in a famous speech. The Brussels Treaty, the Atlantic Alliance and the WEU, and also the ECSC and the EDC, were a response to the negative pressure from the USSR. The process of European integration is inseparable from the climate created by the Cold War. Throughout its history, the European Community has been very sensitive to international developments. The Korean War had a positive effect on the ECSC negotiations and the beginnings of the EDC, but the death of Stalin and the ensuing détente affected the EDC negatively.
In the autumn of 1956, the preparatory negotiations for the Treaties of Rome were heading for an impasse, after wide-ranging last-minute demands made by France, when they were finally saved by the events of Suez and Budapest, which reminded Europeans how weak they were. In periods of tension the Europeans close ranks, and in periods of détente they loosen their ties. Overall, the process of European integration has to be seen in the Cold War context. To push the image to its provocative extreme, one could say that the European Community is Stalin's baby. Only when they were forced to did the European countries agree to the surrender of sovereignty which characterises the Community. One can imagine only too clearly the consequences that the end of the Cold War may have on European integration. The effects of the Cold War can also be seen in many other areas, particularly that of political institutions. Between the wars, the democratic countries suffered a period of profound crisis, which explains the rise of fascist dictatorships and authoritarian regimes (central Europe and the Baltic and Balkan countries). Where democracies did survive, they were weakened and discredited by major scandals. After 1945, however, western-style democracy became the political system par excellence, fully adopted by the nations of Western Europe. The last bastions of authoritarian regimes - fascist or semi-fascist - fell one after the other (Greece, Spain, Portugal). The rule of law and respect for human rights which became established in Western Europe contrasted with the communist model. Confronted by a regime which claimed to have history on its side and to be both politically and economically more successful, European democracy was obliged to furnish daily proof of its excellence and superiority. The example of the Federal Republic of Germany in its face-off with the other Germany illustrates this situation. The East German regime became a foil for the resounding success of the Bonn democracy. The flourishing health of western democracy is not unconnected with the creation of the welfare state after 1945. The social insurance system goes back to the 19th century, with considerable differences between countries. However, it is the English model, developed during the Second World War, which was to become the source of inspiration for the other countries of Western Europe. Within one generation it had become the norm, and the differences between countries diminished, even though the extent of provision was not the same everywhere. The welfare state model stopped at the iron curtain. Beyond it, social protection was certainly well developed, but the philosophy underlying the system was different. The weakness of the command economy explains the mediocrity of the services provided. Basically, the welfare state is a characteristic of Western Europe, different from both the communist system and the American system. The fact that this model is now under threat, and that some are arguing for the American model, has particular historical significance in view of Western Europe's identity as it has been constructed, in particular through the European Community, over the course of the last forty years.

The Carolingian image

In its quest to unify in the aftermath of the war, Western Europe was to take various forms based on different institutional approaches and different concepts. There would be the European Community, EFTA, etc. Opposite, there was another Europe: the Europe of Comecon and the Warsaw Pact.
However, it was the smallest of these configurations, the six countries which formed the ECSC, which was to dominate. Gradually, slowly but inexorably, the Community took on - or usurped, depending on the point of view - the name of Europe. It is easy to understand the irritation of some, such as the Scandinavians or the Swiss, on seeing the word 'Europe' increasingly applied to the Community during the 1960s, a usage which successive enlargements have only reinforced. The Community is thus at the root of one of the concepts of Europe. For 22 years, until the first enlargement in 1972, it was this little Europe of six countries which incarnated Europe's identity. Right from the start, one could see the historical imagination set in motion. Very quickly, commentators and journalists started talking about a Carolingian or Lotharingian Europe. It is true that the map of the six founding countries of Europe covered almost exactly the same area as Charlemagne's empire. In both cases, the Elbe formed a border, even a barrier, against the barbarian tribes - or the communist countries. Of course there was no causal link between the two constructions, separated by eleven centuries. This was a mythological projection, but one that was popular for a long time because the historical connection seemed so irresistible. Clearly, calling it a Carolingian Europe stresses western Christianity's role in founding Europe. The force of the image led some people to speak of the Community as a Europe of the Vatican. Be that as it may, the fact remains that the six countries which were the first to launch themselves into the European adventure are still seen as the spearhead or core of the European Union. They seem more committed than the others. And they are destined to be the heart of a future monetary union. It is all the more distressing, therefore, that one of them (Italy) has to stay on the sidelines, forced to do so by the Maastricht criteria. This essay deliberately leaves aside the question of European identity in terms of culture and civilisation. Few observers contest the fact that Europe has a cultural identity, formed over the centuries, encompassing the diversity of national cultures. But this identity may not be as clear-cut as some would see it, and it is blurred at the edges: Europe's borders have always been problematic. Beyond this cultural identity, which the elites have recognised since the Middle Ages, but which has not stopped the Europeans constantly and mercilessly tearing each other apart, the period since 1945 has seen the emergence of several Europes, born of the convulsions of the First and Second World Wars. Only one of these Europes has managed to establish a public image - the European Community - and even that took four decades. The Community only really entered public consciousness in the member countries with the Maastricht Treaty and the public controversy which it generated.

European Identity and/or the Identity of the European Union
Thomas Jansen

When speaking of "European identity" one needs to state what exactly is meant, as each of these words taken individually may be ambiguous and confusing. The "European" identity we are seeking to outline here is that of the European Union, the word "identity" being understood to mean the spirit of this community, indeed, the very source of its cohesion. In so doing, we assume that both the European Union as an organisation and its tangible manifestations, policies and achievements are expressions of that identity.
It is incumbent on the European Union as a political and democratic organisation to ensure that its citizens and peoples not only understand but actually espouse the spirit of the Union if they are ultimately to identify with it. Indeed, the Union's very ability to survive, grow, act and succeed in its endeavours depends on it.

The factors of European identity

Let me first recall the basic factors of European identity in a broader sense, which even a precise definition cannot dissociate from that of the European Union. For, even if since its inception the European Union has never embraced more than a part of Europe, its vocation still relates to Europe in its entirety. And the historical, cultural, social and political components and factors of European identity which bind the continent together, east, west, north and south, will certainly increase in importance as the Union grows larger.

Historical Factors

Ever since the early Middle Ages, all political processes in Europe have been interconnected. There gradually arose a complex system of relations between tribes and peoples, dynasties and classes, states and empires, which, in a context of constant change, became ever more intricate and refined. Systems of domination and counterbalance arose and collapsed as a result of recurrent wars, only to be followed by fresh attempts to build empires or peace settlements. Just as nations are defined as communities of destiny, it can also be said of Europe as a whole that a shared history over many centuries has given rise to a differentiated yet in many respects interconnected and mutually dependent community of destiny. Proximity and the shared nature of both individual and collective experience have fashioned a special relationship between the peoples of Europe which, whether consciously or unconsciously, has had the effect of forging an identity. Even in places where togetherness gave way to antagonism, where proximity resulted in demarcation or where coexistence deteriorated into rivalry and ultimately war, shared experience has left a deep imprint on Europeans. Likewise, the very causes of the wars in this as in previous centuries sprang from intellectual currents simultaneously at work everywhere in Europe.

Cultural Factors

The shared historical experience is underpinned by a considerable degree of cultural unity of which, paradoxically, diversity has been a constituent part. This diversity has common roots, i.e. it is the outcome of a combination of the Mediterranean Greco-Roman culture, which contributed the sum experience of the ancient world as a conservative and stabilising element on the one hand, and the continental Germanic-Slavonic culture, which contributed the dynamic, youthful and forward-looking component on the other. The decisive catalyst in this synthesis was Christianity. The European world which emerged from this process during the Middle Ages never lacked awareness of its unity. Likewise, in modern times and even very recently, this awareness has always survived despite the bloodiest of wars waged in the name of national differentiation or of opposing nationalist or ideological aims.

Social Factors

Not least because of its cultural unity, in which any differences can be seen as so many aspects or individual expressions of a shared background, Europe developed into a single area in social and economic terms as well.
Despite all the typical differences between its diverse regions, a similar pattern of economic development served as the basis on which social life progressed along similar lines everywhere. A significant part was played here by a highly developed trading system involving large-scale exchange of goods, labour and know-how. It formed a large internal market which, despite the restrictions imposed by the upsurge of nationalism in the 19th century, flourished up until the First World War. Symmetrical social development in the regions of Europe was matched by a simultaneity of social crisis and radical change, and then in turn by the formation of social groupings or classes predisposed towards transnational identification, thus creating the conditions in which the integration rooted in historical developments and a common culture could take hold. A radical break in this movement towards social integration occurred only with the division of Europe into two fundamentally different economic and social systems after the Second World War, a period from which Europe is only now beginning to recover.

Political Factors

History since the Second World War has shown that the intellectual and cultural strengths of the Old World are far from exhausted. The fact that the Europeans adopted a critical stance towards their history but at the same time opened up to stimuli from the new worlds of America, Asia and Africa, and the fact that they ultimately responded to the challenge of Communism, also impelled them to develop a new self-awareness. The European identity expressed in that new self-awareness is characterised by a marked drive for organised action which, now that the Central and Eastern European nations in an act of self-liberation are reuniting with the nations of western Europe, is confronted with new challenges. The open democratic societies did not succumb to the threats or enticements of Socialist revolution and its claims to march in step with history. On the contrary, they succeeded in maintaining and developing their attractiveness. They emerged strengthened from all economic, social and cultural crises. In the North Atlantic Alliance, they were able jointly to organise their security. Lastly, in the European Community, a significant group of democratic states created a model of peaceful cooperation, peaceful change and unity which exerts an extraordinary power of attraction throughout the world.

National unity of the states and political unity of Europe

The European Union is a young and still incomplete community, composed nonetheless of old communities. Its Member States still possess a fairly strong identity. It is therefore only natural that, in seeking to define an appropriate way of expressing a European identity that appeals to the public, we should ask how the identity of the Member States expressed itself when (in the 19th century or before) they were still in their infancy.
The unity of the Member States as they came into existence was based mainly on :
- a common language and culture, or common cultural and linguistic bases ;
- a common experience of history, which could even encompass the experience of mutual antagonism between different sections of what has now become one nation ;
- one economic area, with neighbourhood markets developing right across the region ;
- a shared need for security against external threats.
Similar factors go to explain the process of European integration and the emergence of a supranational European Union :
- the experience of history acquired by the peoples and states of Europe both in war and in peaceful exchange ;
- common cultural bases, even if their expression has been diverse ;
- economic necessity and shared practical interests within a market which transcends the national and continental framework ;
- the setting of limits in relation to an enemy power which poses a threat to freedom and integrity (the USSR with its aggressive ideology and totalitarian regime).
Just as the factors referred to with regard to the formation of the nation state did not all affect all participants in equal measure, not all of the population feels equally inspired or convinced by the foregoing justifications with regard to the European Union. It will nonetheless be observed that it is these common factors which, now as then, influence the decisions of the political, social and intellectual elites. And now as then we see amid those same elites sizeable minorities, and occasionally even majorities, of Luddites who, unwilling to relinquish the past, reject any identification with new contexts and find arguments for their ideas which are heard and believed by a certain section of the population. These are all socio-psychologically explainable transitional phenomena which arise in the definition of a new European identity (including the difficulty of expressing this identity in an appropriate fashion) or in the search for a European awareness which transcends national awareness. To see them as problems specific to European unification would be to approach them from the wrong angle. For it is clear that changes in political and social circumstances do not always immediately result in a change in awareness. Only when new circumstances are perceived as realities do we adapt our thinking and planning accordingly. The time lapse between the appearance of the new and its perception is attributable to the fact that the old continues to coexist in parallel with the new for a while, or even permanently. As a result, awareness continues to revolve around the old and therefore barely notices the new. The debate on the feasibility or non-feasibility of supranational or transnational statehood or democracy offers prime examples here. The lack of identity of the young, new and constitutionally not yet established community known as the "European Union" is also accompanied by certain problems of legitimacy which its institutions in particular have in projecting and asserting themselves. However, if one compares these problems with the similar problems of the Member States and their constitutional situation, they can be seen to be quite obviously commonplace phenomena with which all communities have to contend, regardless of the level at which they are established. In this respect, the problems at the various levels may perhaps be connected :
- the weaker a nation's self-awareness, the less problematic is its European awareness ?
- the weaker the confidence in the system of the nation state, the greater the hope placed in the European institutions ?

The absence of a consensus on the constitution

This is a practical problem, and one which confronts politicians with practical tasks. It manifests itself in the deficit of legitimacy with which the authorities have to contend every time they want to make innovations whose advantages are not always immediately apparent, given the time it may well take for results to be produced, whereas the disadvantages, whether short-term or medium-term, real or imaginary, have to be taken into account at once. For any political project to gain acceptance it is therefore important, indeed indispensable, for its meaning to be clear, its components visible, and its effects foreseeable. If the European project is to succeed, then it is crucially important for it to be understood. But what does the European project entail ? A Union organised on federal principles and endowed with a democratic political system which, through its institutions and laws, guarantees internal and external security and which takes on major tasks, beyond the capabilities of individual Member States, in a manner accepted by the public as serving its interests. However, in defining the project, we see at once that the project thus defined does not enjoy the support of all the participants. There are governments, parties, parliamentary factions and important social and cultural groupings which want to achieve a different project. Their European project is based on another idea, for example : cooperation between a group of states which agree on institutions and procedures to perform jointly defined tasks, case by case, but without submitting to the discipline of a democratic and federal system. In other words, there is no consensus on the "finalité politique" of European integration, and this above all makes it difficult to establish and give expression to the European identity. For the European Union remains the unfinished practical expression of an ultimately undefined project. It is therefore more process than project ; it is the blueprint for a product, the real shape of which remains undecided. Equally undecided is the geography of the Union. Where does it place its borders ? There is no consensus here either. The dilatory treatment of Turkey's desire for Union membership is proof of this, as are the difficulties in agreeing an enlargement strategy with respect to Central and Eastern Europe. And then there is the fact that we have become accustomed to seeing certain challenges as the most important motives for the unification of Europe : the establishment of an enduring peace between the participating nations, the reconstruction of a devastated continent, the reacquisition of a role in international decision-making, the defence of freedom against totalitarian Communism, the guaranteeing of a democratic future, and greater and more widespread prosperity. As European integration policy achieved results, these motives gradually faded into the background ; and since the watershed year of 1989, it has become clear that the European Union needs new motivation. This does not mean that all the original reasons and motives for the policy of European unification have become obsolete. They retain, albeit in a different context from before, a certain reality content.
This is true even if they no longer carry the same weight as in the 1950s and up until the 1980s, because :
- the process of rebuilding Europe from the ruins of the war has long been completed ;
- the peace between those nations of Europe which took part in the integration process is today guaranteed by the existing set of institutions ;
- the Soviet-Communist regime has collapsed ;
- democracy has established itself in all European countries and can be regarded as secure ;
- the aim of more widespread prosperity has been achieved to an unparalleled degree ;
- Europe can regard itself once again as a leading player and partner on the world stage.
The question as to what makes it necessary to take integration further, now that the most important goals have been achieved, is therefore warranted ; it challenges us to define and explain the new objectives and motives, in order thereby to give appropriate and perceptible expression also to the identity of the European Union.

The new tasks

The new challenges confronting Europeans now and in the future arise from various developments :
- the process of unification itself, which has generated a dynamic through which the responsibilities of the European Union have increased substantially and certain reforms of its political system have become indispensable, since it will otherwise be incapable of performing the tasks entrusted to it ;
- the collapse of the Soviet Union and the accompanying end of a bipolar world order based on two mutually opposed superpowers ;
- the technological and industrial developments which are giving rise to new ways of living, working and operating all over the world.
Many of the individual measures enacted in the decades since the Second World War can be seen as preludes and pointers to the changes of recent years. However, we are only now becoming gradually aware of their full implications. New situations are arising, which we attempt to conceptualise when we talk, for example, of the "globalisation of the economy" or the "information society". In coming to terms with the new situation, Europe will above all have to face up to the following challenges :
- the renewal of European society ;
- the development of a democratic and workable constitutional order ;
- the enlargement of the Union to include the countries of Central and Eastern Europe ;
- the creation of a new world order in line with technological, scientific and social change.
The European nation states cannot rely on their own discretion and devices to carry out these tasks alone. For the challenges involved are directed at the entire Union. They can therefore only be properly addressed through the combined effect of the contributions of the individual states to the united action of the Union of European states, and through the added value of joint effort.

The Renewal of European Society

There are in Europe various competing models for the most effective and fairest social order. They are inspired by differing national concepts and traditions of social organisation and social life ; even regional characteristics can be discerned, finding expression for example in the differences between Northern European (more Germanic Protestant) and Southern European (more Roman Catholic) societies. Nor must we ignore the influence which ideological and political convictions have exerted on societies in the individual European countries : Conservative and Liberal, Socialist and Christian-Social ideas have all left clear, distinguishable traces.
And yet, we can now ascertain that over the decades, thanks to a common cultural foundation, a broad consensus has formed on a model which corresponds more closely than others to the vital needs and circumstances of Europeans. The differences between this European model and that of American society are striking, not to mention the models which underlie the societies of certain East and Southeast Asian industrialised market economies. What are the main features of this European model of society ? Its central feature is what in Germany is called the "Soziale Marktwirtschaft", i.e. a "social market economy" which allows market forces full scope whilst subjecting them to a framework of rules designed to prevent abuse, satisfy basic social needs and provide a minimum of social security. The consequent solidarity and stability also make for greater freedom of the market ; the efficiency gained as a result makes it possible to supply the necessary resources for social welfare and security. This model is being called into question and is now in jeopardy. More precisely : on the one hand, the excessive growth of the social security system over the years has disrupted the balance between individual responsibility for the whole and society's responsibility for the individual ; on the other hand, the pressure of competition accompanying the globalisation of the economy and of communication has meant that, to safeguard jobs in "Enterprise Europe", substantial cutbacks have had to be made in the social security systems, together with radical reforms in the way they operate. Ultimately, this twofold threat to the European model represents a virulent attack on the philosophy which underlies it ; the motives behind the attack are partly ideological, partly conditioned by interests, and its aim is to eliminate the social dimension. The European Union would lose an essential component of its identity if it failed to withstand this attack. The agreement on social policy between the Member States (with the exception of the United Kingdom) appended to the Maastricht Treaty was a first important step. The Commission White Paper entitled "Growth, Competitiveness, Employment", endorsed by the Union in the autumn of 1994, contains a programme for the safeguarding and reshaping of the social and economic order of the Union. The aims of this programme are likewise served by the proposal for an Economic and Monetary Union, in particular its establishment in stages and the definition of a sound financial situation as a preliminary requirement for the introduction of a single currency, and by the consolidation of the single market in the large frontier-free European economic area. The reform programme which underlies the policy of the European Union is sustained, moreover, by the confidence that the peoples of the old world, who have emerged from the tribulations of repeated fratricidal wars and the humiliation of totalitarian repression, have lost neither their capacity for innovation and creativity nor their historical and cultural experience, and therefore possess all the assets needed to remain competitive in the global context.

The Development of a Workable Constitutional Order

The identity of a political community finds its noblest expression in its internal order, i.e. in its constitution. However, it is precisely in this respect that the European Union is defective. The first item on the agenda for the years to come is therefore the revision of the treaties in which the institutions, procedures and rules of the Union are rooted.
It is generally agreed that the Intergovernmental Conference entrusted with the reform (of the treaties or constitution) should serve to bring the European Union closer to the people by making it operate more efficiently and openly. The Union should raise its profile, and its activities should become more understandable. It is clear that the expectations placed in the Intergovernmental Conference, which must be measured in terms of the major developments dependent upon its outcome (enlargement, monetary union, etc.), can only be fulfilled if the conference aims at the establishment of a federal and democratically legitimate structure. Federation could give expression to what is inherent in the European Union : namely, unity in diversity. At the same time, as a prerequisite for the definition of identity, this would answer the unresolved question of the "finalité politique". Given the complex circumstances of the integration process in the Union, only a democratic order offers the possibility of tackling the pressing practical and political problems with any hope of success on the one hand, and of giving meaning to what we call Union citizenship on the other.

The Enlargement of the European Union

The historical watershed of 1989 confronted the European Union with a new task which will keep it occupied until well into the next millennium. After initial reticence, attributable to widespread unease about the new uncertainties as well as to misunderstandings and a resultant distrust between the partners, there is now a general consensus that every effort must be made to incorporate the states and peoples of Central and Eastern Europe as Members of the Union as soon as possible. There are many justifications for this : historical, moral, social and, not least, the fact that this is the only way of ensuring lasting economic and political stability and peace in this region. The Union already treats the states of Central and Eastern Europe as future members, and more and more systematic efforts are being made to achieve what in previous decades was no more than a dream : namely, the unification of all of Europe in peace and freedom. Indeed, the establishment of the conditions for the enlargement of the Union is in full swing : in the individual applicant countries as well as in the Union itself. A strategy of preparation for membership has been drawn up in cooperation with the governments concerned. Important stages on this road of standardisation and harmonisation are the association agreements with the Central and Eastern European countries which, through these agreements, have moved politically closer to the Union. The economic and trade provisions and the connected assistance arrangements afford them the material and practical wherewithal needed to prepare for membership. If, however, the future members of the Union have to be capable of accession, then the Union itself must become capable of enlargement. Thus, if it is to remain open to all European nations which can claim a historical and cultural right to belong to it, it must also solve the problems connected with a major enlargement from 15 to foreseeably 27 and perhaps even 30 Member States : the political and institutional problems, the economic and social problems, and also the financial problems, the solution of which will demand substantial additional solidarity on the part of the Union's present Members.
A considerable leap in self-awareness could be made if this process of political deepening and geographical enlargement were handled successfully, not least because the name "European Community/Union" has always suggested the encompassing and representation of Europe in its entirety. The closer this ideal comes to being achieved, the easier it will be to bridge the credibility gap.

The Establishment of a New World Order

Lasting economic and social stability is also vitally important for the European Union from the point of view of the Mediterranean area. It is therefore in the Union's interest, indeed it is its duty, to help create the conditions for peaceful development in this region. The Mediterranean Conference of November 1995 provided the impetus for a new inter-relationship based on partnership which not only satisfies present requirements but marks a fresh start compared with the centuries of cultural and religious conflict which have characterised relations between Europe and the Mediterranean region in the past. The readiness of the European Union to face up to its responsibilities with regard to the Mediterranean area and Central and Eastern Europe (and moreover in relation to Russia and the Commonwealth of Independent States) is substantiated by large-scale development aid and development cooperation in the Third World. It indicates a growing role for the Union as an actor in the international order. It has the capacity for this thanks to :
- its success in establishing its own order, representing, historically and structurally, an international order pacified in a lasting manner by democracy and federalism ;
- the strength which it derives from the united action of its Members.
More unity, and above all more unity deriving from democratic decision-making procedures, will lend the Union greater weight and greater credibility in this role ; to achieve such unity, further advances need to be made in the establishment of its internal order and the strengthening of its capacity for external action. The establishment of the European Community nearly fifty years ago was also a contribution to the creation of a more just and peaceful world order. Its endowment with democratic institutions and with instruments for the common definition and implementation of policies in an increasing number of areas, and in particular its development into a European Union with a common foreign and security policy and a single currency, only becomes really meaningful if it is understood as a structural component of a "world federation", i.e. of a process which leads, via the organisation of large continental groups of states and a radical reform of the United Nations, to a world order based on subsidiarity. That is not to say that the integration of the European states and societies is not in itself also a high-ranking objective, for in the past it has led to the pacification and reshaping of Europe, increased economic prosperity and guaranteed social progress ; in the future, through the corresponding effects of enlargement to Central and Eastern Europe, it will extend this development to those parts of Europe which have hitherto been unable to take part in it. At the same time, European integration remains the basis for the effective discharge of all the major cross-border tasks entrusted to Europe. However, in the context of world history, the unification process in Europe aims further than the construction of a Union.
More precisely, the stability achieved through the process of building the Community, together with the instruments of peace devised in this process and the prosperity existing here, are all factors which oblige Europeans to assume responsibility in and for the world. This involves more than development aid and active concern for human rights or the protection of the global environment. It also involves the shaping of an institutional and legal framework for world progress, a worldwide economy, worldwide transport, worldwide communication, the ecology of the world and worldwide politics in its various branches. The European Union will be in a privileged position to submit and implement proposals to this effect on the basis of its own experience, if in the years to come it succeeds in giving expression to its identity by successfully defending its societal model through renewal, giving an effective form to its political system and at the same time finding optimum solutions for its geographical enlargement.

A contribution from political psychology
Tom Bryder

"Europeanisation", meaning the political unification or integration of Europe as we have recently come to think of it, is a relatively new phenomenon. More precisely, it refers to attempts at creating a European federal union, a distinct entity in relation to its surroundings. To those surroundings, such as people in the former colonies or in the United States, "Europeanisation" has a different meaning from that revealed by the integration perspective. Edgar Morin (1990, p. 20) says that "it is difficult to perceive Europe from within Europe." From the outside it is often associated with expansive tendencies such as "European cultural imperialism" (in the former colonies) or "cultural snobbism" (in the United States), that is, a colonisation of the minds of people outside Europe, in Africa, Asia and America alike. Somewhat paradoxically, it is difficult to distinguish "Europeanisation" as such from what we, in Europe, sometimes call "Americanisation" or "American cultural imperialism." The difference for the political order, however, seems to be a matter of quantity and authenticity. Critics of "Europeanisation" so conceived, such as the francophones and German visionary intellectuals like T. W. Adorno, search for a European identity free of such connotations. Apart from this ingroup-outgroup aspect of "Europeanisation", we must deal with the ongoing processes by which a European identity evolves, if it exists, or whether it is emerging. How is it created, sustained and dispersed? To what extent and in what respect can we characterise the formation of a European political identity as an outcome of learning, memorisation and information-retrieval processes? To some people, particularly the contributors to the French intellectual debate on the future of Europe, the contradiction between technocracy and meritocracy on the one hand, and democracy on the other ("Eurocrats" versus "Europe des citoyens"), poses the major challenge to the process of a politically unified Europe.29 It is, for example, presented as the end of minority rule in general by Wolton, who says (1993, p. 95) that "the passage from a technocratic Europe to a democratic Europe marks the end of the reign of the minority." It is an expectation resembling the classless society of Marxism. Wolton (1993, p. 232) adds that this debate is more widespread than is claimed here: "The theme of the 'European technocracy' is omnipresent in every country."
Conceptualisations and definitions

Let me first mention some definitional issues that might be helpful in a search for appropriate conceptualisations of identity. According to Webster's: 1a. sameness of essential or generic character in different instances; or 1b. sameness in all that constitutes the objective reality of a thing; or 2. unity and persistence of personality; or 3. the condition of being the same with something described or asserted. Le Nouveau Petit Robert (1993, p. 1122) is somewhat more exhaustive: 1. the character of two identical objects of thought, qualitative or specific identity (cf. similitude); the identity of one thing with another, of one thing and another; identity of views (cf. community). 2. the character of that which is one (cf. unity). 3. in psychology, personal identity, the character of that which remains identical to itself; the psychological problem of the identity of the self; identity crisis. Cultural identity: the set of cultural traits proper to an ethnic group (language, religion, art, etc.) which confer on it its individuality; an individual's feeling of belonging to this group (cf. acculturation, deculturation). Psychologists and psychoanalysts say that identity equals "the sense of one's continued being an entity distinguishable from all others" (Rycroft, p. 68). As Rycroft also says (ibid.): "The sense of identity is lost in fugues and perverted in schizophrenic delusions of identity in which, typically, an underlying sense of nonentity is compensated for by delusions of grandeur." A fugue designates a process by which an individual loses her or his sense of destiny and location. In psychoanalysis, fugues are classified as instances of hysterical behaviour and cited as examples of dissociation of consciousness. They typically arise out of role confusion, when an individual cannot cognitively handle the information she or he faces. A transposition of psychoanalytical concepts to a figurative political language, I believe, may create some fruitful associations which can assist us when we try to explain, for example, disintegrative processes in central and south-eastern Europe, or integrative processes in Western Europe. Taking a preliminary view of what identity is from the psychoanalytic description, we may consequently look at "identification" as: "The process by which a person either (a) extends his identity into someone else, (b) borrows his identity from someone else, or (c) fuses or confuses his identity with someone else. In analytical writings, it never means establishing the identity of oneself or someone else." (Rycroft, p. 67) The expression "to identify with" bridges an individual identity and a shared identity ("I", "me" and "we", "us"), that is, some kind of "social" or "political" identity.

The place of identity in modern political research

In modern political science (cf. Lasswell, 1965), identity is usually treated as an element in a "political perspective," the other major components being "demands" and "expectations." Probably influenced by sociological role theory (which is wider in scope than psychological identity theories, since it incorporates behaviour as well as thought and emotional processes), some authors seek a solution to identity uncertainty in the concept of multiple identities. But who should determine what these identities should be like? The concept of identity cannot be patented by any traditional political-sociological group.
It is not part of the traditional ideological quest for a distinct political vocabulary, as revolutionary socialists tended to believe before World War I. As Wolton says (1993, p. 48): "Identity, nation and tradition are not 'right-wing' values; they belong to all political families, and there is a Eurocratic conformism in demonising these words." As a matter of fact, the dynamism of a pluralistic and democratic conception of political identity presupposes that multiple-identity pragmatism need not be present at the individual level of analysis at all, but only at the social level, in the form of choice options (Wildavsky, 1987). From a theoretical point of view, the lack of hierarchical priorities among identity objects may lead to the kind of psychological state called fugues, described above. Mixed or uncertain political role conceptions are not the same as cultural pluralism and may eventually lead to hypervigilance (psychological distress), decision evasion and paralysis. Territory, language, ideas, culture and history may all serve as objects with which we wish to establish notions of political identity. But which objects are of primary, of secondary or of lesser importance to the citizens of Europe? Which objects are necessary and which are sufficient for the establishment of a notion of European identity? In the French debate, the opposition between objects of identity is basically seen as a conflict between "modernism" and "voluntarism," not between social classes or party alignments. Modernism is seen as creating a link between identity and nationalism, and "voluntarism" is seen as creating a link between identity and history. Moreover, the construction of the new Europe, according to the French debate, does not simply mean a democratisation of the technocratic Europe which has been the foundation of previous attempts to integrate Europe politically, economically and culturally, but a radical break away from both the modernistic and the voluntaristic "paradigms" (Wolton, 1993, p. 67). The cardinal issue revolves around the opposition between democracy and totalitarianism. This issue re-emerged when the Communist menace disappeared around 1990. What, then, are the attitudes of the general public towards the European Common Market of yesterday, as it was usually referred to in the 1980s, and the European Union of today and tomorrow? Should decision-making in Europe be confined to the approximately 50,000 Eurocrats, or extended to the 343 million citizens? If the Eurocrats, as a caste, are indispensable in the process of European integration, how do we ensure that they are made accountable to democratic institutions and that they take considerate attitudes towards the citizens of Europe? What should the role of national parliaments and the European Parliament be in the future? With the present tendency to transfer power from government(s) to markets, what will the scope, weight and domain of political power in the political system of Europe be in the future? Let us first take a look at the objects of identification, and see if they provide us with adequate criteria for choice and commitment.

Geographical criteria

What first comes to mind when trying to outline what it means to be a European is, perhaps, Europe as a geographical unit. Political systems such as the Italian, the French or the Danish political system all embrace a notion of territory.
So important is this that Max Weber made territory a major component of his definition of what a state is. But how do we establish where the boundaries of Europe are? Should Greenland be included if we look at the map as it was before Greenland gained autonomy (Hjemmestyre)? The Faeroe Islands? Madeira? The Canary Islands? Cyprus? Malta? Uzbekistan?

Linguistic criteria

In France it is sometimes maintained that "linguistic fragmentation is... constitutive of European identity" (Wolton, 1993, p. 84). At the same time, the practical problems of the language barriers are acknowledged (ibid.): "Europe's main problem is the absence of a common language, with insoluble problems of communication, notably in Brussels and in the Parliament. Indeed, of the 13,000 officials at the Commission, 1,700 are translators, that is, 2 translators for every 13 officials." Many people see this lack of linguistic unity as an indication of how difficult it is to unify Europe: "Europe is also a crossroads of languages, since forty-three languages are spoken there, to varying degrees." (Wolton, 1993, p. 17) What about English? Many people in most European countries, however defined, speak English. But so do many people in America and Australia, and as a native language of a European state, English is not spoken by as many people as is, for example, German. Moreover, French, Italian and Spanish are strong competitors within the European context. So language cannot easily be used as a common denominator for establishing a unified sense of European identity. Still, as Edgar Morin points out, English may very well be used as a working language without the creation of an Anglo-Saxon cultural hegemony (1990, pp. 232-233): "Europe runs no cultural risk in English becoming its principal language of communication. Did English not serve as the language of communication between the various Indian cultures and ethnic groups without corrupting them, without devaluing the regional languages, without superimposing an English identity on the Indian identity? The use of English, accompanied by a knowledge of two other European languages, would moreover have the advantage of facilitating communication with the rest of the planet."30

Cultural-Ideational criteria

One can, of course, assume that lifestyles, traditions and behavioural patterns within some European territory, more or less arbitrarily defined, constitute a "European culture." But even within nation states it is dubious to speak of specific political cultures, since other criteria such as class, urban versus rural, north versus south, and the like tend to give more explanatory power to the notion of "political culture." The political culture of the British working class is definitely different from that of the middle class and the gentry, the political outlook of farmers in rural Holland definitely differs from that of city dwellers in The Hague, Amsterdam and Rotterdam, and northern Italian conceptions of politics are very different from those held by the population of Sicily and Naples. And as the two World Wars in this century have shown, Marx was definitely wrong in believing that the working classes of the world had so much in common that they would prefer class to nation as a chief object of identification.

30 Others, like Wolton (1993, p. 162), are more cautious and less optimistic: "Post-national identity is the means of constructing this identity, resting on adherence to democratic, communicational political cultures which attribute a definite influence to exchange and which notably sidestep the problem of language. How can experiences be communicated without a common language?"
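As an arithmetical aside on the staffing figures Wolton quotes above (my own check, not part of his text): 1,700 out of 13,000 is about 13 per cent, or roughly one translator for every 7.6 officials, so the quoted "2 for every 13" only works out if the translators are set against the remaining 11,300 officials. A minimal sketch of the calculation:

    # Check the ratio implied by Wolton's figures: 1,700 translators
    # among 13,000 Commission officials in total.
    translators = 1700
    officials = 13000

    print(translators / officials)    # 0.1308 -> about 13% of all staff
    print(officials / translators)    # 7.65 officials per translator

    # The quoted "2 translators per 13 officials" (2/13 = 0.1538) matches
    # only if translators are compared with the *other* officials:
    print(translators / (officials - translators))   # 0.1504, close to 2/13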
Analytical criteria

If a political perspective reflects aspects of political cultures, and if identity is a necessary element of a political perspective, then it follows that we must give further consideration to political culture. At a rather high level of analytical abstraction, Wolton argues that one can intuitively speak of culture in three senses. In the first place, as an opposition to nature, that is, as the result of human labour. In the second place, culture can be seen as that which unifies a people or ethnic group and which allows us to distinguish cultures from each other. In the third place, finally, culture can be seen as "high culture," as implied when we speak of being cultivated, familiar with literary traditions and art, etc. In Europe, all three notions have always co-existed at the same time (Wolton, 1993, p. 312). Yet there were dynamisms and developments, as Laqueur has pointed out (1970, p. 344): "With all its vitality, post-war European culture faced grave problems. The stultifying effects of mass culture, the standardisation of the mass media, the commercial production of cultural goods, constituted an insidious danger which in this form had never existed before. At the other extreme there were the futilities of an esoteric, precious, often sterile 'high culture', divorced from real life and from people, a dead-end rather than a narrow pass on the road to new cultural peaks. Culture had become less spontaneous and far more costly..." Trying to relate these common-sense notions to the debate on European political culture, Wolton says that empirically there are three national approaches with ingredients borrowed from these notions (Wolton, 1993, p. 312):

- The first, "French" sense insists on the idea of the work, of creation. It presupposes an identification of what is considered cultural, in terms of heritage and creation, of knowledge and learning.
- The second, "German" sense is close to the idea of civilisation. It is the whole body of works and values, of representations and symbols, of heritage and memory, as they are shared by a community at a given moment of its history.
- The third, "Anglo-Saxon" sense is more anthropological, in that it insists on ways of life, everyday practices, day-to-day history, everyday styles and forms of knowledge, images and myths.

Historical criteria

To the extent that we wish to speak of a common European historical destiny, we find more competition, rivalry, strife, war and other forms of non-co-operative behaviour than forms of co-operative behaviour. In an attempt to summarise the results of a historical survey of Europe's origins, Morin (1990, pp. 22-23) says that: "Europe dissolves as soon as one tries to think of it in a clear and distinct way; it fragments as soon as one tries to recognise its unity. When we try to find it a founding origin or an untransmittable originality, we discover that there was nothing proper to it at its origins, and that there is nothing of which it today has the exclusivity."
In this sense, it seems inappropriate to speak of the long-term historical origins of a European identity, which, according to Webster, Le Petit Robert and the psychoanalytical definition alike, would have to denote a form of sameness. In the period before World War II, the term Europeanisation tended to express the effects on Australian, Asiatic, American and African cultures and civilisations of the peculiar civilisation that grew up in modern Europe, including what we today call Eastern and Central Europe, as a consequence of the Renaissance, the Calvinist and Lutheran Reformation and, later on, the industrial revolution. As George Young wrote in the Encyclopedia of the Social Sciences (1937, p. 623): "Europeanisation may be expressed politically by imposing the idea of democracy, in the sense of parliamentary and party government, or of sovereignty, in the sense of suppression or subordination of all government organs to the sovereign state, or of nationality, by creating a semi-religious solidarity in support of that sovereignty. It may be expressed economically by imposing ideas of individualistic capitalism, competition and control on communities enjoying more elaborate and equitable, but less productive and progressive, collectivistic or communal civilisations; or industrially by substituting the factory and the foundry for the hand loom and home craft."

Subjective versus objective criteria

Should we satisfy ourselves with just noting that "European" is whatever one says one is? If we reason along this line, National Socialists and Arab Socialists would be "socialists," and National Democrats (that is, neo-Nazis of the 1960s) and representatives of the former "People's Democracies" would be democrats. If political science is about the creation of political clarity rather than confusion, a purely subjective approach seems inappropriate. For reasons of expediency, I would suggest that we opt for something like a minimalist objective approach. For a person to be "European" she or he would at least have to:

- be a citizen of a state located, by stipulation, within a geographical entity called Europe;
- speak a language which is officially accepted as one of the official languages of that state;
- share a historical destiny with other people within that state who speak the aforementioned language;
- share a cultural pattern with other such people, where the cultural pattern is seen as consisting of similar cognitive, evaluative and emotional elements.

Citizenship is a legal criterion. An Australian citizen would not qualify even if he had lived for a long time in a European state; nor would aspiring immigrants or refugees. Language is somewhat weaker as a criterion variable, as I have already mentioned. Shared history is also a weak criterion: what about people living in territories that have historically been contested, such as South Tyrol, Alsace-Lorraine, Slesvig-Holstein, parts of the former Habsburg empire, or the former USSR? What about the Basque separatists and Catalonian nationalists, not to forget the Balkan states? With respect to a notion of European identity, as opposed to the national identities of Europe's constituent states, peripheral territories will constitute problems, since Europe is a peninsula rather than a continent.
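To make the conjunctive structure of this minimalist test explicit, here is a small schematic sketch of my own, not part of the original text; the state set, field names and example values are purely illustrative, and the text itself stresses that the last three criteria are weak in practice:

    # A schematic rendering of the minimalist objective approach: a person
    # counts as "European" only if all four criteria hold at once.
    from dataclasses import dataclass

    # Stipulated, not derived -- exactly the boundary problem discussed above.
    EUROPEAN_STATES = {"Denmark", "Portugal", "Italy"}
    OFFICIAL_LANGUAGES = {
        "Denmark": {"Danish"},
        "Portugal": {"Portuguese"},
        "Italy": {"Italian"},
    }

    @dataclass
    class Person:
        citizenship: str                # legal criterion
        language: str                   # an official language of that state
        shares_state_history: bool      # weak: contested territories
        shares_cultural_pattern: bool   # cognitive, evaluative, emotional

    def is_european(p: Person) -> bool:
        # All four criteria are treated as jointly necessary.
        return (p.citizenship in EUROPEAN_STATES
                and p.language in OFFICIAL_LANGUAGES.get(p.citizenship, set())
                and p.shares_state_history
                and p.shares_cultural_pattern)

    # Citizenship is decisive: a long-term Australian resident fails the test.
    print(is_european(Person("Denmark", "Danish", True, True)))     # True
    print(is_european(Person("Australia", "English", True, True)))  # False

The sketch also shows why the approach is only "minimalist": everything contentious (which states, which languages, what counts as shared history or culture) is pushed into the stipulated inputs rather than resolved by the rule itself.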
Because of this peninsular character, we have had problematic notions such as the old "cordon sanitaire", invented between the two World Wars to define a buffer zone between the Soviet "dictatorship of the proletariat" and the rest of Europe, and the "Partnership for Peace" within the new world security order. Shared culture also seems insufficient when we wish to draw a distinction between European and non-European identities; besides, cultural criteria tend to overlap with the other criteria, as I have already mentioned. Since culture can be based on any of the three previously mentioned elements of a political perspective (identifications, demands and expectations), we run the risk of exposing ourselves to definitional circularity if we use it as an exclusive criterion.

Three kinds of motives

Some people tend to perceive themselves ("to identify") on the basis of what they think they are and have been, and draw their political conclusions on this basis: "I am a Danish farmer, or a Danish farmer's son, so I must vote for the agrarian party." They are characterised by their "because-of" motives. Other people tend to conceive of themselves in terms of what they want: "In order to promote a free society, I will vote for the liberal party." These people are characterised by their "in-order-to" motives. Still others perceive themselves on the basis of what they expect: "Activism is required if I wish to gain what I want, or to preserve what must be preserved, in order to live a good life"; or "Fatalism or free-riding will serve me better than activism." This third group can be characterised by their "optional-choice" motives. The first requirement for a political identification to occur is the recognition of a "self" distinct from others, i.e. "them". This is "identification" proper. What is distinctive about being European today, if we compare it with being, say, Australian, Canadian or Mexican? What are the significant characteristics of being European today in comparison with being, say, European before and immediately after the Second World War? The accumulated efforts of Schuman, Adenauer, de Gaulle, Monnet and Delors have all made a difference, but will it continue? In the second place, there must be a recognition that this "self," this "identification," stands in opposition to "them." This is regrettable for those who advocate world federalism and continued responsibility toward the Third World. For an identity to thrive there must be a challenge, a recognised competitive edge or conflict of interests. Political self-recognition and the recognition of opposition between the "self" and "others" tend to reinforce each other, as in Marxist theory, which claims that the class in itself (Klasse an sich) becomes more distinct as it fights for its interests against other classes, so as to emerge as a class for itself (Klasse für sich). As the social psychologists Hans Gerth and C. Wright Mills say in Character and Social Structure (1979, p. 288), "It is in controversies that symbol systems are tightened up". Although we may recognise a competitive edge and a conflict of interest with "non-Europeans" with respect to, say, economic issues, Europe is still integrated in a wider global community through GATT, the United Nations, NATO, etc. So despite attempts by the European Union to create a separate identity for Europeans, not unlike the Marxist notion of a "Klasse für sich," there are other centripetal and centrifugal forces at work creating both wider and narrower political identities.
The third step in the establishment of a separate political identity involves a cognitive simplification of the world, whereby most events are interpreted in dual categories such as "European" versus "non-European." The cognitive simplification process has two explanations, each of which is equally valid. Man faces great and complex problems but has limited capabilities to process information. In order to focus attention and regain perceptual control, some aspects have to be disregarded, otherwise chaos follows. Politically this is also necessary, because the audience of the politically active must be reached with simplified images that everyone can grasp. When it comes to the identification of Europeans, such a simplified "black-and-white" perspective is probably (and hopefully) not an enduring characteristic of the electorates of Europe. Black-and-white thinking and stereotyping tendencies seem to have more in common with the kind of totalitarianism propagated within the ranks of the German Republikaner, the French Front National, the Vlaams Blok in Belgium and a few more marginal groups, perhaps inadequately described as "totalitarian", such as the Danish Fremskridtspartiet and the Ulster nationalists. Not even the neo-fascist Italian MSI (now calling itself the National Alliance) and its sub-organisations can be accused of such xenophobia and single-mindedness as goes into simple cognitive dualisms. Lowell Dittmer describes the process of identification when he says (1977, p. 573) that "the process of political identification involves generalisation from objective perception to subjective wish-fulfilment...". However, Wolton (1993, p. 82) says that it is possible and even desirable to accept the old distinction of out-groups versus in-groups, but that it must be given a new content: "Europe thus finds itself confronted today with the same challenge: to recover a counter-identity figure, or to invent a new mode of structuring identity." This new figure of counter-identification, according to the French intellectuals, should be anti-democratic political tendencies and sentiments. The fourth and final requirement concerns expected and desired goals. Such goals can be elaborated as utopian systems or models, like the federalist and confederalist conceptions of a new European political, economic or security order, or as partial working solutions to pragmatically felt needs, such as those postulated by neofunctionalists.
There are at least six, more or less overlapping, contradictory and/or mutually supportive models discernible in the current debate on the integration of Europe and the development of a European political identity:

- The great Europe model: a confederal model, with an emphasis on external relations;
- The united nations of Europe: a federal model, with an emphasis on internal relations;
- The community model: a model for taking stock of what has already been achieved as a result of so-called neo-functionalist initiatives;
- The Europe of the nations (de Gaulle): a model which focuses on definitions of what should be included and excluded, and which would not necessarily include all European states in their geographical extensions;
- The minimal Europe: a liberal model in which market forces are given priority, but in which political and monetary issues are played down;
- The Europe of the "espace publique": a democratic model for the Europe to be shaped, which ignores the traditional cultural cleavages and focuses on the democratic versus totalitarian modes of identity.

Dominique Wolton says that these models have the quality of "ideal types" about them, but that (p. 218): "In fact, Europe is for the moment, and doubtless for a long time to come, neither a Europe of the regions nor a Europe of the nations, but a mosaic of models and of governmental responsibilities: supranational, national, regional, local, municipal, in which sovereignty is shared between the different levels of government." This is a reasonably pragmatic conclusion, since it allows the theoretical debate about European political identity to continue, and this debate is in itself a major source of political identification.

Conclusion and some practical proposals

It makes a difference whether we speak about plural identities or about a plurality of choices when we look at the fears and hopes surrounding the new Europe to be built. Plural identities are not necessarily "good" from the point of view of psychology, since they may cause distress, paralysis and confusion. The French intellectuals seem to believe that, when using different criteria as identity objects, one should not focus exclusively on geographical units, since the nation state is unlikely to perish anyway. When they advocate multiple perspectives, they say that political criteria must be used, and in that way the debate is being transformed into a debate about the future of European democracy, a debate with firm roots in European federalism. Since the establishment of the European Coal and Steel Community and the other European Union "pillars" there has been a change in the extent to which people regard themselves as European. This can be seen in the Eurobarometer surveys, which show that the sense of being European is greater among citizens of Member States that have been members of the EEC from the beginning than among the "newcomers". But even so, this may be misleading, because such "identification" may be based on parochial expectations of economic and other gains for the national unit to which one belongs, as for example in the case of Belgium, where European integration is demanded, but on the understanding that European politicians will further Belgian interests in the first place, rather than common European interests. What, then, can be done to further the idea of a common European identity tomorrow, if the pace up till now has been slow and uncertain? The answer to this question will greatly affect the future of the European Union.
Since it is impossible to mention all the projects that might contribute to a greater inner strength of the European project, I will confine my attention to some rather basic ideas which are within the scope of practical realisation. It is now more than half a century since the end of the Second World War, and we have seen the downfall of totalitarian Communism. But we still have traces of totalitarianism among us everywhere, in the form of racism, bureaucratic arrogance, and leftover sentiments of Communism, Fascism and even National Socialism in Europe. We have concerns about sustainable environmental development and about corruption among politicians, irresponsible bankers and remote representatives in the Europe to which we belong. These are just a few of the issues to which many young people pay attention, though far from all of them do. If we can support those young people who feel concerned, and give them reasons to be grateful for what the European Union does to combat totalitarianism, racism and economic fraud, we may win over the next generation for the European project and make them feel more European than the older generations have felt. As the President of the European Commission, Jacques Santer, pointed out in his speech at a previous carrefour arranged by the Cellule de Prospective at the University of Lund in 1995, the great change in attitudes towards Europe will come with the next generations, those who know foreign languages and those who have lived abroad. This leads me to the practical conclusion that all of us who wish to strengthen European identity should promote travelling in all its forms all over Europe, especially by subsidising continued Inter-rail travelling among the young during the summer holidays and whenever else it is possible. Since the birth of the European Union, through the implementation of the Single European Act in the early 1990s, many airlines have shown their goodwill and launched cheap travel programmes for both adults and young people. But more can be done in this area. For example, arrangements can be made with the youth hostel organisations in Europe so that travel and accommodation are not confined to those who are well off, have employment, or have received grants from various study programmes. Efforts can be made to maintain and enlarge the student and teacher exchange programmes that already exist and work well, and an effort can be made to establish summer camps where young people from all over Europe can come together for three to four weeks to learn more and discuss problems of concern to them, including their immediate concerns about youth unemployment. If possible, they could even work directly on projects of common concern to us all, such as the rebuilding of roads and villages in the former Yugoslavia, when it is safe to do so again. The positive role of such initiatives in strengthening a European identity will depend upon the role played by the European Union. This role need not be too directly linked to our European institutions as they are today; the most important thing is not to pour a lot of money into such projects, but to let the beneficiaries know where the support comes from. I envisage that the European Union could play the role of an empowering agent for institutions which already exist. We could awaken an interest in a European youth hostel movement, in a European Inter-rail travel system, and in European summer camps for young people.
Such projects could send a positive signal to all European adolescents, employed or unemployed, students, trainees and working-class youngsters, a signal which says: "If you wish to know more about life in other European countries and if you wish to participate in furthering the goals of the new Europe, we are there to support you." Through such measures we can not only strengthen and build a future European identity, we can also make sure that the achievements of the past are safeguarded.

Reference literature

Bryder, Tom (1989). 'Political Culture, Language and Change'. Paper presented at the ECPR, Paris.
Christiansen, Bjørn (1959). Attitudes Toward Foreign Affairs as a Function of Personality. Oslo: Oslo University Press.
Deutsch, Karl W. (1953). Nationalism and Social Communication. An Inquiry Into the Foundations of Nationality. London: Chapman & Hall.
Dittmer, Lowell (1977). 'Political Culture and Political Symbolism: Toward a Theoretical Synthesis'. World Politics, Vol. XXIX, No. 4, July 1977, pp. 552-583.
Eckstein, Harry (1988). 'A Culturalist Theory of Political Change'. American Political Science Review, 82, 3, pp. 789-804.
Guéhenno, Jean-Marie (1993). La Fin de la Démocratie. Paris: Flammarion.
Inglehart, Ronald (1990). Culture Shift in Advanced Industrial Society. Princeton: Princeton University Press.
Keohane, Robert O. & Stanley Hoffmann (eds.) (1991). The New European Community. Decisionmaking and Institutional Change. Boulder: Westview Press.
Laqueur, Walter (1970). Europe Since Hitler. Harmondsworth: Pelican Books.
Lasswell, H. D. (1965). World Politics and Personal Insecurity. New York: The Free Press (orig. 1935).
Mackenzie, W. J. M. (1978). Political Identity. Harmondsworth: Penguin Books.
Minc, Alain (1993). Le Nouveau Moyen Age. Paris: Gallimard.
Morin, Edgar (1990). Penser l'Europe. Paris: Gallimard.
Nettl, J. P. (1967). Political Mobilization. London: Faber and Faber.
Rycroft, Charles (1972). A Critical Dictionary of Psychoanalysis. Harmondsworth: Penguin Books.
Simon, H. A. (1985). 'Human Nature in Politics: The Dialogue of Psychology with Political Science'. American Political Science Review, 79, pp. 293-304.
Weiler, J. H. H. (1983). 'The Genscher-Colombo Draft European Act: The Politics of Indecision'. Journal of European Integration, 6, pp. 129-154.
Wildavsky, Aaron (1987). 'Choosing Preferences by Constructing Institutions: A Cultural Theory of Preferences'. American Political Science Review, 81, 1, pp. 3-21.
Wolton, Dominique (1993). La Dernière Utopie. Naissance de l'Europe démocratique. Paris: Flammarion.
Young, George (1937). 'Europeanization'. In Encyclopedia of the Social Sciences, Vol. 5. New York: The Macmillan Company, pp. 623-635.

What is it? Why do we need it? Where do we find it?
Edy Korthals Altes

Identity has to do with the individuality of a person or, in this case, of the European Union. What are its specific characteristics? In what ways does the European Union distinguish itself from other international or national agents? Identity in the sense of 'being yourself' is closely connected with the relation to others, 'seeing the other'. In this sense, Cardinal Lustiger could state that 'solidarity with those who die for lack of bread is an essential condition for Europe to stay alive'.31 The classic response to the question of European identity is: unity in diversity. Ethnic background, culture, religion and history are certainly important factors for the European identity.
Decisive at this stage of the European integration process, however, is the question: what do we want to do together? The answer depends on the perception of the need for a common response to the challenges of today's world. This is not an academic question but a matter of survival! Identity is subject to change. It is not something 'static', given for all time. It is something that grows or withers away. Just as with individuals, there is a process of development (circumstances, events, inner growth). The present identity of the European Union is not robust but rather confusing. It resembles a Picasso portrait: conflicting lines, different levels, not the unity of a human face. Or, to put it in diplomatic language: "the European Union is going through an identity crisis". It is still uncertain about its place in the surrounding world.

Internal and external aspects of European identity

- Structure (the decision-making process: efficient/democratic/transparent);
- Policies (agricultural, regional, social; just/unjust, greater inequality, exclusion);
- Economy: what are its objectives? To serve man and society, to enable all people to live a decent existence? Or is it just the other way round: man and society serving economics? Accepting the tyranny of the iron laws of economics as an absolute, a given reality, something that cannot be changed. The economy as a goal in itself (growth, maximisation of profits and power, etc.);
- Environment: something to respect and to manage with great care and responsibility, or something to exploit whenever we feel like it?

If we consider the economic aspects, the European Union looks quite impressive: a large internal market, a major trading partner on a world scale, a strong industrial base, great financial power, among the highest GNP per capita, good infrastructure, the seat of multinationals, an impressive number of cars and TVs, etc. A realistic vision is, however, obscured by the highly unreliable way of assessing what is really going on (inadequate measuring instruments, a poor definition of GNP). We are counting ourselves rich at the expense of well-being. The European Union is one of the greatest polluters in the world and among the greatest consumers of energy and raw materials. There are about 20 million unemployed, and many poor. There is an increasing commercialisation of society, a progressive deterioration of social and medical care, and a degradation of education and universities (result-oriented, relevant for the economy but to the detriment of education). The opinion Europeans have of their own identity does not necessarily correspond with the perception of non-Europeans. While we may be indulging in the 'civilising role' of Europe in a largely 'underdeveloped world', other nations, e.g. in the south of the Sahel or in the Pacific, may be inclined to curse the European Union (or some of its member states) for its selfishness (the Common Agricultural Policy) or arrogance (nuclear tests). For the perception of our identity, deeds (actions and policies) are more relevant than words (declarations). African cattle-growers and local food producers suffer more from the negative effects of the dumping of the European Union's agricultural surpluses than they benefit from fine words about the vocation of the European Union in this world. The same applies to import restrictions, policies on debt, etc.

31 Jean-Marie Lustiger, Nous avons rendez-vous avec l'Europe. Paris: Mame, 1991.
And what about the striking contrast between the commitments made in Rio and Copenhagen and the slowness of the European Union's action? It should be clear by now that a drastic revision of extravagant production and consumption levels in the highly industrialised nations is a prerequisite for a sustainable world society. There will be no hope of effective control of the environmental crisis without far-reaching adjustments in the modern world. The position of the European Union is of particular relevance here.

A common foreign and security policy

Picasso's portraits provide a good illustration of the present chaotic state of affairs. The unity of a well-integrated external policy is still a long way off. Several Commissioners are responsible for different aspects. Efficient policy-making is not possible with the present set-up, under which the hands of External Affairs are strictly tied by the Council members! A common security policy is not just around the corner! And what will a common defence policy ultimately look like? Will the European Union adopt an offensive stance with a nuclear component and a large military establishment, or will it be content with a police function, preferably in the context of the UN? In the latter case, much more emphasis than is to be found in the preparatory notes for the Intergovernmental Conference should be given to conflict prevention, including in the economic sphere. This would also lead to a strict limitation of the production of and trade in arms, and ensure careful scrutiny of R&D activities in the military industry.

In search of the 'heart and soul' of Europe

Reflecting on the structure of the European Union, its capabilities and policies, is certainly very important as a preparation for practical propositions on how to express the European identity. But at this critical moment of deep existential crisis, both in our nations and in the world, it seems to me that serious attention should also be paid to the deepest motivation of our actions. In other words, we should tackle the fundamental spiritual crisis so manifest in modern society. Jacques Delors' pertinent question as to the 'heart and soul' of Europe has to be answered. Indeed, what are we doing with this huge Brussels machinery wielding so much power? What are we heading for? More power and material well-being, especially for the stronger elements? Or do we have a vision of a sustainable and just society? A society in which all people, whatever their background (religious, cultural, national), have the right and the possibility to lead a decent life? A society in which people have respect for life in all its forms. A European Union with a balanced relation between the individual and the community, sustained by citizens who realise that each individual has a unique value that may never be reduced to an object for exploitation. Man, freed from the yoke of a one-sided fixation on economics, rediscovering that he is infinitely more than the homo economicus to which he is now being reduced by the apostles of greed in a materialistic culture. Man, part of a greater whole, knowing that he does not live by bread alone! Man, with a destiny, a meaning in life, sharing life and goods in a responsible way with other human beings in a global world. Of crucial importance for identity is the spirit providing the deepest motivation for action. One rightly stresses the importance of Christianity for Europe. But what is this spirit now?
If we look at today's reality, it will be difficult to maintain that we are living in a Christian Europe. Secularisation, materialism, hedonism and individualism dominate modern culture. For many people, the sense of the transcendent has evaporated. The 'horizontal' approach, with its emphasis on a so-called autonomous "I", has taken its place. This has far-reaching consequences for our relations with mankind (ourselves, fellow-beings, the other, and future generations), towards things and towards nature. In all three fields, man has lost his orientation, his bearings. Václav Havel has made some relevant observations on this loss of the sense of the transcendent, on the many problems of today, and on the incapacity of politicians to come up with solutions. Spiritually, the European Union is in a rather desolate state. Impressive technological and economic achievements abound, but on a very meagre spiritual basis. A crisis of meaning is widespread: psychological problems, crime, drug abuse, a lack of respect for life with an annual death toll of 50,000 in road accidents, although this number could be drastically reduced through the implementation of certain measures. Television programmes of deplorable quality, etc.

We need a European identity

Only an effectively structured European Union (internally and externally) will be a relevant factor on the international scene, where the final real decisions directly affecting the lives of all Europeans will be taken.

- No European state is any longer in a position to meet the challenges of the modern world on its own (the ecological crisis, unemployment, poverty, the rise in world population, armed conflicts, the spectacular increase in the destructive power of modern arms).
- The dynamics of power relationships affect not only countries like the USA, Japan, Russia, China, East-Russia, etc., but also the major international players in finance and business (nations as well as multinational companies).
- The serious threat to a 'social market economy' posed by overwhelming global forces calls for a common answer.32

We cannot go on with our present rate of production, consumption and destruction of the environment. If we want a sustainable and just society, we must make progress in the direction of 'enough is enough'. We need to accept an upper limit and pay much more attention to the unsustainability of the present economy. We know that our planet cannot cope with a similar rate of economic expansion on the part of all other nations. We know that four fifths of mankind is in urgent need of development in order to enjoy a decent standard of living and to escape from hunger and starvation. We must therefore strive for a reduction of our impact on the environment if we are serious about a basic sense of humanity. This cannot be achieved by technological means and fiscal and other measures alone. A fundamental change in mentality, in basic orientation, is needed. The obvious response to the global challenge would be a worldwide decision to set course towards a sustainable future, heading off collective disaster by managing the planet's scarce resources and environment in a responsible way. This will however take time, too much time.

32 Michel Albert, Capitalisme contre capitalisme. Paris: Seuil, 1991.
But why shouldn't the European Union, with its considerable economic leverage, take the initiative with a step-by-step approach, making it clear to the world that the one-sided emphasis on 'unlimited material growth' at the expense of real well-being is a fatal error? Recognising that other areas may be in need of further economic development, but that we have reached the stage of 'enough is enough'. That we are no longer victims of the false ideology that man has endless, unlimited material needs which have to be satisfied. After all, it is from Europe that the industrial revolution and the expansion of our economic system started. A convincing European Union signal, illustrating a decisive turn in our economic approach, might trigger similar reactions in the US and Japan. Politically speaking, this deliberate change of course will not be easy. It could be greatly furthered if the European Commission entered into a creative relationship with those NGOs that promote a similar course of action. There may be a greater concern among many people about the loss of 'quality of life' than many politicians think. One of the challenges is, as we have seen before, the rediscovery of the great spiritual resources that have been at the origin of European civilisation. There will be no renewal of European society without a fundamental reappraisal of man's place in the Universe, of the relation with the Ultimate. As we live in a multireligious Europe, this is a shared responsibility not only for Christianity but also for the other religions. In the present situation of a morally disoriented Europe, a simple appeal to 'norms and values' will not be enough. Much more is needed. Values without deep spiritual roots will not stand up in the present harsh reality. Take, for example, the threat to the social model: it would be an illusion to think that it will be possible to maintain the 'social market', now under great pressure, without a strong spiritual basis. Europe urgently needs a radical change from its one-sided materialistic, horizontal approach to an attitude towards life which opens up towards transcendence. Christians throughout the ages have discovered in the cross of Jesus Christ the ultimate symbol, and reality, of this meeting of the horizontal and vertical lines. Jews and Muslims have other ways of expressing the reality of the transcendental experience.

Where to find it?

The great temptation is to look for 'identity' in the structure of the European Union, its institutions, regulations, acts and policies, and maybe even among its declarations. Ultimately, the European Union's identity depends on the political will of the member states and the way the European Union uses its competencies. But the political action of states is highly dependent on public support. Whether there will be sufficient understanding for necessary 'painful policies' depends on the motivation of citizens. It is thus a question of the spirit. What moves people nowadays? The spiritual desert in which many people live is well illustrated by the statement of a Dutch cabinet minister (for the environment) that 'the car cannot be touched, because it is an essential element of the identity of a person'! I doubt whether the European Union could ever develop its identity on the basis of this narrow materialistic concept of human nature.
The European Union's identity will not be found in wonderful words about our common history and common sources of inspiration, nor in digging up long-forgotten treasures of the past, but in acting together, on the basis of adequate policies that meet the present challenges. Just three examples of missed opportunities, all in areas on our doorstep:

1. The end of the Cold War and the breakdown of the communist system provided a unique occasion for a visionary approach to the new reality: a large-scale, well-integrated economic co-operation programme addressing the actual needs.
2. The handling of the crisis in ex-Yugoslavia.
3. The creation of an all-European security system in the spirit of the Paris Charter.

On these historic occasions, action would have given a greater impulse to the development of a European Union identity than a thousand seminars and numerous solemn declarations by politicians. Unless the European Union develops an adequate structure enabling it to deal effectively with the challenges of the modern world, we will not discover our common identity. It is up to the member states to take a hard look at reality and decide to break the impasse of the present "Impossible Status Quo"!33

Some practical and some more fundamental suggestions

- Continue and expand the excellent initiative of the Carrefours d'Europe, if necessary even under more modest circumstances!
- Bring spiritual and cultural leaders together with politicians, managers, journalists, etc. Strive for an equilibrium between the bureaucrats of the institutions and 'independent' Europeans.
- Consider the possibility of a substantial increase in inter-European exchange programmes for students and scholars.
- Bring forcefully to the attention of Council members and public opinion that the European Union has now really arrived at a crucial point which will be decisive for its future: whether it will develop an identity or become a non-entity, making clear that the latter option will inevitably lead the proud member states down the same road towards oblivion.
- Deepening of the European Union should have absolute priority over enlargement. The danger of further diluting the identity is great.
- Translate the recognition that the spiritual factor is crucial for the European identity into active support of all those religious and cultural forces that can contribute to a spiritual revival in Europe. A new spirituality will liberate us from the dominance of economics, breaking the spell of the golden calf! This would pave the way for a humane and just society, offering the possibility of leading a full human life in which values such as love, beauty, truth and goodness, together with human rights, solidarity and justice, are guaranteed for us and for coming generations!

33 Club de Florence, Europe: L'impossible Status Quo. Paris: Editions Stock, 1996.

European identity and political experience
Mario Soares

Let me make two points clear to start with. Firstly, Europe is not just the European Union; secondly, I have no doubt that a European identity does exist. When my country embarked on the process of joining the European Community, it did so for very specific reasons, namely to consolidate our newfound freedoms. Portugal, like Spain, had just emerged from nearly half a century of dictatorship, and it was essential to consolidate our democracy to prevent any resurgence of military power. We could only counter this threat by turning to Europe. This is not to say that we did not consider ourselves a European country before.
Let me remind you that the Portuguese were the first Europeans to export the culture of our continent to the Indies, Japan and America. We were also the first to bring back to Europe the riches of the civilisations and cultures we discovered there, which were still completely unknown here. We have always regarded ourselves as Europeans, even if our country is on the periphery of the continent and faces the Atlantic and Africa. I have mentioned the importance of sporting European colours to consolidate democratic institutions that were still in their infancy. But there was another reason for Portuguese membership: we were very late in embarking on decolonisation. Having been the first colonial empire in the world, Portugal was also the last. But once our colonial empire had finally disappeared, fifteen years later than those of our neighbours and in difficult circumstances, and we found ourselves face to face with new sovereign states such as Cape Verde, Guinea, Angola and Mozambique, we felt that integration in the European Community was the natural counterweight to this change. We joined the European Community at the same time as Spain, in June 1985. At that point, it was not yet the European Union. Since then, we have seen the collapse of the Communist world and many profound changes. The European Community had two objectives: the most obvious, founded on Franco-German friendship, was to preserve peace on the continent. The second was to keep up with the United States and the Soviet bloc. With the end of bi-polarism, the Community found itself plunged into a completely new situation. This was when Europe rediscovered its own values and escaped from the geographical and historical confines imposed by the Cold War. We realised that Europe was much larger and started to ask ourselves what we should do with the "rest" of the continent. We realised that we had a duty to reintegrate this "other Europe" into our Community, now a Union. But of course it is no easy matter: what will become of a Europe that was difficult enough to run with just 10 or 12 or 15 members when it expands to include 21 or 22 members in a few years' time? This is a problem for the European institutions, but it also touches on the very future of the concept of the European Union. Europe cannot just be the European Union within its frontiers as they stand today. Hungary, Poland and Bulgaria have the right to join our Community: their history and their contribution to the European identity fully entitle them to membership. They have contributed as much to the European ideal as we have. From these countries, I hear the same arguments that Spain and Portugal used for joining the European Community: we have freed ourselves from dictatorship, we have become democratic countries through our own efforts, without Europe's help. We also had the right to democracy at the end of the Second World War, because Great Britain and France had defeated the dictatorships and Germany counted for nothing in the immediate post-war period. Who allowed the dictatorships to re-emerge in our countries, if not "democratic Europe"? Though it pains me as a socialist, I have to say that if there was one champion of the rehabilitation of the dictatorships at that time, it was the British Foreign Secretary.
Driven by fear of Soviet pressure and fear of Communism in Western Europe - in both France and Italy - the democratic states of Europe took the view that it was more sensible and served their own interests better to overlook the fact that there were two dictatorships on their doorsteps. From 1945 to 1974, we continued to live under a dictatorship because of this sort of indulgence, because of the treachery of the democracies. They did everything to perpetuate the dictatorship in our country. It was the easier option : it was either that or risk letting Communism in through Spain or Portugal or somewhere else. This was the main consideration. Once we had rid ourselves of these dictatorships, our first concern was to assert that we were democracies and that you bore a share of the responsibility for our period of fascist or authoritarian rule. This gave us every right to sit at the same table as you, particularly as our contribution to Europe has been every bit as important as yours in the past. This is what we said to the European states. It is what our friends in Central Europe are saying to us today and it remains equally valid. They too can claim the right to sit at the Europeans' table. Economic reasons cannot stand in the way of this right. This is why it is our duty to find ways of dealing with the current situation and welcoming these states into the Union. The question of greater Europe is not confined to Central Europe alone : Europe is also linked to the Mediterranean Sea and the Mediterranean basin. It is linked to what happens in Eastern Europe. Where does Europe end ? On the Russian steppes ? Is Turkey part of Europe ? I was in Turkey quite recently and found that those who want to modernise the country proclaim their Europeanness. And rightly so. They have reasons for doing so. Is the European Union to be a club reserved exclusively for Christian countries - Protestant, Catholic and Orthodox ? Are countries with a predominantly Muslim population not allowed to join ? Is there some sort of religious bar to membership ? I do not think so. But the problem of Turkey is a serious one for Europe. How are we supposed to deal with an unprecedented situation like this on the institutional level ? On this point, my mind is made up : I agree with Chancellor Kohl that the construction of Europe is a vital matter for the next century, a matter of war and peace. Even if we had no problems of identity, if we fail to move towards a stronger European Union, if we fail to move rapidly along that road, Europe will find itself without a voice and will lose the importance it once had in the world. It is not just a matter of being heard throughout the world, but of having the strength to impose certain models which we believe encapsulate so many of the ideas which this ancient continent has produced over the years. The European model reflects serious humanist concerns based on fundamental human values : values of liberty and reason, solidarity and social justice. They are values without which the human race cannot successfully enter the XXIst century. In other words, Europe's interests are not limited to Europe alone. It is not a matter of simply asserting Europe's position in the world, but of going further and making a contribution to the world as a whole. If we fail to make this contribution, something will be missing, and we will fail to explore the paths that are most rational and most conducive to human happiness. This is how I see the situation and why I believe in Europe.
I may sometimes criticise Europe with other pro-Europeans, but I do so because of my love for Europe. I do so because I am not afraid of the march of European progress, quite the contrary. I do not think there can be a solution which would unite 20 or 30 European countries but leave these essential values as the individual concern of each State. They must be pooled and managed collectively - and this is true of security policy and foreign policy as much as anything else. But it cannot be done without supranational European institutions. It cannot be done unless we move towards a united Europe, towards a measure of European federalism. I know that this word makes some people uneasy. But I have no qualms about using it. Like the founding fathers of Europe, I favour a structure which does not have to be identical to the one that already exists. It should be a new and original design. Others have already said as much in this seminar. It should evolve towards a United States of Europe, along more or less federal lines, perhaps with its own original touches, but basically federal, with a certain common direction. This requires the sacrifice of certain elements of states' traditional sovereignty, the pooling of national sovereignties. Without this, we cannot build Europe. There may come a moment when we have to say to those Member States that do not want to go all the way that they have no right to stop the others from going further. This path offers the best solution available in the short term. When I speak of Europe in such glowing terms, it should be clear that I do not mean Europe to be simply Europe of the free market, the single market, economic and monetary union and the single currency. Of course I support this, but only if we build a genuinely political Europe as well. Because if it is to be only an economic and monetary Europe I will withdraw my support. That is not the sort of Europe I am interested in. I am in favour of an economic and monetary Europe if it goes hand in hand with a political Europe and coordinated foreign policy : a Europe which defines its own security collectively, a Europe that is also a social union, a Europe of the people, a Europe with popular participation. I want to talk about the participation of Europe's regions, which is as important as that of the states. I want to talk about the participation of the cities, the people, the NGOs, the general public. This pluralism, this diversity, is the key to achieving a multi-faceted Europe capable of fulfilling its role in the world. This role is essential for maintaining equilibrium in the world and creating a new international order, without which disaster beckons. We are concerned about human destiny, about the environment, drug abuse, unemployment, the problems that preoccupy the younger generation. These are very serious problems which also affect the United States and, even more acutely, Japan, to mention just two important countries. But when we look at the countries of Southeast Asia, it is clear that their prosperity is based quite simply on slave labour. I was in China a few months ago, where I had the opportunity to meet various leading Chinese figures. My impression is that China is heading for an explosion that will be completely out of control, an explosion even more dramatic than the one that tore apart the Soviet Union, because things cannot go on as they are.
You cannot maintain such a level of capitalist exploitation ; you cannot have a city like Shanghai with a very high level of development and staggering wealth and at the same time have public officials earning a pittance. A street trader in Europe would not accept such a meagre salary. Such inequality can only be sustained by high levels of corruption or crazy distortions which I am convinced will lead to social upheaval. As I see it, the world is completely deregulated at the moment. We are all well aware of this. The United States cannot run the world on its own, even if it wants to. This is why it is important that Europe carries out its allotted task. It is a major challenge for Europe and for us Europeans. We must be ready to respond boldly. Unfortunately, we have not seen any great leaders stand up to defend this sort of point of view loudly and clearly. For electoral reasons, political leaders find themselves conditioned, tied by the rules of normal democracy, the rules of parliamentary democracy. They want to please and respond to the immediate present, with the result that they cannot rise to the responses they are called on to make. They cannot provide answers to a much more serious problem which touches on the deepest aspirations of the individuals and societies of today. This is why we sometimes find ourselves deadlocked. We can see that concern is becoming widespread in Europe : there is disenchantment about Europe in the countries which joined the European Union most recently. It is clearest in Sweden, but is not limited to Sweden. The same disappointment is to be found in Germany, France, Spain and Portugal, not to mention Great Britain. What is the cause ? There is a mistaken idea that the European Union is a bureaucracy based in Brussels which concerns itself with the details and tries to regulate the life of the ordinary citizen instead of allowing him a voice and the chance to do something for himself. I believe this to be completely false, but this is how things are perceived. Matters are made worse by the fact that the situation for young people is very difficult : unemployment, delinquency, drugs, social exclusion and AIDS are all problems which particularly affect young people. The solutions proposed tend to have an economic or technocratic slant : they are not the answer to these human problems. This is what young people feel and this is why people are pessimistic and suspicious about Europe. Europe has to be relaunched. The European identity has been described as a changing concept and this is true. It has to incorporate the great values and aspirations of the different nations. If we could do that, if we had the courage to do it and to speak the truth when dealing with the big problems, if we were able to resolve these problems by taking steps towards closer European integration, Europe would begin to respond positively to the great challenges of the day. These great challenges may be stated in very simple terms : either we are able to understand and create a true political, economic, social and cultural union, which, for all its diversity and pluralism, remains a Union on a grand scale, or we are unable to do this and we take a step backwards into the outdated nationalism and disorder of the past. For me, this is one of the most worrying prospects, not only for Europe, but for humanity as a whole.

How to define the European identity today and in the future ?
Ingmar Karlsson

The European identity is often described in a somewhat high-flown manner as having its foundations in antiquity ; free thought, individualism, humanism and democracy had their cradle in Athens and Rome. On the other hand, neither Greek nor Roman civilisation can be described as European. Both were Mediterranean cultures with centres of influence in Asia Minor, Africa and the Middle East. When Alexander the Great set out to conquer the civilised world of his time, Egypt, Persia and India, he had no idea that he was acting on behalf of Europe. Christianity, with its roots in Judaism, was also a Mediterranean, non-European religion. Byzantium was a Christian power which marked the limit to Roman claims of sovereignty, as did a large part of post-Reformation Europe. The result of the schism between Rome and Byzantium was the development of another culture in Russia and south-eastern Europe. Following the Reformation, a large part of continental Europe was preoccupied for several centuries with religious wars and rivalry between Protestants and Catholics. More recently, historians have played down our antique heritage. European ideals are traced back to the Renaissance instead and the concept of the individual as the smallest and inviolable element of society. The Enlightenment and the French Revolution contributed to the demand for freedom, equality, fraternity, democracy, self-determination, equal opportunities for all, clearly defined government powers, separation between the powers of church and state, freedom of the press and human rights. The ideas that are triumphant in Europe today are those of market economy and democracy. By definition, this also includes the USA, Canada, New Zealand and Australia as European powers. However, Europe does not only represent modernity and tolerance but also religious persecution ; not only democracy but fascist dictatorship as well - Hitler was the first to use the idea of a European house - as well as the collectivist ideals of Communism, colonialism and racism disguised in scientific terms. In other words, European identity cannot be defined on grounds of cultural heritage and history, and even less can it be used as the basis for European domestic and foreign policies. The explanation is as simple as it is obvious. Economic and political integration between European nation-states has not yet progressed so far that it is possible to speak of coinciding interests. It is possible that they have diminished somewhat with the collapse of communism and the disappearance of a common threat. Instead, there is a growing need for national identity and sovereignty in proportion to the increased levelling of European politics and economy. The greater the sense that diversity is under threat and standardisation on the rise, the greater the antipathy to projects that promote integration. The European Community is already a reality as far as production and consumption are concerned, but there is popular opposition to a culturally standardised community. The more blurred and controversial the future of a common Europe appears to the common man, the more the nations will mobilise themselves against Europe. If it is not to become counterproductive, a balance must be struck between enthusiasm for the European project and awareness that European Union legitimacy will be in short supply in the foreseeable future. This view need not paralyse efforts towards integration, however.
The phrase "an ever closer union between the peoples of Europe" could instead be useful in its general vagueness. There may also be some validity for European integration in Edmund Burke's wise words that political order cannot be created at a drawing board but has to emerge gradually. This, in turn, means that politicians and bureaucrats must concentrate on immediately essential and clear issues and on measures the consequences of which can be judged by citizens themselves. Every new European competence must therefore be explained in concrete terms in order to achieve acceptance. Consequently, the issues should be carefully examined that require a European solution and which withstand centralised interference, particularly because an incorrect decision on, say, the agricultural policy, can have far-reaching consequences and undermine the credibility of Union projects. A stable foundation of legitimacy for the European Union will only be achieved when Europeans perceive a European political identity. This does not imply that they would no longer feel themselves to be Swedes, Finns, Frenchmen or Portuguese, but that the sense of a European common destiny was added to these identities. Even after four decades of European integration, this development is still in its infancy. Nation-states evolved after a long period, often filled with conflict. They are ideological constructions and a national identity is ultimately a political standpoint. A prerequisite for a strong national identity is that citizens have a sense of loyalty to the state because it redistributes social resources and provides education, infrastructure, a legal system etc. The same prerequisites hold true for the creators of Europe as well. As in the process that led to the creation of European nation-states, the European Union will also be an elite project for the foreseeable future and the European identity an elite phenomenon. To be sure, the technocrats and bureaucrats in Brussels are a new European elite but are they representatives of European culture or merely an international "civil service" who, with the passing of time, increasingly alienate themselves from the people whose interests it is meant to serve ? Is there not a danger that institutional loyalty will become stronger than "European awareness" which may spread among the elite of member nations ? The problem becomes more aggravated when these people arouse negative stereotype reactions among citizens. Eurocrats are not regarded as the first among Europeans, but as overpaid bureaucrats interfering in matters that do not concern them. The creation of national symbols and myths and the rewriting of history were also part of the process by which European nations were formed. First came the state, followed by the formation of a national community within the territorial framework by means of gradual integration and cultural standardisation. The architects of nations emerging in the XIXth century used such means as national conscription, compulsory education and the supra-regional spread of the growing mass media to create contact between the centre and periphery and seemingly natural boundaries on the basis of geography, language, ethnicity or religion. Above all, the arrival of national educational systems and mass media contributed to the sense of belonging to a national community, expanded cultural horizons and getting away from provincial narrow-mindedness. 
Efforts to create a European identity

Brussels appears to have had this in mind when taking the decision in 1984 that the EC would improve contact with its citizens and, so to speak, create a European identity, centrally and from above. At a summit meeting in Fontainebleau, the European Council found it "absolutely essential that the Community fulfil the expectations of the European people and take measures to strengthen and promote the identity and image of the Community vis-à-vis its citizens and the rest of the world". The Adonnino Committee was set up for this purpose, with the task of starting a campaign on the theme of "A people's Europe". This work would be based on a quotation from the preamble to the Rome Treaty on "an ever closer union among the peoples of Europe", and on the Tindemans Report of 1975, which recommended that Europe must be close to its citizens and that a European Union could only become reality if people supported the idea. An outcome of the work of this committee was the decision that the EC should have its own flag. When the flag was raised for the first time at Berlaymont on 29 May 1986, the EC hymn - the "Ode to Joy" from the fourth movement of Beethoven's ninth symphony - was played for the first time. Thus, by means of a flag and a European national hymn, the Union acquired the attributes of a nation-state. A European Day was also established. The choice fell on 9 May, the date on which Robert Schumann gave the speech in 1950 that resulted in the first community, the European Coal and Steel Community. Consequently, the Adonnino Committee appears to have assumed that a European identity could be created on the initiative of politicians and bureaucrats. In 1988, the European Council decided to introduce a European dimension into school subjects such as literature, history, civics, geography, languages and music. Legitimacy for future integration would be created by invoking a common history and cultural heritage. This has resulted in a book, "Europe - a history of its peoples", written by the French history professor Jean-Baptiste Duroselle, which, to quote the author, covers a period from 5.000 years ago to tomorrow's news. The European Union is thus attempting to create a European identity from above. A common European frame of reference is being created by means of a standardised set of symbols and myths. A European driving licence already exists, as does a European Union passport, although it took ten years to agree on its colour and appearance. The Maastricht Treaty introduced the new concept of a citizen of the Union, although his or her rights and obligations have still to be defined. These activities are incompatible with the often-recurring theme that European integration must be a natural process and not imposed from above. Every European people has its more or less genuine historical myths, experiences and view of history. There is no European equivalent to the Académie Française, the Bastille, the Escorial, La Scala, the Brandenburger Tor or the opening of Parliament at Westminster. There is no European Unknown Soldier. Jean Monnet rests at the Panthéon in Paris. The fame of Robert Schumann's resting place at Scy-Chazelles cannot compete with Colombey-les-Deux-Eglises, where General de Gaulle lies buried. For many, common history has been experienced against, not with, each other in the great European wars.
The main task of the "Europe-makers" cannot therefore be to provide Europeans with a common identity originating in antique or medieval times, but to develop political self-confidence and the ability to act in line with Europe's role in the XXIst century. This will not happen by reducing the European Union to a free trade zone in accordance with British ideas, or by imposing some kind of American-style United States of Europe on people against their will.

Basis for European patriotism and identity

Only long-term, patient growing together will provide the basis for a democratic Europe comprised of its citizens. For many decades, the EC was a practical community. We are only now en route towards a community of destiny and experience. If anything is to be learnt from European history, it is that Europe as an entity can only be completed in agreement with, and not against, the will of the nation-states and what they consider to be their legitimate interests. At present, regionalism and nationalism are undoubtedly stronger forces than paneuropeanism. Perhaps Europe needs some 'multi-national shocks' in the form of an aggressive Russia, a new Chernobyl catastrophe or a Gulf crisis to show our total dependency on the USA in conflicts that affect vital European interests. Other problems will also arise that call for joint action and which in due course will aid the establishment of an identity, such as for example :

- the necessity to use our common strength to meet the technological challenge from Japan and the USA and, in the not too distant future, the "new tigers" ;
- common action to overcome environmental problems, to cope with the pressure from immigration and to handle international organised crime.

A successful European policy in these and other areas could help in the development of a "constitutional European patriotism" in the same way that "loyalty to the Constitution" ("Verfassungspatriotismus") became a reality in the Federal Republic of Germany, replacing the nationalism that no German was able to feel after the terrors of the Hitler period. An absolute precondition for developing a common political culture and constitutional patriotism in the European Union is that its citizens are informed about and participate in the supra-national decision-making process. A European public opinion must emerge before there can be talk of a European citizenship. As stated above, the European identity has no historical reference. European trade unions do not exist at present, nor do other European interest groups nor, above all, trans-boundary European parties and a European general public. The Maastricht Treaty brought this deficiency into focus, negotiated as it was by experts in a European code incomprehensible to its citizens. As a result, the reputation of the European Union was further diminished. A prerequisite for a solid European identity is therefore the development of European parties, or at least a party network, and political debate on trans-boundary issues. When employer organisations and trade unions begin to meet at a European level to look after their members' common interests, we will have taken the first steps, because politics will have reached beyond the national level. The optimum we can achieve at the end of such a process would be a European "constitutional state" and a European Union citizenship that is felt to be genuine and not an artificial construction. The way is both difficult and long, however, and more likely to be curbed than speeded on by enlargement eastwards.
It has proved difficult enough to bridge the cultural and linguistic differences between Catholics and Protestants, Latins, Germans, Anglo-Saxons and Scandinavians in Europe. The task of integrating the Baltic, Slav and Orthodox Europeans will be infinitely more difficult. The larger and more heterogeneous the membership becomes, the greater the need to differentiate between the various member states, with a Europe moving at different speeds in which the political union, monetary union, common security and defence policy and internal market will not extend over the same geographical areas. A union of up to 30 members at varying stages of economic development can only function if it is organised along multiple tracks and at different levels. Efforts to create a Europe around the hard core of a monetary union with the Euro as a magnet could be counterproductive. Magnets work in two ways, either drawing particles towards them or pushing them away. There is a clear risk that a monetary union will not only have a magnetic effect but the reverse as well.

Cultural diversity - obstacle or prerequisite for a European identity ?

European political oratory often maintains that Europe can only be defined through its unique heritage of diversity and lack of conformity and that, paradoxically, its very diversity has been its unifying principle and strength. However, European linguistic diversity is probably the greatest obstacle standing in the way of the emergence of a European political identity and thus of the European democratic project. While multilingual European democracies certainly exist, the prime example is Switzerland, which has elected to remain outside the European Union. A democracy is non-existent if most of its citizens cannot make themselves understood to each other. Rhetoric apart, not even leading European politicians are today able to socialise with each other without an interpreter, and very few can make themselves understood to a majority of European voters in their own language. Not one European newspaper exists, apart from elitist papers such as "The European". There is no European television programme apart from Eurosport, and most of its viewers watch matches between nations. In short, there is no public European debate, no European political discourse, because the political process is still tied to language. The question of language is basically one of democracy. If only English and French were the official European Union languages, political discussion would be divided between A and B teams, with many excluded because of their lack of linguistic knowledge. At the same time, the problem of interpreting is becoming insurmountable. Over 40% of the European Union administrative budget is already spent on language services. Eleven languages make 110 combinations possible in the translator booths. The addition of another ten Eastern and Central European languages brings this figure to 420, and to 462 if Maltese is added. Some form of functional differentiation will therefore be necessary, making some languages more equal than others, although this would have a negative effect on European public opinion in the small member nations. At present, an average of 66% of European Union citizens are monolingual while 10% speak at least two foreign languages. Ireland is at one extreme, with 80% and 3% respectively, while only 1% of the population in Luxembourg is monolingual and no less than 80% speak at least two foreign languages.
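The interpreting figures quoted above follow a simple rule : with n official languages there are n x (n - 1) ordered source-to-target booth combinations. A minimal sketch of the arithmetic in Python (the language counts are the essay's own ; the function name is illustrative) :

    def booth_combinations(n_languages: int) -> int:
        # Each of the n languages can be interpreted into
        # each of the n - 1 others, so count ordered pairs.
        return n_languages * (n_languages - 1)

    # The essay's scenarios: 11 official languages, 21 after ten
    # Eastern and Central European accessions, 22 with Maltese.
    for n in (11, 21, 22):
        print(n, "languages ->", booth_combinations(n), "combinations")
    # 11 languages -> 110 combinations
    # 21 languages -> 420 combinations
    # 22 languages -> 462 combinations

The 420 and 462 figures in the text match this formula exactly, which is also why 110, and not a higher figure, is the correct count for eleven languages.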
In order to function as Europeans and safeguard our interests, we Swedes must become tolerably fluent in at least one other major European language apart from English. Swedish remains the basis of our cultural heritage and domestic political discussions, but in order to play a constructive part in Europe we must develop into citizens of Luxembourg as far as language is concerned. Consequently, Europe is neither a communication-based nor an experience-based community, to use German expressions. Both factors are indispensable in the development of a collective political identity. This is created by sharing experience, myths and memories, often in contradiction to those held in other collective identities. They are, moreover, often strengthened by comparison with those that are distinctly different. Not just Robert Schumann, Alcide De Gasperi, Jean Monnet and Konrad Adenauer should be counted among the fathers of European integration, but Josef Stalin as well. The Cold War enabled a sense of unity to be mobilised among Western Europeans, but who can play that oppositional role now in order to provide Europeans with a common identity ? The USA is part of the same circle of culture. Japan is of course a homogeneous and different society, but it is too far away and does not constitute any political or military threat. And its economic strength is directed primarily at the USA. There is an inherent danger that Europe will choose to define itself vis-à-vis its surrounding third-world neighbours and that the Mediterranean will become the moat around the European fort. The creation of a pan-European identity risks being accompanied by a cultural exclusion mechanism. The search for a European identity could easily take the form of demarcation against "the others", a policy which leads to a racial cul-de-sac while at the same time the mixing of races continues to rise in Europe. A European identity must therefore be distinctive and all-embracing, differentiate and assimilate at the same time. It is a question of integrating the nations of Europe, with their deeply rooted national and, often, regional identities, and of persuading citizens to feel part of a supra-national community and identity. Can half a continent with 370 million citizens and 11 official languages really be provided with a democratic constitution ? In the ideal scenario for the emergence of a European political union, the European Parliament must first be "de-nationalised", and this assumes a European party system. Secondly, it must have the classic budgetary and legislative powers. The Council of Ministers must be turned into a second chamber and the Commission should be led by a "head of government" appointed by Parliament. National parliaments would consequently lose their functions. They could be transformed into federal parliaments in smaller states, as in Germany, and would thus have the same position vis-à-vis Brussels that they have today. It is easier said than done to abolish the democratic deficit by giving greater powers to the parliament in Strasbourg, because the dilemma of representation versus effectiveness would immediately come to a head. If every parliamentarian represented about 25.000 citizens, as is the case in Sweden, the assembly at Strasbourg, with about 15 member nations, would have to grow to some 15.000 members.
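The representation arithmetic behind this figure is straightforward division ; a minimal sketch, taking the essay's own round numbers (about 370 million citizens, one seat per 25.000 citizens on the Swedish ratio) as given :

    # The essay's figures: ~370 million EU citizens and one
    # parliamentarian per ~25,000 citizens (the Swedish ratio).
    citizens = 370_000_000
    citizens_per_seat = 25_000

    seats_needed = citizens // citizens_per_seat
    print(seats_needed)  # 14800 - roughly the 15.000 quoted above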
If, in the name of effectiveness, the number were reduced to 500, with constituencies of more than a million citizens and everyone guaranteed an equal European vote, Luxembourg would not be represented and Sweden would have at most 13 representatives in the European Parliament. Such a parliament might be capable of functioning, but it could not by any means claim to represent a European electorate. The democratic deficit would continue. Europe as an entity can only be achieved with the help of, and not against, the nations and their special characteristics. European integration will not be completed because of some natural necessity but only if enough political energy is brought to bear. The future of the European Union rests therefore on the common interests of member states and not on the political will of a European people, for the simple reason that such a thing does not exist. Regional and national identities will grow in importance in a world that is becoming ever more difficult to oversee and ever more rapidly changing. Citizens will be living more and more in a state of tension between several loyalties - their home district, state, nation, Europe and the international community - increasingly required to think globally but act locally. New ancient regimes and new regions are emerging everywhere in Europe. By actively supporting the process of regionalisation, Brussels and individual capital cities can show that the European Union is taking its institutions closer to its citizens and thereby creating greater scope for cultural and linguistic diversity than the nation-states have been capable of doing. By contributing to a new vision - the Europe of diversity and regional government based on subsidiarity - the idea of Europe can be made more comprehensible and attractive. In this way, regional identity can strengthen the emergent European identity. Now that regions are increasingly turning to the European Union in their fight for resources for regional development and to attract investment, Brussels and the European Union will come to be seen as better friends of the regions than their national capitals. The nation-state is thus being nibbled at from two directions. At the same time, we will experience a renaissance for nation-states and regions and their gradual merger in a transnational community. Those who support the region and the nation need not necessarily reject Europe, but the traditional nation-state, with its community-based traditions, identity and loyalty, will remain indispensable as a strength and source of political stability. Nation-states are therefore essential in order to legitimise a new European order, but structural asymmetry, conflicting interests and unexpected courses of development will lead to relations between the nation-state and European integration that are difficult to manage and oversee. Europe will therefore continue, even in the future, to be squeezed between what the German philosopher Karl Jaspers called "Balkan and Helvetian tendencies", i.e. between the Yugoslav and Swiss development models. Nations are not given once and for all, but are created. They are what Benedict Anderson called "imagined communities". The idea of a European community cannot arise from the German concept of "Blut und Boden", or from the idea of a European "Volk" or a European "cultural nation".
Nor can the European identity be created through central directives from Brussels or member nations' capital cities, or by being conjured forth at seminars and conferences, but only through the citizens of individual European states knowing that they personally have something to gain from integration, thereby saying yes to the European Union in their daily referendum. As we have already experienced, a forced unifying process produces counter-reactions in all the member countries. A European identity is possible only where there is a community of interests among the citizens. If this is missing or not felt to be sufficiently strong, the European Union will have a democratic deficit irrespective of what new competence is given to the European Parliament. The single market will bring about trans-boundary mobility and thereby, albeit slowly, contribute to the emergence of a European identity, but it will be one of many, relativised by different national and regional identities (such as, for example, Benelux, Ibero-Europe, the Nordic countries). Immigration will strengthen the multicultural component that is indispensable for a new sense of identity. At the same time, it will nourish social tensions and racist and nationalist sentiments, but it can also lead to political mobilisation and the insight that these problems can only be solved at European level. A European 'supra-nationality' will be accepted only where there is no hierarchy of national, regional and supra-regional identities, but where every individual experiences them as self-evident parts of daily life. A policy for preserving diversity will thus be a precondition for creating a European identity that neither should nor would become a replacement for national identity, but which can create support and strength for political institutions that are neither national nor the framework of a European superstate. Questions of cultural policy, education and a historically deep-rooted social system and values must therefore remain the concern of nation-states. It is thus a case of rendering unto the nation-state that which belongs to it, and to the European Union that which is the European Union's : a security and foreign policy structure, the single market, and a common crime, asylum and immigration policy. The hitherto clear links between state and nation will thus grow looser. European integration from this point of view will not mean that a new superstate appears but that power is spread out. Cultural identities will remain rooted at national level but will spread further down to ever more distinctive regional identities. We will have neither a new European superstate nor sovereign nation-states. Nations will not disappear, but we will have nations with less state, and national cultures with softer outer casings. Relations between European and national identities could take the shape of a foreign and security policy in the wide sense serving as the foundation of a common European political identity, a "nation" to which one feels a sense of political belonging without the need to feel part of a European "Volk" or a European "cultural nation". The German concept of a nation would endure at national level, although in its original form as conceived by Johann Gottfried von Herder, in which a nation need not necessarily express itself as a state. By standing on secure and solid cultural ground, every people, with its own distinctive character and cultural achievements, can contribute to an international community.
Cultural nations will thus become divorced from territory. People will have a sense of belonging to a special area and its cultural and political history, but this area need not necessarily be linked to a nation-state with defined territorial boundaries. The European political identity could emerge in this way while leaving the cultural, national or regional identity intact, and European diversity would not only remain in place but grow as well. The democratic deficit can never be abolished unless this kind of development takes place, nor would the project of a European Union be realised.

European identity - a perspective from a Norwegian European, or a European Norwegian
Truls Frogner

Norway is a part of Europe, but not a member of the European Union. We are integrated in many ways, and for practical and economic purposes (the EEA Treaty) we are close to membership. The road to full and political membership is to be found in our visions and roots, both part of our European identity. In this respect, the Norwegian challenge is similar to that of all other Europeans. Since Europe has many countries on its fringe, the approach towards European identity could start from one of them. Even the opponents in Norway said before the referendum in 1994 : "YES to Europe, but no to the Union"... Membership of the European Union can never be more than the means to achieve other and higher goals. Integration as an instrument of cooperation is necessary, yet not sufficient. Institutions should reflect the dreams and needs of the population, and transform them into practical solutions of which they can approve. The forthcoming "Citizen First" campaign may succeed in reminding the people of Europe of what has already been achieved during the four decades since the Rome Treaty was signed. Still, it seems to many people that politics on the European level is something different and remote from national politics at home. And worse, sometimes national voices blame "Brussels" for unpopular measures, without giving credit for the positive impact of European decisions. Does the European Union suffer from a scapegoat syndrome ? There are at least two answers. The first is that it is necessary to normalise European politics. To work for European solutions is part of a general struggle for values and visions on the individual, local, regional and global level. In this perspective Europe is not something special, but the bridge between near and far. Europe is the gate to the big unknown world and the port when coming back. The second answer is to develop a consciousness of our own European identity and the common ground of European values and history. The point is not to cultivate something European which is different from the national, the local or the global, but to compose some ideas, sentiments and values as a platform, as an inspiration, for taking part in facing common challenges. The Norwegian "naysayers" cleverly connected their opposition to the European Union with a fight for positive ideals. But the supporters also fight for higher values, a better society and sustainable development, though they have not yet communicated this message with the same one-sided self-confidence and conviction. Maybe because real Europeans have ambivalent minds ? Belonging to the European Community is often said to be the major reason why the supporters are in favour of membership. It has to do with a cultural and geographical identity, also shared by many Norwegians.
(Remember that rejection may be an indirect affirmation ; as the 5-year-old boy asked his mother : "Do you think God knows we don't believe in him ?") Identity is not free from contradictions. A lot of people are fond of their village or party without accepting all its aspects. The alternative to a poor marriage is not necessarily divorce, but a better marriage. European identity does not exclude criticising the European Union. The next question is always : what is your proposal or alternative ? The dual critical and constructive approach represents the dialectical dynamism of European history - compromise after crises. Safety is related to belonging. It is a positive feeling of security in oneself and with others, in contrast to lacking individual faith and confidence in a greater community. The security in NATO, which almost all Norwegians rely on, is an example of a historical acknowledgement that no nation can or should stand alone to protect peace and prevent war. Only binding international cooperation can offer the security of being treated equally in accordance with common rules, and so avoid occasional infringements. Security in Europe is an idea which pervades our approach to political, economic, social, cultural, environmental and other issues. Solidarity is, according to André Malraux, an intelligent form of egoism. In a European context this means it is in our own interest that outside countries, groups, regions etc. should be helped to develop their human, social and economic resources. We should have learned that too-deep differences create instability with the potential for upheaval, conflict and war. Solidarity in Europe is about taking care of each other across national borders, demonstrated in practice by supranational measures for cohesion. European solidarity includes the rest of the world. The next debate in Norway may illustrate a shift from the last campaign. A possible 'yes' to the European Union next time cannot mean better prospects of economic benefits for a prosperous nation in a Europe enlarged with poorer countries. It would demand an obligation and commitment to higher values, a safer society and sustainable development in a broad Europe. Then, as part of European identity, we find the classical political values. Democracy was invented and developed in Europe, and further developed in America, where the most democratic constitution of the time was established in 1776. Thereafter, new democratic reforms emerged and the idea of government by and for the people spread throughout the world and gradually, or after revolutions, unfolded in a variety of forms, within the framework of the nation-state. But democracy is still not fulfilled anywhere, due to the fact that the idea of democracy is a relative concept, a complex concept and a political concept. The relative concept of democracy implies that it is related to something outside the reach of voters and their representatives. Those who oppose federal, supranational democratic initiatives are without proper answers to the challenges from transnational companies, international capital movements, cross-border pollution and abuses of national sovereignty, for instance nuclear tests, suppression of ethnic groups and aggression against neighbouring states. From this, we realise that identity is closer to interdependence than to self-determination.
Identity is more a social and less an individual phenomenon, but both exist in Europe, where the (im)balance between collective and personal responsibility has been a driving force in society. National independence does not have the same importance and impact as before. Now and in the future, nation-states have to find democratic ways of cooperation which preserve the positive dimensions of independence and limit its negative elements. Paradoxically, the notion of supranationality was also accepted by major parts of the opposition movement in Norway, in spite of their exaggerated belief in national self-determination. They approved of supranational regulation of national independence linked to peace, defence and security matters in the UN and NATO, while at the same time refusing supranational regulation of national independence, with almost the same countries, in the European Union on civilian and political issues ! The Union is not in opposition to the nation. Supranationality is the striving to unfold the potential of nations which they are unable to fulfil within their own borders. A strong Union cannot depend on weak nations. A strong Union strengthens its parts. Identity is not only unity in itself, but also a unity of contradictions. A political Union is how contradictions are bridged, and the arena where different forces can do so. Identity and democracy are both complex concepts, and consist of an inner power balance between different components. Both include dynamic processes. Neither identity nor democracy can stand still. It is a question of living or dying. For a European it is important that each political institution has limited power and is nevertheless capable of achieving political goals, while simultaneously securing an appropriate balance among representatives of the Member States and the people of Europe. Democratic cooperation among many countries, some hundred parties and 280 million voters is more complicated. However, democracy need not depend only on small-scale communities in order to function. Large-scale democracy becomes increasingly important if nearby political bodies are not to become local theatre. On the other hand, distant democracy presupposes information and dialogue, transparency and control mechanisms, in order to avoid the danger of its living a life of its own. Nothing is more fitted to stimulate attention to a distant political structure than conflicts stemming from disagreement on how to solve the real problems of today and the future. From this fact follows a need to abolish unanimity and expand qualified majority voting (QMV). This will not be in contradiction to the need for consensus and respect for vital national interests. The European Union has, in place of final goals, some common visions of peace, prosperity, social cohesion and partnership with nature. The Union is nurtured by the struggle between, and the contributions from, various interest groups working for their visions. Europe is indivisibly connected to its cultural, Christian, humanistic, scientific, social and professional values - the identity of Europe in our hearts and minds. Europe has a magnificent heritage of art and science, architecture and philosophy, and an abundance of ideas and religious schools within a system of tolerance and legal protection, which make our continent attractive, exciting and challenging.
Without suppressing the memory of the tragedies and catastrophes Europe has brought upon herself and other continents, it should be permitted to remember that cultural and political ideas have conquered, and will continue to overcome, prejudice, xenophobia, racism and other discriminating and suppressive powers. And without degrading anyone, it is also convenient not to forget that the Nordic and European model of cooperation and conflict-solving in the labour market is advanced from a global point of view. Of course, there will always be nuances between various interest groups concerning the balance between politics and market, labour and capital, public and private sector, tradition and modernisation, men and women etc. But nobody should claim their interests to be superior to those of others or suppress fundamental democratic and human rights. A European House should be built on pluralism and equality, as the European wants for him- or herself. And as we are changing and enlarging this house, we strive for the good life today and tomorrow. European identity must be found in something we already know. Identity is recognition. To be a European is coming home to my own house.

European identity - an anthropological approach
Maryon McDonald

Questions about European identity and about the future symbolic and practical content of 'Europe' are questions about the meaning of Europe : what does Europe mean, and what could it mean, to those who are its citizens ? Questions or worries of this kind were not paramount when the EEC began. Between the late 1960s and the present day, however, questions of 'legitimacy' and 'identity' have come increasingly to the fore.

Legitimacy

There have been two principal periods during which questions of legitimacy have been raised. First of all, concerns were voiced in the late 1960s - a period when it was first noticed that the original, self-evident legitimacy of the Community, defined against a past of war, was losing relevance to a new generation. Amidst demographic changes, increased studentification and the re-invention of the category of 'youth', a new 'generation' was self-consciously establishing itself in contradistinction to its parents. Old certainties such as modernisation, progress, reason and positivism, many of which had informed the EEC project, were put in question. This was a time when cultural diversity was invented, a time of civil rights marches in the US, a time of decolonisation and counter-cultures, a time when the alternative worlds of regionalism, particularism and relativism appealed. The world was de-naturalised, and the 'West' was re-invented as a category that the young might affect to despise. For this new generation, 'Europe', far from being the triumph of civilisation over irrationality, tyranny and violence, easily slipped into synonymy with this new 'West' to become another metaphor for post-imperial castigation and self-castigation, or one from which the authenticity and difference of alternative realities might be measured. The response of the EEC at this time was to try to draw young people, against the prevailing current, back into the 'European' fold through youth programmes, largely exchange schemes, and then much later on through the active 'conscientisation' programmes of the 'People's Europe' project. The structural funds also developed, partly in response to the economism of the EEC. The second period, which launched new worries about legitimacy, has come about since the launch of the Internal Market.
This unprecedented flurry of perceived 'interference' from Brussels (however sound the original intention), with more directives in a shorter time than ever before, was bolstered and coloured by two other sets of events. On the one hand, the Berlin Wall fell, and many old certainties fell afresh with it. On the other hand, the Maastricht Treaty was negotiated and seemed to threaten national identities in a context in which, with the Internal Market, Brussels 'interference' already appeared as established fact. Going beyond nationalism had once seemed morally right in the years after the Second World War, but now this was widely perceived as a moral and political threat. Not surprisingly, referenda results sent any certainties still surviving in Brussels diving for cover.

Identity

The 'People's Europe' project of the 1980s enlisted the old package of XIXth century nationalism to try to re-create Europe and European identity - to make people feel European. But this old package is heavy with problems. Firstly, the package that nationalism used to invent nations, a package of language-culture-history-people-territory, is not available in all its elements to Europe. Europe cannot easily construct itself, or be imagined, through this package, therefore, and be convincing. It will also seem to be competing with nation-states. Secondly, the time span for the construction of European identity has been relatively short (mere decades where some nations have had two hundred years) and the construction process highly visible. Where the nation may feel 'natural', Europe is inevitably going to feel 'artificial'. And for those from national backgrounds which lack a historiography of self-conscious construction of the nation (such as Britain and Denmark, for example), some aspects of the self-conscious construction of Europe easily appear to be little more than propaganda. Thirdly, the old package for identity construction was born of certainties that no longer pertain in a world of diversity and relativism. Europe is now often more easily identified with a capacity to question apparent certainties rather than with the old certainties themselves. Fourthly, the old package assumes identity to be monolithic and culture to be a homogeneous, clearly bounded entity. However, identity is contextual and relational - and self and other, or sameness and difference, are constructed relationally in the context of daily imaginings and encounters. And fifthly, it is easy to lapse back into the full racial force of this old package - with the boundaries of Europe unrelativised and read as the boundaries of ethnic flesh. The freedom of movement of 'persons' is then rightly confronted with more uneasy reflection on the definition of a 'person'.

History

History was an important element in the nationalism package, and many histories of Europe have been encouraged as part of the 'People's Europe' project - apparently in the hope of appropriating the tool of history for the creation of European identity. However, we might say that there could, within current models of historiography, be two main ways of writing the history of Europe. Firstly, there would be the old, historicist model, in which Europe might be assumed to exist from Ancient Greece, say, up to the contemporary European Union. This is the historiography that nationalism used and that the histories of Europe now tend to use also.
All the ethnological bric-a-brac of the classical world becomes virtual flag-wavers before the Berlaymont, and contemporary ideals are read back into the classical world and onwards to the present day. This historicism, which worked for nationalism, is the style of the vast majority of officially sanctioned 'History of Europe' texts (whether sanctioned by the Council of Europe or by EC funding). In this history, a continuous litany of features deemed to be inherent to Europe is paraded : this would include Christianity, democracy, citizens' rights and the rule of law, for example. This litany was especially important when it was first constructed, after the Second World War and then during the Cold War, in opposition to the East, but its appeal is not always self-evident now. The second kind of history of Europe would involve a history of the category of Europe. If we were to trace the history of the category of 'Europe' from, say, Ancient Greece to the present, we would find 'Europe' travelling through different conceptual systems, finding new meanings, becoming a different reality as it did so. The geographical boundaries expand and contract, the salient conceptual relations change, the moral and political frontiers and content shift considerably, and Europe is invented and re-invented accordingly. This is the kind of historiography that postmodernism would readily encourage, and it is one in which - unlike in the historiography of nationalism - the simple clarity of being on the right side of history is ideally and deliberately lacking. Moreover, this historiography would not allow any simple continuity to be read back into the past - whether of territory, culture or ethnic flesh, for example. The advent of postmodernism does not mean that we have to throw out the old history altogether. We can put certain key aspects of these two kinds of history together in a productive way. Elements from the old historiography which gave Europe its moral and political content - such as democracy, citizens' rights and the rule of law, for instance - can become important elements in a new understanding of both 'Europe' and identity as relative or relational. Without lapsing into any old historicism, such historical elements - or any one of them - can simply be drawn on or cited as the occasion or context demands. In other words, history becomes self-consciously part of the present, and the history of Europe is no longer historicist litany but part of our critical self-awareness. History is then an awareness of the changing and discontinuous contexts in which 'Europe' has been created in the past, and offers elements in the present that we might now choose to assume relationally in order to assert things 'European'.

Europe in action

If identity is constructed relationally, the clearest identity is in conceptual opposition. You know most clearly who you are through what you are not. It is relatively easy to feel 'European' when visiting Japan, for example. External relations might seem the obvious area in which a European identity can be constructed and expressed. However, this is also an area in which national identities are deeply embedded. Nation-states have in many ways been defined by their external relations, and Europe does not have the now dubious advantages of war and empire, or of clear external threat, to help to define itself.
It is perhaps readily understood that international linking systems help to avoid old fault-lines reappearing, but steps towards some notion of European representation in this area, or towards more fundamental institutional reform, have to carry with them the same critical self-awareness: there is no better way to re-create and re-invigorate national identities and differences internally than to be seen to impose decisions from outside ('Brussels').

For most purposes, we are now in a Europe that can, in an important sense, be more relaxed about its identity. The stuff of a European identity is available in the policies and issues which the EU (whether all of it or part of it) creates: in environmental questions, in equal opportunities, in the market (where it most obviously both follows and creates globalisation), in the social arena, in Trans-European networks, in food and health, and so on. Many of these areas have been re-thought (equal opportunities is no longer about the 'women's rights' of the 1970s, for instance, but about issues such as gender and the new family), and others still await re-juggling and re-thinking. Any one area of policy can, for better or worse, contextually enlist people to a 'European' self-consciousness (as we have seen recently in the BSE scare, with different sections of the British population suddenly calling for European compensation and solidarity). It is in its policies, in practice, that European identification comes alive.

No one is 'European' all the time, just as no one is Spanish, Portuguese, British or French, and so on, all the time. There are moments when being a father, being a businesswoman, being a tennis-player, or being from Coimbra are the salient identifications, and these identities would normally occupy much of one's waking life. The overarching ambitions of an older European-identity-construction-kit do not take this into account. Just as the certainties once inherent in the symbolism and narratives of large political parties are having to change and even give way to single-issue politics, so a post-federal Europe has to look for recruitment through the contexts of issues and practice.

So, Europe exists. 'Europe' and 'European' exist as categories, people are contextually recruited into them, and there have been many successes of identification. Europe exists in action - in the contextual identification of people with specific policy-areas. Bargaining and compromise are acknowledged as the means to achieve desired policies internally, and the achievement of desired policies makes people feel better about being European, and more ready to compromise elsewhere. And so on. Europe, for many, is not a project, and the old narratives can be alienating. The future symbolic content of European identity resides in practice and action - requiring carefully re-thought policies, and the very European capacities for questioning and reflection, for self-criticism, and now the acceptance - without any naive federal model of a Europe des ethnies and without cultural fundamentalism - of diversity both at home and elsewhere.

European identity and citizenship
Massimo La Torre

I do not intend here to deal, even tangentially, with the questions of God, the meaning of human life, transcendence, or universalism. What we are concerned with - if I am not wrong - is not "identity" in metaphysical terms, nor in anthropological or merely cultural terms.
Nor - I must confess - do I think that Europe without further qualifications is a useful category for political thought. The question "what is European identity?" is, in any case, a trifle too broad and vague to find an appropriate answer. I assume that what interests and intrigues us is that identity which is relevant and needed for the construction of a political community at the European level. The identity to which I shall refer will therefore be that which derives from, or is equivalent to, membership in a polity.

It has been said that an identity can be built either from above or from below. This is, I think, quite correct. But I have some problems if one starts identifying top-down procedures with legal measures of whatever kind, with law, and bottom-up democratic mechanisms with historical processes. Now, I think that the opposite is often the case, i.e. that history has an authoritarian character and law a libertarian one. History, if seen as a collective process, something given by an intrinsic, immanent "telos" of human events, excludes the reflective intervention of individuals in the direction of their social life. Destiny, even if shared in a community, is never democratic. On the other hand, law is not necessarily a sum of authoritarian or repressive provisions. Law is conventional, whatever legal doctrine says or affirms about it; it is made by reflective and more or less explicit processes: as a matter of fact, a custom becomes a legal practice only when it is contested and is reaffirmed either by collective majoritarian behaviour or by judicial decisions. Law should be contestable in order to direct human conduct. But if the law is made, the real question will be whether it is made by the one, the few, or the many. We are then called to choose the system of law we prefer. If we are liberal-minded, we shall certainly have to opt for rule by the many, so that the law will no longer be authoritarian - that is, elitist, the artefact of the one or of the few - but will become the solid pillar of a democratic polity. I therefore dare to suggest that there is no political identity from below without democratic law.

Once the question of identity is reformulated in terms of political identity, that is, in terms of membership in a European polity, the main problem for us will be that of a European citizenship. In fact, it is citizenship that marks political belonging, membership in a polity.

European citizenship and democracy

I would like to argue for a strong concept of European citizenship. This is fully justified from an internal legal point of view, since article B of the Maastricht Treaty holds as one of the main purposes of the Union "to strengthen the protection of the rights and interests of the nationals of its member States through the introduction of a citizenship of the Union". We may also recall a decision taken by the European Court of Justice in Commission v. Council (May 30, 1989), confirming the full legality of the Erasmus Programme, which is there justified with reference to the "objectifs généraux de la Communauté, tels que la réalisation d'une Europe des citoyens" (the general objectives of the Community, such as the realisation of a citizens' Europe). A strong concept of European citizenship, characterised by a wide and rich range of rights ascribed through it and independent of national citizenships, could powerfully contribute to solving, at least partly but nevertheless effectively, the democratic deficiencies of the European Union.
A democracy is not only a representative or parliamentary political regime, but also, and above all, an association of equal citizens who are defined as such directly, that is, without reference to intermediate social and political groups; democracy is constituted not only, or even mainly, by majority rule applied to political decisions, but eminently by the existence of a public domain of free discussion. But in order to have this, some requirements have to be satisfied: first of all, a feeling and a sphere of common concern. One could and should decide on matters which affect more or less directly one's own life. Autonomy, which is an ideal principle presupposed by democracy, and expanded by it into a collective practice, makes sense only if it is exercised within the individual's scope of interests and action. Beyond this scope there is no right of autonomy; even worse, autonomy, as individual decision and action, can be transformed into its opposite: heteronomy, the disruption of others' private sphere and life plans. This holds a fortiori for an extension of the principle to collective entities, that is, for democracy. A democratic decision cannot go beyond the area of interests which are at stake within a specific scope of (collective) action, that is, beyond the area constituted by those individuals who are the holders of the right of democratic decision. Now, citizenship as membership in a body politic, even if conceived only in formal legal terms, can contribute to creating the idea of a common concern - the concern which is common to persons who bear the same legal and political status.

To have a public sphere of discussion, another requisite should be fulfilled: that of having procedures which allow a fair discussion. But in order to have a fair public discussion we need to assume that people entering into that discussion share at least a few "thin" principles: contra negantem principia non est disputandum (there is no arguing with one who denies first principles).34 We need to assume that people reciprocally recognise the autonomy (the possibility of rational and independent action, in this case discussion itself) and therefore the sincerity and dignity of their opponents or fellow discussants. We should thus assume that in a public discussion discussants have equal rights.35 Citizenship (and European citizenship is no exception) is just the sum of rights which allow subjects to take part in political deliberation and to discuss in order to arrive at a reasonable and well-pondered decision.

34 See A. SCHOPENHAUER, Die Kunst, Recht zu behalten. In achtunddreißig Kunstgriffen dargestellt, ed. by F. Volpi, Insel, Frankfurt am Main 1995, p. 38.
35 See R. ALEXY, Theorie der juristischen Argumentation, 2nd ed., Suhrkamp, Frankfurt am Main 1991, pp. 238 ff.

This can mean that, in order to promote democratic progress in a society, we can first create statuses granting common and equal rights among its members, and then proceed to find a viable institutional device to render visible and effective the public discourse which has started with the ascription of those statuses. In terms of the present political and institutional situation in the European Union, we can therefore plausibly believe that we can have European democratic citizens even before having, at the supranational level, institutions endowed with effective powers of political direction and governed by democratic procedures.
If we have a European citizenship as an independent status granting rights such as political rights (rights to vote and to be elected) at both the supranational and infranational level (see articles 8b and 8c of the Treaty on European Union), or the right not to be discriminated against as an alien vis-à-vis nationals (see article 6 of the Maastricht Treaty), or freedom of movement to and through any member State and freedom of residence in them (see article 8a), then, even if the European Parliament is not a fully developed democratic institution (because of the limited range of its current powers), we shall have a society of democratic citizens which will represent a better condition for developing democratic decision-making at the supranational level. Of course, for this purpose the rights which we have mentioned should be fully deployed in all their potentiality, and break the limitations which articles 8a-8c still impose upon them. When democratic institutions are deficient, democracy can also be developed through democratic citizens. In particular, in the European Union, whose member States are in fact all democratic regimes, what is fundamental is not to maintain a nationalist or ethnic view of democracy. We need a free sphere of public concern and the sense of participating in a fair cooperative scheme. A stronger and richer concept of European citizenship can be extremely helpful in this direction.

Citizenship and 'demos'

"Es gibt keine Demokratie ohne Demos" (there is no democracy without a demos), says Josef Isensee, a well-known German constitutional lawyer,36 whereby he means that democracy is built upon a collective subject pre-existing it, endowed with an intense life of its own - a people seen as a homogeneous cultural and ethnic body. Starting from this premise, the German lawyer then draws the conclusion that there is no possible basis of legitimisation for a European democracy (that is, for the European Union), since there is no European "demos", that is, no European folk. It may also be remembered that the same author successfully fought against the introduction, in the Freie und Hansestadt Hamburg, of an aliens' right to vote in elections to district councils endowed with admittedly meagre competencies, with the argument that State officials and representative bodies (at whatever level and of whatever size) enjoy democratic legitimisation if and only if they receive their mandate from the "People" in its entirety, that is, from the "German People".

36 J. ISENSEE, Europa - die politische Erfindung eines Erdteils, in Europa als politische Idee und als rechtliche Form, ed. by J. Isensee, Duncker & Humblot, Berlin 1993, p. 133. For a more sophisticated but in its core quite similar view, cf. D. GRIMM, Does Europe Need a Constitution?, in "European Law Journal", 1995, p. 295: "Here, then, is the biggest obstacle to Europeanisation of the political substructure, on which the functioning of a democratic system and the performance of a parliament depends: language". According to Grimm, the European Parliament, even reformed and fully empowered as a legislative assembly, could not be considered a European popular representative body, "since there is as yet no European people" (ibid., p. 293).
The German Federal Constitutional Court unfortunately accepted Isensee's argument,37 thus reformulating the concept of "people" mentioned in article 20 of the Grundgesetz ("Alle Staatsgewalt geht vom Volke aus" - all state power emanates from the people) into that of the German people,38 and twisting this into an ethnically defined community of fate39 which has constitutional relevance even before the drafting of the constitution itself. Democracy, says the German Court, should not be seen as "freie Selbstbestimmung aller", free self-determination of all (as was formerly held by the Court itself),40 but as a power which derives from a unique and unitary entity whose individual members as such have no constitutional right of participation in collective political decisions; they can exercise democratic self-determination only jointly, only if considered as an indivisible group.41 The idea that democracy means the right of the people (in the plural) concerned by the laws to contribute to their deliberation and enactment is dismissed.42

Now, this is indeed a peculiar concept of democracy. It is based on a romantic idea of "people" or "nation", which represented a reaction against the originally liberal concept of democracy, based on two basic pillars: individuality and public reason.43 In the romantic protest against liberal democracy, the very concept of political representation is deeply modified: representation is no longer the expression of the concrete will of concrete individuals, but rather the expression of the existence of a community. In this second sense of representation, connected with a people idealised as a compact, tight and uniform ethnic entity, which has been cherished by "democrats" such as Carl Schmitt,44 even a dictator can "represent" a community, and in the end even a dictatorship may legitimately be considered a... democracy. If, to have democracy, what is required is on the one side a folk and on the other a special existential (ethnic) link between the folk and its leaders (this being the proper Repräsentation of the folk), then it is not at all contradictory to have an authoritarian and even a totalitarian leader and nevertheless "democracy".45

37 "Das Volk, welches das Grundgesetz als Legitimations- und Kreationssubjekt der verfaßten Staatlichkeit bestimme, sei das deutsche Volk" (BVerfGE 83, 60 [65]).
38 See also BVerfGE 83, 37: "Das Staatsvolk, von dem die Staatsgewalt in der Bundesrepublik Deutschland ausgeht, wird nach dem Grundgesetz von den Deutschen, also den deutschen Staatsangehörigen und den ihnen nach Art. 116 Abs. 1 GG gleichgestellten Personen, gebildet".
39 Cf. BVerfGE 83, 37 [40]: "Das Bild des Staatsvolkes, das dem Staatsangehörigkeitsrecht zugrunde liege, sei die politische Schicksalsgemeinschaft, in welche die einzelnen Bürger eingebunden seien. Ihre Solidarhaftung und ihre Verstrickung in das Schicksal ihres Heimatstaates, der sie nicht entrinnen könnten, seien auch Rechtfertigung dafür, das Wahlrecht den Staatsangehörigen vorzubehalten" (italics mine).
40 See, for instance, BVerfGE 44, 125 [142].
41 "Das demokratische Prinzip läßt es nicht beliebig zu, anstelle des Gesamtstaatsvolkes jeweils einer durch örtlichen Bezug verbundenen, gesetzlich gebildeten kleineren Gesamtheit von Staatsbürgern Legitimationskraft zuzuerkennen" (BVerfGE 83, 60).
42 See BVerfGE 83, 60 [72]. See also BVerfGE 83, 37 [42].
43 Cf. D. GAUTHIER, Public Reason, in "Social Philosophy and Policy", 1995, pp. 19 ff.
44 See C. SCHMITT, Verfassungslehre, 3rd ed., Duncker & Humblot, Berlin 1957, p.
209 : "Repr?sentation ist kein normativer Vorgang, kein Verfahren und keine Prozedur, sondern etwas Existentielles. Repr?sentation hei?t, ein unsichtbares Sein durch ein ?ffentlich anwesendes Sein sichtbar machen und vergegenw?rtigen" (emphasis in original). 45 "According to this view, democracy and dictatorship are not essentially antagonistic ; rather, dictatorship is a kind of democracy if the dictator successfully claims to incarnate the identity of people" (U. K. PREUSS, Constitutional Powermaking for the New Polity : Some Deliberations on the Relations Between Constituent Power and the Constitution, in Constitutionalism, Identity, Difference, Indeed, in a democracy the people is not given by a "authentic" demos, but by its citizens, that is, by those individuals who publicly share a common concern and adhere to the fundamental principles by which the democracy defines and builds itself. In a democratic perspective "people is rather only a summary formula for human beings".46 As a matter of fact, there is no "demos" without democracy, that is, without individuals who recognise each other rights and duties. A people in political and legal terms (a "demos") is a normative product : "populus dicitur a polis" o wrote Baldus de Ubaldis in the XIVth century ; it is not there to be found before one starts the difficult enterprise of building up a polity. A people in political and legal terms is the outcome of political and legal institutions : it christalises around them ("civitas sibi faciat civem" o said Baldus' master, the great Bartolus de Sassoferrato). A people in democratic terms, a demos, the people of a democratic polity, makes thus itself in as far as it aggregates along the rules of democracy. We can recall a famous phrase of Kant where he defines a constitution as the act of general will whereby a multitude becomes a people ("den Akt des allgemeinen Willens, wodurch die Menge ein Volk wird").47 The story going on between people and democracy is more or less the same as the one of the egg and the chicken. Which came first : the egg or the chicken, demos or democracy ? Now, as far as the latter pair is concerned, we can confidently solve the enigma : they were just born together ! In short, es gibt kein Demos ohne Demokratie. This is another reason, and a fundamental one, why European citizenship is so important : because it is a stone, and a founding one, in the building of a European democracy. Democracy needs at least two poles : decision-making authorities and citizens towards whom those authorities are called to account for their decisions and the corresponding behaviour. If we have democratic citizens, persons endowed with a rich patrimony of rights, we should then have democratic political authorities. If we have democratic citizens, we already have a demos. And to have citizens in legal and political terms is only a question of common rights and duties. In the organic view of democracy, we are confronted with a dangerous confusion of the notion of public opinion with that of ethnical and cultural homogeneity. This confusion unfortunately seems to be perpetuated in the "Maastricht Urteil" by the German Federal Constitutional Court. 
"Demokratie, soll sie nicht lediglich formales Zurechnungsprinzip bleiben, ist vom Vorhandensein bestimmter vorrechtlicher Voraussetzungen abh?ngig, wie einer st?ndigen freien Auseinandersetzung zwischen sich begegnenden sozialen Kr?ften, Interessen und Ideen, in der sich auch politische Ziele kl?ren und wandeln und aus der heraus eine ?ffentliche Meinung den politischen Willen verformt. Dazu geh?rt auch, da? die Entscheidungsverfahren der Hoheitsgewalt aus?benden Organe und die jeweils verfolgten politischen Zielvorstellungen allgemein sichtbar und verstehbar sind, und ebenso da? der and Legitimacy. Theoretical Perspectives, ed. by M. Rosenfeld, Duke University Press, Durham and London 1994, p. 155). 46 B.O. BRYDE, Die bundesrepublikanische Volksdemokratie als Irrweg der Demokratietheorie, in "Staatswissenschaften und Staatspraxis", 1994, p. 322. 47 I. KANT, Zum Ewigen Frieden. Ein philosophischer Entwurf, in ID., Kleinere Schriften zur Geschichtsphilosophie, Ethik und Politik, ed. by K. Vorl?nder, Meiner, Hamburg 1959, p. 128. wahlberechtigste B?rger mit der Hoheitsgewalt, der er unterworfen ist, in seiner Sprache kommunizieren kann".48 I find it correct to affirm that democracy, in the sense of majority rule, presupposes some fundamental pre-legal conditions as much as some fundamental normative (moral and political) principles, a vigorous and open public discussion and an influential public opinion. Democracy as a political institution needs, in other words, a civil society. But first, a civil society does not necessarily need to coincide with some Schicksalgemeinschaft, a homogeneous ethnical and linguistic community. (Suggestively enough when the German Court tries to establish a clear-cut separation between national citizenship and European citizenship does not find anything better than making recourse to their different level of existential tightness : "Mit der durch den Vertrag von Maastricht begr?ndeten Unionsb?rgerschaft wird zwischen den Staatsangeh?rigen der Mitgliedstaaten ein auf Dauer angelegtes rechtliches Band gekn?pft, das zwar nicht eine der gemeinsamen Zugeh?rigkeit zu einem Staat vergleichbare Dichte besitzt").49 And, second, a civil society becomes a "people", in the sense of the sum of a polity citizens, only by interacting with constitutional rules and institutions. This point is clearly expressed in the following statement by Ulrich Preuss : "Neither pre-political feelings of commonness o like descent, ethnicity, language, race o nor representative institutions as such are able to a create a polity, be it a nation-state, a multinational state or a supranational entity. Rather, what is required is a dynamic process in which the will to form a polity is shaped and supported through institutions which in their turn symbolise and foster the idea of such a polity".50 Sure, a common language among citizens and between civil society and political institutions is needed in order to have public discussion and thus public reason. However, a common language can be a conventional or an artificial one. To be citizens, individuals should be able to communicate with political authorities : they should be able to understand each other. But this does not imply at all that to this purpose individuals should use their own mother tongue. Any other language will do, provided it is common to the parties. It may be the case that in the European Union, we do not still have such a common language. Nonetheless, such a language can be found. 
We can think of a lingua franca emerging in the ongoing process of European integration, or of a network of various national or regional languages, each employed at a different level and for a certain occasion, but allowing a continuous flux of information.51 Moreover, the common language does not need to be the same on every occasion. We could perhaps apply a kind of subsidiarity principle to the use of the different languages, choosing one or the other according to the context, the dimensions of the issue at stake and the people (and the languages) concerned.

48 BVerfGE 89, 155 [185], italics mine. For a powerful criticism of the constitutional Weltanschauung of the German court as expressed in this decision, see J. H. H. WEILER, Does Europe Need a Constitution? Reflections on Demos, Telos, and the German Maastricht Decision, in "European Law Journal", 1995, pp. 219 ff.
49 BVerfGE 89, 155 [184]. Italics mine.
50 U.K. PREUSS, Problems of a Concept of European Citizenship, in "European Law Journal", 1995, pp. 277-278. Italics in the text.
51 See what Jürgen Habermas opposes to Dieter Grimm's defence of cultural homogeneity as legitimation for democracy (J. HABERMAS, Comment on the paper by Dieter Grimm: 'Does Europe Need a Constitution?', in "European Law Journal", 1995, pp. 303 ff.).

"Zweitens" - as was pointed out by Edmund Bernatzik, a leading public lawyer of Austria Felix - "kann man ja eine fremde Sprache lernen" (secondly, one can after all learn a foreign language).52 In any case, successful European experiences such as the Erasmus Programme or the European University Institute in Florence (a university being an institution for which communication is of the utmost relevance) show that it is possible at least to have a European university even without a European folk. Europe admittedly is not a nation, nor are European citizens as such. It is perhaps high time that the one (Europe) and the others (European citizens) combine their plans, leaving the nation to its old-fashioned nightmares of blood and soil.

Belonging to a European polity

I am not so much concerned about the sociological evidence supporting the romantic thesis according to which peoples and nations are homogeneous ethnic and cultural entities. My stance towards this thesis is quite radical. Should it be true, should nations be Volksgemeinschaften, that would still not be a ground of legitimisation for a genuine democratic polity. Since democracy is based on intersubjective discourses and representation, any process which would work without explicit reference to individual and interindividual will formation would not be appropriate to offer any democratic legitimisation to a polity. The demos of democracy certainly is not ethnos. Yet, in order to defeat such foolish resistance, we might recall a historical fact: that in most cases the so-called Schicksalsgemeinschaft is the outcome, an artificial product, of the State or of other reflective political processes.53 This was recognised in 1933 by Hermann Heller - himself a strong defender of nations as Schicksalsgemeinschaften (and therefore quoted in the "Maastricht Urteil")54 - when he was confronted with the rise of the Nazi regime. "Weder das Volk noch die Nation dürfen als die gleichsam natürliche Einheit angesehen werden, die der staatlichen Einheit vorgegeben wäre und sie selbsttätig konstituierte. Oft genug war es [...] umgekehrt die staatliche Einheit, welche die "natürliche" Einheit des Volkes und der Nation erst gezüchtet hat.
Der Staat ist mit seinen Machtmitteln durchaus im Stande, selbst aus sprachlich und anthropologisch verschiedenen Völkern ein einziges zu machen."55 Peoples in the cultural sense, in some cases at least, are not prior but posterior to the State's (sometimes brutal) intervention. The "ethnic" homogeneity of Pale in Bosnia could never be claimed as the outcome of an organic process of communitarian growth.

52 E. BERNATZIK, Die Ausgestaltung des Nationalgefühls im 19. Jahrhundert, in ID., Die Ausgestaltung des Nationalgefühls im 19. Jahrhundert - Rechtsstaat und Kulturstaat. Zwei Vorträge gehalten in der Vereinigung für staatswissenschaftliche Fortbildung in Köln im April 1912, Helwingsche Verlagsbuchhandlung, Hanover 1912, p. 27.
53 Cf. what is said by Oswald Spengler, an author certainly not to be suspected of any "abstract", "formal", "thin", universalist liberal political views: "Die "Muttersprache" ist bereits ein Produkt dynastischer Geschichte. Ohne die Capetinger würde es keine französische Sprache geben [...]; die italienische Schriftsprache ist ein Verdienst der deutschen Kaiser, vor allem Friedrichs II. Die modernen Nationen sind zunächst die Bevölkerungen alter dynastischer Gebiete" (O. SPENGLER, Der Untergang des Abendlandes. Umrisse einer Morphologie der Weltgeschichte, DTV, München 1986, p. 779).
54 See BVerfGE 89, 155 [186]. Cf. the sharp critical comments by Brun-Otto Bryde (B.O. BRYDE, op. cit., p. 326, note 37).
55 H. HELLER, Staatslehre, 6th rev. ed., ed. by G. Niemeyer, Mohr, Tübingen 1983, p. 186.

On the other hand, as far as a European "demos" is concerned, we might affirm that, in spite of the lack of one (and only one) common language, there is something like a common European cultural identity. A common history, common tragedies and sufferings, common values, common "myths" - if you like56 - have made of the French, the Italians, the Germans, etc., a common "people". Though a Sicilian may show some perplexity in front of a guy dressed in leather trousers and a feathered hat drinking litres of beer, she will still identify him as a European like her, with more things uniting than dividing them.

In a democracy, to be a citizen, to develop a sense of belonging to a democratic polity, one should overcome one's own rooting in unreflective communities and be for a moment naked, a mere human being. Moving from this nakedness, one can then freely decide whether and how one wishes to cooperate. Only from this nowhere will persons be able to build up fair terms of cooperation, since in that hypothetical condition there will be no room for discriminatory grounds. Democracy, as a polity of equals, presupposes a kind of "transcendental" nakedness: "Democracy is a system of government according to which every member of society is considered as a man, and nothing more".57 European identity, meant as membership in a European polity, can only be the outcome of a reflective adhesion to an institutional body governed by democratic rules and offering a rich, comprehensive set of rights. Thus, the European identity we are in search of passes through the consolidation of a meaningful European citizenship.

56 Cf. F. CHABOD, Storia dell'idea d'Europa, Laterza, Bari 1995.
57 W. GODWIN, Enquiry Concerning Political Justice and Its Influence on Morals and Happiness, ed. by I. Kramnick, Penguin, Harmondsworth 1976, p. 486.
From poetic citizenship to European citizenship
Claire Lejeune

If I dwell from the outset on the fact that this reflection proceeds from motives peculiar to a woman, and a poet moreover, it is precisely because the citizenship of women and that of poets has been, at the very least since Plato, an object of exclusion. This is to say that both hold, on European culture and identity - or simply on identity - the other view, a different discourse which has difficulty making itself heard in public debates. And yet, without it, there is no possible democratic dialogue.

Should we not begin by calling into question the very sense of the word "culture"? The globalisation of the market economy has made culture a commodity, an object of production and consumption. To revitalise it is to give it back its function of well-thought-out action. In the situation of rupture that we are living through at the end of the XXth century, cultivating one's mind means not only enriching one's knowledge of the heritage (patrimony) so as to be able to enjoy it and transmit enjoyment of it, but chiefly becoming capable of generating a societal project, of giving birth to the future; literally, delivering our mentality of the XXIst century that it bears so painfully but which it is, however, alone in bearing.

I am not of those who believe that "History has more imagination than men": to trust history to invent the future is necessarily to go in its direction, to let oneself drift from upstream to downstream - in other words, to leave it to its fatality, its determinism - whereas any creation supposes that thought resists the force of the mainstream, that it climbs back against the tide of the course of History, that it thinks itself from downstream to upstream, that it returns to the sources of patriarchal History - not nostalgically, to re-immerse itself in it and find there the ideal original purity, but with the ethical intention of bringing to light the foundations of this fratricidal civilisation whose endless agony we are living through.

A desire for Europe

All those, men and women, who are set wondering about the future of the planet by the collapse of socio-political systems and the sudden growth of fundamentalist and nationalistic perils agree that this future will not happen without our mentality and behaviour undergoing genuine transformations. We know that the only power capable of undermining and undoing - from the inside - the supreme reign of money can only be born from the intensive development of conscience, which lags frightfully behind that of science and technology; in other words, from making each citizen aware of his or her responsibilities. That said, what is left is to take this awareness in hand, to create and organise that network of resistance to generalised mercantilism which culture will necessarily have to be in the XXIst century.

European citizenship does not exist when it is legitimised by a Treaty only, when it has no other body than the lifeless body of law. For it to become lively and active it must be desired; it must be rooted in memory's emotional depths, where desire reigns. Creative citizenship can no more do without the order that comes from law than without the energy that comes from desire: it falls to our daily aptitude to embody dynamically the conflictual relationship that the logic of reason and that of passion keep alive in us.
From the beginning of our reflection, lucidity obliges us to recognise that if Europe does not lack a body that makes law, it generally lacks the desire that makes meaning - in other words passion, the emotional motivation that it needs in order to build itself. Without wanting to psychoanalyse our relationship with Europe, we shall have, for it to come to life, to feel it, to think it out in terms of feeling, and this feeling will have to find the words to spread itself. Between the murderous hatred of Europe to which nationalism testifies and the platonic love which inspires its bigots, it is a matter of qualifying, embodying, humanising this European citizenship which is as yet only an indispensable fiction. The question is that of the existence in us - real or virtual, latent or revealed - of a "desire for Europe" (I say "a desire for Europe" as one would say "a desire for a child") with which the legislator has hardly been concerned up to now.

If this desire exists, what does it correspond to in our imagination? Does there exist, among the strata of the individual unconscious, a European unconscious, as one says that there exists an African unconscious? Does an initiation into European citizenship necessarily come through discovering the places of this unconscious, through a recalling of the European mythological sources (Judeo-Christian, Greco-Roman, Celtic, Germanic)? We know today that myths are, in the memory, where high human energy is focused - the complex and very real places of a violence capable of both destruction and creation. Knowing the deep psychic manipulations that fundamentalism and nationalism, religious and political totalitarianism, operate through the unequivocal and dogmatic interpretation of myths and symbols, what work of searching and critical analysis must be undertaken to become aware of these hidden sources of history? What work of deciphering and enlightenment to turn them into sources of creative energy for a transnational, transpolitical, transcultural Europe? How shall we go about it so that traditions cease to be the prisons of thought? How are they to be opened? What must we do for them to become the very sources of freedom, of the fertility of thought, i.e. of something truly new?

Everything leads us to believe that it is to the deep level of myths and symbols that we must go back, with open eyes, to free the European imagination - the entire imagination - from its historical conditioning, to put it in a position to desire both the diversity and the community of its destiny; in other words, to motivate conscience to invest its high energies in the construction of a decompartmentalised society. It is clear that if this work of deconditioning, of genuine secularisation of mentalities, is not done, planetary solidarity is doomed to remain a utopia. Humanisation is not self-evident: it is the fruit of working continually on oneself, in other words of a culture in depth, in the most down-to-earth sense of the term. Indeed, it could be said that one is not born human, but becomes so. It is from this unremitting work of thought on the contents of memory that a society of persons with unlimited responsibility can come into being.

At a time when women allow themselves - how painfully - to speak on the scene of political disaster, one must know that this speech is new, that its forms of legitimacy are still to be invented.
A woman's citizenship does not have to re-create a political-cultural space, but quite simply to create it for itself, for the first time, from the ruins of a History in which she only ever had the right to speak in the name of the Father and of the Son - in the name of a sex which is not hers. What seems vital to me today is to rethink the very concept of identity, to understand that the identity principle is also the principle of third-party exclusion. The identity logic is that which invalidates any truth stemming from the crossing of the thought of I and that of the other. It is on the rejection of disturbing strangeness, of impurity, of all that is not white or black, masculine or feminine, dominating subject or dominated object, that the xenophobic History of Nation-States was based. To want to rebuild on the same foundations would be wholly irresponsible. The human resources that the history to come can rely on are those which were inhibited, doomed, sacrificed by patriarchal History to ensure the stability of its Order. If life on the planet stands a chance of saving itself, of rising above itself, that chance is buried in the repressed part of patriarchal History. What twenty-first-century thought is going to have to make grow - i.e. cultivate in order to regenerate itself - is precisely what the Patriarchate has excluded, gagged, burnt to ensure the continuity of its domination. This treasure of the possible is buried in our memory. It is up to each of us (woman or man) to work on our own mental field, to cultivate it so that it becomes an oasis of true life, a space of creation - both of projection and reflection - in the pervading cultural wilderness.

To sex the question of identity

Faced with the ravages of the evil of exclusion, the most universally prescribed remedy in this fin de siècle is, of course, communication - between people, sexes, ethnic groups and cultures. The term "communication" enjoys a theoretical fortune without precedent, and the technical means available to us for attempting to communicate are today fabulous. But never, no doubt, has the hiatus between the virtual and the actual of communication been more gaping than today. It is in the daily passage to the act of communicating that communication breakdowns are most flagrant, that powerlessness is most tangible, most dramatic. But it is nevertheless there, where we strongly experience the difficulty of communicating, where we suffer from it personally in our flesh and in our heart, that desire lies - i.e. the chance of delimiting and overcoming that difficulty. If communication is overabundantly provided with theory and technology, we are obliged to note that its ordinary practice is still in its infancy, that its language is still to be invented. The logos of communication is not the mono-logos but the dia-logos, i.e. the language that is conceived and developed from the interactive knowledge of I and of the other. Dialogue is the communicating language, the interacting language, that which foils the principle of exclusion of the impure third party. Now dialogue - the dia-logic of included third parties - forms the subject of no initiation, no learning at school. It is increasingly manifest that a male or female citizen's dialogic capacity - his/her capacity of opening to the other - is the bête noire of all forms of religious and political fundamentalism, since the latter can only reign through exclusion, through division: to begin with, between the sexes.
To create spaces where this aptitude for opening, exchange and dialogue - which is what responsible citizenship is - can be exercised is to create the conditions indispensable to the advent of a real democracy. It is no longer defensible today not to sex the identity question, to stay deaf to the nascent speech of the other subject, the I of feminine gender. In the light of recrudescent fundamentalism it becomes impossible to ignore that the very matrix of any xenophobia is gynophobia, and not the reverse. Now, gynophobia is not only the work of men. We have to note that women themselves can be the patriarchate's worst accomplices. The fear - alas understandable - that they have of being themselves, i.e. different, of daring to think, say and act otherwise than men, is still far from overcome! If the common stake is the advent of a society of persons with unlimited responsibility, this supposes that an end be put to the childish moral codes based on making one another guilty. An adult feminism can only be a matter of solidarity, not only of women among themselves, but between lucid women and men in search of a happy outcome from the patriarchal impasse. It has not been well enough seen how much the health of the political depends on abolishing the rule of linguistic clichés - in other words, on the male and female citizen's aptitude to form one body, a sexed body, with his/her language.

Everyone agrees that the great remedy for the ravages of hatred and indifference is love. Yes, but how can love be reinvented? How can Eros be freed from the murderous empire of Thanatos? It must first be understood that the eroticisation of the social body necessarily passes through the eroticisation of the body of the language, and that the eroticisation of the body of the language necessarily passes through its sexuation...

In the architecture of a construction there is always - more or less consciously - a logic at work: a logic of closing or a logic of opening. A socio-cultural space is built either according to the identity principle (xenophobic exclusion of the impure third party) or according to the solidarity principle (logic of inclusion of the impure third party); according to "cliquishness" or "workshop spirit". This is to say that an effective democracy can only come from an unprecedented alliance of consciences in which the humanity of the solidarity principle has prevailed over the inhumanity of the identity logic. It is at the level of the founding principles that the true political cleavage lies. We shall come out of this societal crisis alive - a crisis which most agree to call structural - only if thought about the political becomes deeper, not without vertigo, till it gets right to the bottom of its rational and irrational foundations. A cultural act, if ever there was one!

In the current debates on the European Union's political structure, attention is often focused on the word nation. How can we revive a form of nation which is not a people's defensive withdrawal into itself, which is not undermined in advance by the demon of nationalism? The rights and duties of the European Union citizen will not be those defined by the Nation-States' patriarchal History. The idea of a European fatherland must be given up. Europe will be a brotherland or will not be. The passage from a closed nation to an open one, from the fatherland-nation to the brotherland-nation, will only be achieved through recognising the effective existence of two equal and different human genders, irreducible to each other.
We are no longer unaware that we are all bisexual, all impure, all half-castes. A woman's femininity is not a man's, as a man's virility is not a woman's, which means that in the relationship between a man and a woman there are four sexes in continuous interaction. If women's thought proves other than men's, it is due not only to the difference in cultural memory (the feminine has not crossed history as dominating subject but as dominated object) but also to the difference in body memory. While a man keeps indelible the physical and emotional memory of having had a mother, he is deprived of having been the belly required by the generation of the other - that place not only of conception and gestation, but of the expulsion of another. The link to otherness fostered by feminine identity is undeniably different from that fostered by masculine identity. To recognise this difference in natural and cultural memory between men and women, and to find words and images to make it noticeable and intelligible instead of continuing to ignore it, is to give thought what it needs to regenerate itself.

A postscript

In deciding to develop, as a postscript to my introductory text, the words that I spoke during the "Carrefour", I want to testify to that wave which passed among the participants and which Marcelino Oreja called "the Coimbra spirit". For the European that I am, there will clearly henceforth be a pre-Coimbra and a post-Coimbra. In the reflection document that Thomas Jansen has drafted, we are reminded that determining the "political finality" of the European Union must be done in the perspective of a project of "world federation". I feel my European citizenship as an interface, as the indispensable mediator between my awareness of belonging to a local, regional community and that of belonging to the world community. European identity can make sense, for an individual who wonders what it means, only in relation to a project of planetary citizenship. Even then this individual must of course be motivated to do so, which first assumes that he is wondering about the meaning of his own existence - in other words, that he has reached a certain degree of maturity. The concrete forms of the we can only be implemented if desired, imagined, thought up and meant by a multiplicity of I. A pluralistic world can only be built by communities of responsible singulars. Real inter-nationality and inter-culturality can only be conceived and expanded on the basis of a well-thought-out and theorised practice of intersubjectivity which would be at the very basis of education.

The mental revolution which can be expected to lead to the advent of a world democracy is occurring at this moment within the family, the school and the university. The numerous signs given by the current mutations are still being perceived and interpreted as negative signs of disarray, signs of the collapse of the monological order of the patriarchate. The high psychic energies - repressed by history - which these mutations are releasing will be translated into acts of destructive violence as long as they do not have available the tools of thought which enable them to be transmuted into creative power, as long as they have not found the language that actualises the strangeness of each man and each woman (the real object of xenophobia) as the very essence of their universality. Over a century ago, Arthur Rimbaud wrote in La Lettre du voyant: "to find a language; - besides, every word being an idea, the time of a universal language will come!
[...] This language will be soul for the soul, epitomising everything - scents, sounds, colours - thought hooking thought and pulling. The poet would define the amount of unknown awakening in his time in the universal soul: he would give more than the formula of his thought, more than the annotation of his march towards Progress! Enormity becoming norm, absorbed by everyone, he would truly be a multiplier of Progress."

The first condition of the advent of an adult Europe, responsible both for her own future and for that of the planet, is that she worry not only about informing but about forming her citizens - not only about their access to the multiplicity of knowledge, but about their initiation into the act of thinking for oneself. Finding a language to express the strangeness, the continual newness, of this self-generating thought necessarily passes through the capacity of imagining, through the development of the resources of the personal and collective fields of our imagination. One is not born a creator, one becomes one. Even then one must discover the logics - the dialogics - of creation and communication, the tools of interactive thought, and learn to use them. Only a culture of intersubjectivity will enable us to overcome the spiritual and affective handicap of a modern mentality distorted by the exclusive reign of scientific objectivity.

What the thought of our times most evidently lacks is neither faith nor reason; it is vision. But visionary speech is the fruit of that logic of creation - the logic of the included third - which poietics is (poiein: to make). To Hölderlin's question, "Why poets in times of distress?", I would answer first: because the poets who think the world foresee both how a utopia, an "ideal City" which excludes them, destroys itself, and how a "real City" which integrates their turbulent presence can be built. "Poetry will be in front", and "the poet will be a citizen", writes Rimbaud, prophesying the real democracy he longs for. Blind faith in a "lendemain qui chante" (a tomorrow that sings), without that visionary lucidity of which René Char tells us that it is "the wound closest to the sun", can never lead humanity to anything other than a utopia doomed sooner or later to collapse. I hasten to say with Lautréamont, another prophet of real democracy, that "poetry will be made by all", which means that everyone will have to awaken in themselves the poet that western civilisation has excluded in order to found its order. Only a vision of the future can pull us out of the belly of the past and project us ahead of ourselves. We must be able to imagine the future, to see it revealing itself (in the photographic sense of the word) on our inner screens; we must be able to give birth to a picture of the common future which is specific to us, which is particular to us, for it to mobilise our deepest energies.

Spiritual foundation

The re-enchantment of the world at which a pedagogy of creation and communication aims - a pedagogy of the inter- and of the trans- - passes through questioning thought about its tools. What makes the difference between the logic of division (of the excluded third) and that of visionary thought (of the included third) is the coordinating conjunction of opposites. For the logic of knowledge and power, I can be only I or the other (the inter is interdicted, i.e. unsaid); but when I enter the field of creation and communication, I am both I and the other, co-existing in an analogical relation which underlies their dialogical link: I am to you as you are to me.
According to the principle of pure reason, A is A and B is B: I cannot be another. Between identity and alterity, all impurity, all ambiguity, all common ownership - all strangeness - has to be deprived of active citizenship. Europe, said Husserl, cannot forget her spiritual foundation, which takes root in the Greek soil of philosophy. I believe that the very notion of "poetic citizenship" can only be grasped and shared through the double reference to Plato, who excludes it, and to Rimbaud, who predicts its resurgence. Let us first remember that Plato, in the name of the principle of reason, sees it as his duty to put the poet out of his republic. Like women, children and lunatics, the poet is excluded from taking part in the business of the "ideal City"; his (magical) thought is deprived of legitimacy, i.e. of citizenship. This is what Plato writes in The Republic:

That was, I went on to say, what I meant, returning to poetry, to justify myself for previously banning from our republic so frivolous an art: reason made it a duty for us to do so. Let us also say to it, so that it may not accuse us of harshness and rusticity, that the dispute between philosophy and poetry does not date from today. Notwithstanding, let us protest strongly that if imitative poetry which has pleasure as its object can prove for some reason that it must have its place in a well-ordered society, we will bring it back into it wholeheartedly.

As to Rimbaud, he predicts the return of the poet in that prophetic letter which came to be called la Lettre du voyant:

Eternal art would have its functions, as poets are citizens. Poetry will no longer punctuate action; it will be ahead of it. These poets will be! When the infinite bondage of woman is broken, when she lives by herself and for herself, man - hitherto abominable - having given her her release, she too will be a poet! Woman will find things unknown! Will her worlds of ideas differ from ours? She will find strange, unfathomable, repulsive, delicious things; we will take them, we will understand them.

Cross-checking these two texts, twenty-three centuries apart, the one founding our civilisation, the other predicting its end, is, in the current socio-political context, prodigiously enlightening. That you invited me to speak among you, I, poet and woman, both delights me and makes me feel hugely responsible. I must find the images and words capable of expressing my own vision of the world being born, knowing that this risks disturbing yours; but at this cost only does it have a chance of acting, of inciting you to find whatever words and images will express yours and contest or meet mine. He who comes into the world to trouble nothing, says René Char again, deserves neither consideration nor patience. For a real dialogue, an effective democratic game, to occur, there have to be at play two different speeches and two different listeners, who affect, respect, greet one another, who cease being indifferent to one another. A conflictual relation can start generating a trans-personal, trans-cultural, trans-national, trans-political thought only by means of this quadrivocal dialectic, which prevents communication from getting bogged down in the rut of consensus, from being trapped in the homogenisation where what it is today agreed to call "la pensée unique" (single-track thinking) thrives.
Perception of oneself (conceiving of oneself)

I attempted in my speeches to make you see how the links between my poetic citizenship and my European citizenship are woven; these speeches are thus of the order of testimony. Thirty-five years ago there occurred in me the illumination - the poetic experience - in which I was initiated into my own existence and into the vital need to find a language to express that disturbing strangeness which suddenly served me as identity in a basically xenophobic and misogynous society. The instant before that literally apocalyptic instant (of revelation), I was present neither to myself nor to the world. The instant after, my patriarchal imagination was in ruins; I had, on pain of death or madness, to build another one - a dynamic, self-generating imagination. Starting from the desire to become who I really am, I had to re-create for myself a love imagination, a family imagination and a social imagination. To put it in other words, I had, by means of visionary thought and the work of writing, to save myself from chaos: no saviour would do it in my place. Let us say, briefly, that my spiritual dimension - a verticality made up of height and depth - was born of this wild initiation into the genesis of consciousness. Conceiving oneself is experiencing the primeval consubstantiality of space and time, of I and the other, of both the woman and the man that I am; it is reaching the lightning nucleus of SELF of which André Breton said that it is the POINT of the mind from where life and death, the real and the imaginary, past and future, what can and cannot be communicated, top and bottom, cease being perceived contradictorily. He added that the point in question is a fortiori the one where construction and destruction can no longer be brandished one against the other.

I understand that the Europe of today is likewise seeking to know herself, to know her soul, to become aware of who she really is in relation to the world and to make her presence felt in it; to express her project of a post-modern future. In other words, Europe is more or less confusedly seeking to become an adult we, i.e. a community of persons and nations with full and entire responsibility. It is indispensable for us to produce symbols, images, metaphors, said Jérôme Vignon. We must provide tools of communication other than the conceptual, but which can be linked to the conceptual to revive it, to re-nature it, to re-humanise it. We must bring into the world - beyond the great hardships of History - a new understanding of the real. Of this post-modern thought, born of the reconciliation of poetry and philosophy, I like to say to myself that it is post-Socratic, in the sense that it recalls that prodigious pre-Socratic thought which was current before the two split. Thinking as a poet is being able to put oneself in the other's place, being SELF (consubstantially I and the other), but also being able at the same time to embody, from the smallest to the largest, all the circles of collectivity I belong to. What would the soul of a people be other than the one their poets gave them? If I think Europe as a poet, I identify with her, I espouse her cause, I form one body with her present, I lend her the strength of my visceral resistance to all forms of totalitarianism; thus I commit myself personally to her quest for a non-fatal outcome to the unprecedented impasse in which she finds herself at the end of this century.
Let us say that I see my own experience of emancipation as an illuminating metaphor of the trying search for herself which Europe is pursuing today. I draw from this analogy not only my motivation but the daily energy that is needed to provide this project of a transnational, trans-cultural Europe with a body of writing radically other than her "body of laws", which will never have anything but a set language; that is to say, with a poetic existence without which it will remain a dead letter. Only the influence of an adult poetry - adult in the sense that it has freed itself from the condition of minor thought to which western philosophy confined it - irresistibly confident in its real power to change life, could truly re-enchant the world. If, like Ariadne, I undertake to pursue the metaphor to better understand all that was exchanged thanks to the Coimbra forum, I say to myself that Europe will get out of her crisis of growth, will become a fully adult woman, only if she dares to call into question the dogma of economism which threatens at any moment to "topple her over from the market economy to the market society": a striking formula which Zaki Laïdi gave us for the peril which is threatening us. Put differently, Europe will not recover from her disaster unless she appropriates the freedom of self-determination, the freedom to choose the model of globalisation to which she wishes to belong. We heard Mario Soares tell us forcefully that he "does not want a Europe exclusively determined by economic and monetary demands but a political, social Europe, a Europe of citizens, a Europe of participation"; not a Europe that we would have to suffer, but a Europe that we have to make happen. From the moment that the European Union knows not only what she does not want but what she wants to be, she must change her history, i.e. her relational logic; she must pass from the identity principle based on the exclusion of the third's strangeness, which determined the building of xenophobic Europe, to an interactive dia-logic based on the integration of this strangeness, on the actualisation of all mediation between identity and alterity. Only the development of such a citizenship in the process of building the Union can save Europe from the twofold peril which threatens her: homogenisation or atomisation. It is urgent to understand that a democratic space can only be built from a mentality structured by the "solidarity principle", a principle of the interactivity of opposites. Subject A is to object B what subject B is to object A: I am to you what you are to me - the logical translation of the principle of Christian charity, love (respect) the other as you love (respect) yourself: a universal formula of secularism to which any religion of love can rally without betraying itself.

The game of interactivity

As soon as we understand that there are really at play in any human relationship at least two subjectivities and two objectivities, two identities and two alterities, i.e. four elementary truths, the problem of thought is completely transformed. "Telling the truth" supposes from that moment that our four truths recognise one another, interfere, interact; that a dialogical language is invented, able to translate no longer the duplicity but the quadruplicity of the real. In this great dynamic game of interactivity, all horizontal, vertical and diagonal relations are authorised. What disappears in the dynamic structure of real democracy is the inevitability of exclusion.
All aesthetic, ethical and political revival, all possible regeneration of the social body, will proceed from this metamorphosis of the structures of our relational imagination. The identity of the European Union should appear as that of a societal model which succeeds not only in safeguarding entitlements but in integrating the great historical, political, economic, technological and ethical upheavals. This supposes the conception and implementation of a logic of construction which is a logic of the integration of differences. The image that I have of the Europe to come is less that of a continent in search of an intellectual leadership able to face up to the rise of the (economic, political and religious) fundamentalisms than that of a living and thinking organism, capable of metabolising what has happened to it and what continues to happen to it, for better or worse, so as to be able to build for itself a great contagious health, the influence of which works not only to relieve but to heal the extreme misery from which the world is suffering. I see the European building site as the main, if not the only, chance our planet currently has of saving itself from the perils which threaten it, of building with new tools of thought its first "real City", its first adult democracy, its first trans-national phratry on the ruins of the xenophobic patriarchate.

L'identité européenne comme engagement transnational dans la société
Rüdiger Stephan

European identity as a transnational commitment in society

A terminology debate is of no value in the face of the real challenge. The term "identity" is merely a starting point. Psychologists would say that it covers both continuous identification with oneself and permanent adherence to certain traits of character which are specific to a group. On the one hand, identity appears as a criterion for acts intended to provide a synthesis of the self, while on the other hand it signifies a feeling of solidarity with a group. Thus, there are two aspects:
- identity is linked to the individual, the person;
- identity reflects a state of existence, an outcome, the end of a path.
On this basis, what is the answer to the original question: how can we express this identity, which must take on a European dimension? First of all, it is the individual, the European citizen, who must both give and receive the reply, in the context of his relationship with himself and his environment. The citizen should be able to express this identity, which in turn must be developed together with the citizen. Furthermore, if the identity of the individual is a fulfilment, the sum of a personal history, then European identity is made up of a huge and varied heritage. European identity appears here as being linked to the past, and the future is not a factor. To express European identity through heritage only, however rich this may be, would be to limit oneself to a conservatism without a future. Europe needs visions which relate to the future. The development of a European identity can come about only through a European consciousness bringing with it movement and evolution, a European consciousness which captures the national identities in their diversity and conceives of them as having a common future. Expressing this identity - a forward-looking European consciousness - implies the abolition of the antagonism between national and European identities. European identity-consciousness is founded on national identities, and finds its expression in cooperation and interaction.
We need this European identity-consciousness in order to avoid wars among ourselves or with others, to pool our resources, and to join forces in the face of the challenges of our time, which transcend national and continental boundaries. We draw this identity-consciousness from a heritage which expresses what is common to us, or what we recognise as being common to us. We draw it from history, the common European traits of which we are rediscovering after two centuries of nationalism and nationalist interpretation. We draw it from the memory of the past, our memory banks - what are our European memory banks? We draw it from the symbols which we have succeeded in creating and which we shall be capable of creating in the future. We find it in the democratic institutions and rules which structure and define life within our societies, the relationships of the individual and society, and the rights and duties of the citizen. The European Union has neither a political nor a social structure which would give it an "identity" and allow it to develop a citizen's European consciousness, or which would allow the citizen to develop a European consciousness. The Council of Ministers is not European, but inter-governmental. The European Commission acts as if it were inter-governmental. In order to have our voice heard, we must use the channels of the national representations or even national bodies. The European Parliament, the political representation of the citizen, is not truly recognised as such, because its powers, responsibilities and image do not correspond to what the European citizen, accustomed to the role of his national parliament, can or wants to expect from it. Nevertheless, it is the European institution with which the citizen can identify most easily, because it is supranational, or European, and because Parliament fights to give legitimacy to Europe, which also gives it symbolic value. However, there are forces within society which are not representative of national interests and are non-governmental, non-State and transnational by nature. First of all, there is the economic sector, or at least the bulk of it. Industrialists spend all day telling us that their vision is no longer national or even European, but global. The economy creates its own identities - corporate identities, which are neither national nor European. This is what is called the "IBM identity". The question remains as to whether, with the single currency, the economic sector can also help boost European identity. There is, however, another sector of society which is developing rapidly. It is known as the "Third Sector", a term which is both vague (in that it takes in the most diverse forms of organisation) and precise (in the sense that it refers to non-governmental organisations). Civil society translates the will and aspirations of the citizen and quite naturally goes beyond the national context - in fact increasingly so. As every domain of society is affected by, or is open to, international pressures, these organisations nowadays all engage in activities which to a greater or lesser extent go beyond national confines. This "Third Sector" - the expression has come to represent organisation, solidarity and community - represents the commitment of the citizen within society and through society.
The Third Sector is not a third country, but a sector of present-day society which should become an increasingly important communication partner, a forum for proposals and for implementing the new solutions needed to resolve the major problems facing us today. If we wish to develop European identity-consciousness, this movement towards more Europe, in and with the citizen, if we wish to organise participation and interaction, we must find, or in my view create, a way of organising relations between the Third Sector and the European institutions. As the social and cultural organisations of the Third Sector reflect this commitment on the part of the citizen to non-State and non-public forms of organisation and institution, it is necessary to create a space in society giving the citizen a voice outside the national framework, a European space in which the citizen's commitment to society can be expressed. This societal space should be able to communicate regularly with the European Parliament, whose powers would have to be extended, and with the European Commission. Participation and interaction could be expressed and organised around major subjects of civilisation, such as work and integration into society, national and European memory banks, European citizenship training for the younger generation, and the development of a European language policy.

Security and a common area
Adriano Moreira

One way of analysing comparable transitions from unity among nations to a united Europe is to see them in peaceful terms as the setting of a boundary against a hostile power which threatens freedom and integrity. Let us say that, as a general rule, the fact of being subjected to the same climate of aggression generates a common defence system and the emergence of an identity through the feeling of security experienced in relation to the threat. Although Toynbee regards the West as the present-day aggressors, identified as such from outside by the peoples of the former colonial territories, the fact of being surrounded by a common threat has more than once united Europe. This was the situation in Western Europe for more than half a century dominated by the military pacts (NATO and the Warsaw Pact), until it came to an end in 1989 with the fall of the Berlin Wall; and yet Western Europe was involved in defending a plan to unite the area from the Atlantic to the Urals. It thus firmly refused to be "le petit cap au bout de l'Asie" (the little cape at the tip of Asia), as Valéry used to put it.

A political area

Identity implies a common area which has a geographical form, but this will only have a border if, for unambiguous reasons of security, solidarity among the peoples involved and well-established sociological proximity, that identity is assumed. Article O of the Maastricht Treaty lays down that any European State may become a member, but it does not attempt to define a European State. In fact the supposed common area is divided by various formal frontiers which do not coincide but were laid down for pragmatic reasons with a view to achieving the overriding objective. The European Union has had 15 members since 1995 (when Austria, Sweden and Finland joined) and is considering admitting another 12 at the beginning of the next century, which is nearly upon us. The Council of Europe has 39, including Russia since 1996, which should make us wonder whether the area is broadening out or joining up. Meanwhile the Organisation for Security and Cooperation in Europe (OSCE) has 54, which raises the very same question in a more complicated form.
The sea frontier is extremely long, from the North Sea down the Atlantic seaboard to the Mediterranean, a fact which raises the opposite question to the one about the Council of Europe, i.e. whether the European identity has the effect of fragmenting the Atlantic identity which is still best expressed through NATO. This multiplicity of formal frontiers, outlining areas which do not coincide, points towards the definition of a political area demarcated by a series of common threats facing it and a common determination to confront them. In Europe's experience, historical internal conflicts are identifiable as such and are not to be confused with external threats. Title V of the Treaty of Maastricht defines as one pillar of the European Union a common foreign and security policy, leading eventually to a common defence policy, but it does not make a distinction between the internal frontier formed by the threat of the recent past and the external panorama constituted by a world context in flux. It should be remembered that the founding fathers of the new Europe, Jean Monnet, Adenauer and Schuman, had in mind to free Europe forever from the spectre of civil war, with Germany and France in the leading roles, and in the area of security it is WEU which reflects that rivalry most clearly: the United States came over to Europe to fight twice in the same generation because of that historical conflict, and the object of WEU was to define a restrictive arrangement for the entry of the Federal Republic of Germany into NATO. The external threat is a different issue, and that was reflected in the Atlantic Alliance for the half-century when the world was divided into two opposing camps.

Europe and the Atlantic Alliance

At the present time, when diplomacy conducted as a Nixon-style strategy, with the three pillars formed by the United States, Russia and China, seems once more to be to the fore, the question of a European identity in the political field of security and defence, which will be responsible for defining whatever geographical frontier is eventually adopted, seems to be couched in the following terms:
- a return by Russia to the historical nation-based strategic concept, with the idea of a "near-abroad" (the former satellite countries) and an attempt to reconstitute the geographical borders prior to 1989, in response to the creation by the Atlantic Alliance and Europe of "near friends" from the Baltic to the Mediterranean, a development made very clear by the Barcelona Conference this year;
- the Western security organisation to which the former Eastern European bloc is applying is NATO, and these countries all aspire to be admitted to the European Union as well, thereby showing that they treat the two frontiers, the economic and political frontier on the one hand and the security frontier on the other, as autonomous;
- in the Mediterranean region, NATO is also being pressed to provide a security frontier by the countries in the North African corridor, while it is from the EU that they also seek support for their political, economic and social development;
- NATO is consequently being forced to give thought to adopting a new profile:
  - in addition to the collective defence objective, it now provides logistical and military support for the peace-keeping operations flowing from the UN's Agenda for Peace of 31 January 1992;
  - it has laid bridges for cooperation with the East, such as the North Atlantic Cooperation Council of 1991 and the Partnership for Peace of 1994;
  - against this background, WEU has once again been brought into action as a point of reference for the Europeanisation of defence.
Here it seems that the ongoing overhaul of the United States' strategic concept and the process of formulating a European strategic concept which is now under way are bound to acknowledge that the European security frontier and the NATO security frontier are still tending to coincide. In that case, the common defence policy, or whatever common defence system for the European Union emerges, is clearly first and foremost an internal question for the Atlantic Alliance, following half a century of solidarity.

Neither Reich nor Nation: another future for the European Union
Roger De Weck

What is it that keeps us Europeans together? What is it that links the British, who so love to rage against the Continent, to the Poles or the Portuguese? What do we have in common? What are the differences which not only divide but also unite us? Is there a European identity? The very fact that we raise the question of identity betrays the European in us. The French philosopher Edgar Morin speaks of Europe's "manifold unity" or "unitas multiplex". For all the variety of North America, its binding forces are obvious to the observer. The most striking feature of our continent is its diversity. Certainly, the whole of Europe shares the inheritance of Christianity; indeed for centuries awareness of "Christendom" was much stronger than the notion of "Europe". But as Europeans desert the Christian churches in their droves, this last vestige of the Western heritage loses its relevance. Christianity no longer unites Europeans, but nor does it divide them. As time goes by, our other great legacy - the Enlightenment - becomes less and less specifically European. Other regions of the world have long drawn on this inheritance (just as other continents have become more Christian than our own). But more importantly, if the spirit of the Enlightenment forms part of European identity, then this particular part has been damaged since the Holocaust. The interplay of nationalism, imperialism and totalitarianism, which, sad to say, is all too European, brought disaster. Europe proved incapable of saving itself by its own efforts. We had to be liberated. Our fate hung on the United States, and that has undermined our self-confidence. In a century that has seen the most terrible of wars, the North Americans too have often gone astray. But they have always rejected totalitarianism. The United States is not only stronger as a result, but also more decisive. There was no American Voltaire, but nor was there an American Hitler.

What is not European

All of us in Europe have at least one identity which we experience again and again and which can sometimes break right through to the surface - I'm talking here of a "negative identity". We may not know exactly what it is to be European, but we are quite sure of what is not European. We Europeans have never had hard and fast criteria for determining what counts as Europe. Our continent is ill defined both politically and culturally. Not even geography can help us - does our Eastern border really run along the Urals? During the Cold War, many Westerners forgot that the "far-off" countries of Central and Eastern Europe were utterly European in character. Despite all the anti-American feeling prevailing at that time, they felt much closer to America and still do.
Even so, seldom do we feel as European as when we watch an American television series, which may explain why they are so successful. They are foreign to us and yet familiar. According to one of the classic interpretative models used by psychologists, identity stems from negation. Europeans are hardly ever as united as in their determination to marginalize others. But there must be more to Europe than that, for in the long run negation is not enough: it offers a weak identity in which we protect our own egos by demonising others. For example, the British make a habit of "splendid isolation" and the Swiss nurture their "hedgehog" mentality. It is as if the Confederation would collapse were it not surrounded by enemies: the rabble-rousers on the Swiss right brand the EU as the "Fourth Reich" and one Green politician has waffled on about the "Empire of Evil".

Expressing a common sense of purpose

Europe is in fact made up of former enemies. When British Prime Minister John Major picks a fight with the European Union, his crisis team in London is immediately dubbed the "war cabinet", proving that the past is still close at hand. And yet wherever Europeans have finally come together, they now live in peace. New wars in Western Europe are virtually unthinkable and the Cold War is history. However, war and civil wars will remain a distinct possibility in Eastern Europe until its countries are able to join the European Union. The German Chancellor Helmut Kohl was right when he observed that ultimately the European question is still a question of war and peace. And just as Switzerland sees itself as a nation created by an act of will, there is in Europe a growing identity, both in the literal and in the figurative sense, that is also based on an effort of will. The vast majority of Europeans share an "identical" and hence "identity-forming" will to establish a peaceful, united Europe. What is at work here is a positive identity: the twin concepts of will and reason are very much European. No doubt the European Union will face many setbacks in future, but it will hardly sink to a point so low that disintegration could mean destabilisation and even lead to war. This is because of the workings of what the French call "le sens de l'histoire" - in both senses of the term: Europe is moving in a certain "direction" and in so doing is giving itself a "purpose". Individuals object to having an identity foisted on them. Identity cannot be decreed from above by nation states or by the European Union, for it is something organic, which develops from small beginnings and either thrives or withers away. The EU is simply a powerful expression of a common sense of purpose shared by many Europeans, who, after centuries of war, have finally become aware of their responsibility for their own continent. A Europe of the nations may be the rallying-cry for some, but Europe is first and foremost a warning against the hubris of these same nations. "Verfassungspatriotismus" - or loyalty to the constitution - is a familiar concept in Germany. Underpinning the European idea is a kind of "loyalty to peace", which, however, is now fading away fifty years after the end of the Second World War. As time goes by, the younger generation which was spared those horrors has less and less sense of purpose and, in this respect, resembles the directionless and disoriented Swiss, since they too escaped the heavy toll in human lives.
The EU Member States were not far enough down the road to a common security policy to prevent the carnage following the break-up of Yugoslavia. If Europe had been up to the task, the question of identity would hardly be raised any more. Identity is also a matter of success.

Competition between world regions

Is success at all possible in an era of mass unemployment, where the virus of social disintegration infects everything which is not already geared to out-and-out economic warfare? Globalisation (internationalisation) threatens both national and European identities - as if one day the only remaining form of identification will be that of the worker with the mega-firm that employs him. Yet the EU is not perceived as a force for order and moderation which is striving (for example through monetary union) to control the forces of globalisation and, logically, to steer in the opposite direction, something the nation states have long been incapable of. On the contrary, the EU is seen - albeit unjustifiably in many cases - as one of the mainsprings of the globalisation process which is oppressing countless individuals. This provokes national resentment. National politicians heighten the mistrust by claiming for themselves the credit for all political successes and laying the blame for failures at the EU's door. However, Europe is not merely a scapegoat, but at the same time the exact opposite: the hopelessly overburdened standard-bearer of hope, which is bound to disappoint, because so many people would like it to disappoint. Europe acts as a blank screen on to which the Frenchman can project his yearning for "grandeur", the German his deep-seated need to belong, the Briton his uncompromising cries of "I want my money back", and the Eastern European his desire for stability and a guarantee of democracy, the rule of law and human rights. While we are on the subject of human rights: in the vast globalisation process now under way, the old European claim to universal values is rebounding on Europe itself. Now that our continent is no longer at the centre of world events, Europeans must face up to the competition of values and identities. Just as the Swiss always feel the urge to retreat into their little corner, many Europeans also tend to withdraw into themselves in order to protect their own egos. Yet if there is one single characteristic that defines Europe, it is that curious capacity for openness which our continent displays time and again and which has contributed to the "infinite riches in a little room" that so delighted Marlowe. Europe has left its mark over the whole globe, but it has also proved to have a voracious appetite itself, being perfectly capable of absorbing influences from all over the world and positively devouring foreign ideas, without surrendering any of its own identity. However, globalisation unleashes the forces of homogenisation. It also throws open the question of the balance of power between continents. Must - indeed can - Europe summon up the will to compete as a united force against other regions of the world? Since the passing of Charlemagne, the diversity of Europe has been ranged against the concept of a single European power. Our instinct is not to concentrate, but to divide, spread out and split up. Our logic is not that of a single centre, but of multiple centres. The concept of a "European nation", which is ultimately bound up with power politics, is a contradiction in terms. Balkanisation is the real danger. The European Union lies somewhere in between.
For far too long, Europe has swung between Scylla and Charybdis, between the Reich and the nation. The EU does not fit into this pattern; it breaks the vicious circle. It is neither Reich nor nation and hence truly modern. Perhaps European identity is actually to be found in the new and lasting phenomenon of networks, which was first developed by the generation of '68 and took off with the electronic revolution. In many ways the European Union is - and is at its best as - a network. What the Swiss fail to understand, as outsiders with little first-hand experience, is that the EU has something more important than its institutions: the network of connections, the day-to-day working relationships remote from diplomatic channels, the exchanges. And these exchanges give rise to the "manifold unity" which, according to Edgar Morin, is the life-blood of Europe.

Identity is a process

Our generation has experienced both the integration of Western Europe and the disintegration of Eastern Europe. In the West the decades-long enthusiasm for the unification process - identification with the EU - has been somewhat dampened, particularly where closer union has degenerated into homogenisation. In the East, many people see Europe as providing an ersatz identity. This is just one of many examples showing that identity is not something static and does not always remain what it was. Identity is more of a process, and processes have driving forces, restraining forces and opposing forces. Identity always springs from contradictions and never becomes fully - and inhumanly - coherent. On the contrary, identity contains within it crisis in the original Greek sense of "krisis" - decision. That is one of the reasons why the European Union often cuts a poor figure, just as the Swiss Confederation presented an unflattering picture for most of the 550 years before the founding of the Federal State: civil war, treachery, pacts with foreign powers, intrigue and ineffective parliaments. It is actually growth which prompts the outbreak of identity crises. In a brilliant essay for the literary supplement of the "Weltwoche", Adolf Muschg recently asked, "How much identity does Switzerland need?" Similar questions on the quantity and in particular the quality of identity could be asked about Europe. However, Muschg also went on to ask, "What is it that Switzerland still has to protect from Europe?" Perhaps the difference is that Europe is looking for a new identity, while Switzerland is trying not to lose its old one.

What does it mean to be a European? Preliminary conclusions
Jérôme Vignon

From the very outset, at the preparatory meeting for the Coimbra Seminar, the historian Gilbert Trausch warned us that the task we faced was one fraught with difficulties and risks. "Though the search for a European identity is a classic exercise, indeed almost a commonplace for the social science disciplines, the quest for an identity specific to that very new arrival among the ranks of political animals, the European Union, is a much tougher proposition." In other words, to the historian's mind, the shaping of a collective identity is a long process, in contrast to the brief span of time occupied by the integration of Europe so far. Let there be no misunderstandings on that score. With this caveat ringing in its ears, the Coimbra Seminar proceeded to business.
Advancing in stages, it started with what it means to be European as a general concept, then moved on to the challenges raised by the political unification of the European continent in the here and now. The discussion progressed by way of the idea of a "European project", which arose spontaneously as participants made their contributions. Around the centrality of the political necessity of "the European project", four main categories emerged: legitimacy, necessity, the project and interactivity.

Legitimacy

Was it proper, for the proponents of an integrated Europe, to seek to mobilise the many facets of a European identity - history, culture, values and so on - to their own advantage, so as to construct some kind of political legitimacy for themselves? In so doing, were they not falling into a double trap?
- A collective identity was the outcome of an approach which needed to be seen in context and in proportion. If it was supposed to appeal to "ordinary people", then it could only be from the standpoint of their particular perceptions and experiences, where we stand now at the end of the XXth century.
- To seek to exploit the material traditionally used to forge national identities was to ignore the special qualities of openness and multiculturalism which were the marks of a truly European identity.
José Vidal Beneyto disposed elegantly of these two posers. Reminding his listeners of the academic achievements chalked up by the sociology of knowledge, he stressed that there was no going back on what the experts now agreed on: "Like individual identities, collective identities exist de facto. It is not improper to refer to them, provided we recognise that the European identity evolves in step with whatever age we live in: it is a moving thing, not a thing established once and for all. And it goes much further than that: a collective European identity is bound to encompass not just variations but - especially - contradictions, contradictions which must be managed, and that is the job of politics. The purpose of a 'project' is just that: to reconcile contradictions, at the same time using the lessons we have learnt from the past and from a shared culture."

Necessity

The bond between the identity of the European Union and a common project is not something which has come about in a void, simply through the inspiration of a few founding fathers, or by historical accident. It also owes its being to necessity, and to the will to which necessity gives rise. Here, the Coimbra Seminar brought out a telling parallel between the 1950s and the 1990s. We are, in a sense, entitled to say that there was more to the setting up of a community of countries belonging to the Western European camp, from the time of the Hague Conference onwards, than a deliberate plan by the Fathers of Europe. This community of belonging also sprang up and developed under pressure from a political necessity, the necessity created by the East-West dispute. An economic integration process, one might say, was a way of responding to a geopolitical necessity, in which case the brainwave of the pioneers of European integration was to harness this economic vehicle to a prior objective which went much deeper, a plan for solidarity and reconciliation which went beyond the immediate geopolitical challenges. This was the sense in which Filippo Pandolfi was able to say that "it was only after 1989 that the full scope of the European project could be seen, its raison d'être, if you like."
Marcelino Oreja reminded us that today it was economic constraints, bringing with them the nagging challenges of competitiveness, which were the driving forces of integration. The progress made from 1985 to 1991 led to a political leap, Economic and Monetary Union, which was itself reinforced by the geopolitical demands of enlargement. The Intergovernmental Conference now under way ought to graft on a collective project adapted to meet the challenges of the present day. To put it another way, in the 1990s as in the 1950s, the pressure of necessity created an opportunity for a new collective departure. If there was a secret behind the identity of a Political Union, it was that it should be capable of giving a generally accepted sense to the sweeping changes occurring in the European continent, over and above the geopolitical momentum behind them.

The project

What should such a project consist of, "now and for the future", if that shared sense was to unfold? What, in other words, was to be the telos, the ultimate objective? Are we not entitled to expect an answer to this question from those responsible for European integration, from those who govern, but also from the intellectual elite?
- Some speakers stressed the importance of overhauling the European social model, threatened as it now was by its inability to reconcile opening out to the world with maintaining social cohesion (José Vidal Beneyto). Boaventura Sousa Santos, in fact, proposed focusing our efforts back on restoring the State and the community once the other pillar of the European social model, the market, had outgrown itself.
- Others wanted to go still further along the path of reshaping the model. Defining their stance in relation to the global challenges of the environment and population growth, they saw a contemporary European identity as an awareness of the urgent need for changes in lifestyles and patterns of consumption. Edy Korthals Altes, for example, saw it as a moral awareness with the capacity to answer the questions about the meaning of life. The same global view of developments in Europe today would, in the eyes of Zaki Laïdi, seek to identify Europe with efforts to act as an effective mediator for the world. President Mario Soares went so far as to say that the world needed a Europe capable of translating the spirit of democracy - the only foundation it had at the present time - into acts of international solidarity.
- Those who identified the European Union with a way of giving a deeper dimension to democracy alluded to a project which was as much a cultural as a political exercise. In the words of Massimo La Torre, it was a matter of establishing, by law, a genuine European citizenship. Freed of any ties to the prior possession of a particular nationality, it would be the seedbed of an identity linked directly to democratic ideals, a sort of constitutional patriotism in the pure state. For Claire Lejeune, the Political Union should be one where the implicit subjection of women to men would have been overthrown.
While invoking the urgent need for the European project to have a telos, those attending the Seminar stressed that the demos must be involved in the work of putting such a project together. In other words, to give expression to a European identity today meant embarking on a process of exchange, of listening and of interaction.

Interactivity

Warnings against the risk of overintellectualising came from intellectuals themselves.
Heinrich Schneider pointed to the risk of totalitarianism lurking behind the concept of an avant-garde, if it were one enlightened not by reason but by a moral consciousness. Truls Frogner spoke of what the most deprived groups in Europe really expected in terms of jobs and unemployment. Maryon McDonald insisted on what made sense to people. This brought the meeting back, when it came to what it meant to be a European, to the sphere of "communicating", to "how to share, listen and receive", to "how to inspire and deserve trust". This was the point in the Seminar at which speakers' contributions became more specific and closer to the work being done by the European institutions. Under the subject heading of an interactive identity, four aspects were discussed: the institutions in the strict sense of the word; communication; new forms of mediation; and, lastly, the need to foster interaction between the Member States and the Union.
1. Heinrich Schneider, a veteran of the battle for federalism, thought it was time to build something new out of the old federal mould. The institutions should be judged less against the yardstick of unity than on the basis of new criteria: whether the executive inspired confidence, whether joint action was effective, whether someone was visibly answerable for the exercise of power. It would have been hard to find a better definition of some of the challenges facing the IGC.
2. In the view of Elemer Hankiss, who was Head of Hungarian Television from 1991 to 1992, what the European Commission needed to overhaul was not so much its messages (though these, he said, were still not getting across strongly enough in his country) as its methods. Opportunities for working out what European integration meant in the present day needed to be provided in the shape of hundreds of forums like the Coimbra Seminar, where intellectuals, people from cultural and scientific backgrounds and journalists would debate the underlying issue, the raison d'être which Filippo Pandolfi had referred to. One was reminded of Denis de Rougemont saying that the search for Europe was itself Europe.
3. Many participants felt that the Commission did not allow enough space for mediation by associations acting as relays to develop, meaning the many hundreds of NGOs already structured into European networks which were capable of expressing the European sense of an operation carried out at local level, not to mention acting as the expression of a moral consciousness. Edy Korthals Altes spoke for them when he described the practice of dialogue between religions at the European and Mediterranean levels.
4. We should stop acting and talking as if the Union and the nations in it were in competition. Nations were part of what it meant to be European, Maryon McDonald maintained. Bearing in mind the immense symbolic challenges posed by a single currency, we should leave it up to the national apparatuses, with their huge capacity to influence and respond, to talk to European people about Europe. Nor should we forget that farmers, students, textile workers, bosses of small businesses, doctors, trade unionists and people in the publishing business experienced Europe in the first instance through their day-to-day occupations.
When the debates were over, some self-criticism emerged. Perhaps our group had taken too much of a consensus view. Had it allowed enough space for the anti-Maastricht protest voice to be heard? Did it reflect the doubts and bewilderment in the minds of some grassroots voters?
The unconscious temptation to preach to the converted was certainly there, and we should bear it in mind when later Seminars came up. But a Seminar on Science and Culture was not there to do the work of a parliament: what it aspired to do was to think matters through and go back over the experience of the past. In that sense, Coimbra was a great help to us.

Annex: A dialogue on unemployment between Truls Frogner and his Neighbour

You have not yet heard the trade union voice. Some people think that trade unions are fading away. Well, in Europe we have the ETUC, the European Trade Union Confederation, with member organisations from 33 countries after the enlargement eastwards last December. Now some 55 national organisations, representing more than 50 million members, come together in the ETUC to discuss and decide on common matters and then take care of our joint interests in the European Union and the European Economic Area (EEA). Do you know any other and more representative non-governmental European organisation? In the European Union's search for its identity, a trade union has a relevant message. In my context, to be in a union means to take care of each other, knowing that acting together may give better results for all than acting individually. Let me also add that in Norway, community has a more positive connotation than union, since my country, for many years, was the weaker part in unions with other countries. A union in Norway is also associated with foreign rule. In our discussions today, I have heard that the magic words "European identity" contain the concepts of diversity, legitimacy and transcendence. My neighbour in Norway does not understand this and seldom speaks of identity. But he lost his job some months ago, and I can see this is doing something to his identity. I told my neighbour last week that I was going to Coimbra to discuss the "European identity."
- What is that? he said.
- Well, we are supposed to find out, I replied.
- Do you have to go to Coimbra to find that out? Why not here?
- No, it is easier to see what you are from the outside. In Sweden, I feel Norwegian. In Brussels, I feel Scandinavian, and in Tokyo, I feel European. When I'm in a pub in Boston, I'm still in Europe.
- I understand. As an unemployed person, I feel the importance of a job...
- So, my friend, what is the European identity to you?
- Nothing! Does it create jobs?
- It depends...
- What do you mean? Does it or does it not?
- It creates peace. What kind of employment policy is possible in Bosnia?
- Stop! The European Union did not prevent war in ex-Yugoslavia.
- Agreed, but in the old days, local war spread through all of Europe. The European Union, together with NATO, made this impossible.
- OK, peace is a natural thing now. War will not happen in Europe again.
- Are you sure?
- To be honest, no. I'm not sure of anything. Without a job, I don't know where I belong. How could I identify with the European Union if it does not create jobs?
- The European Union made a report on "Growth, competitiveness and employment"...
- Reports are not reality. The European Union is a marketplace. Growth and competitiveness yes, jobs no!
- With 20 million unemployed in Europe, it seems you are right. On the other hand, the European Union may change its treaty and enshrine employment in it.
- Interesting, but paragraphs don't create jobs. Moreover, national governments don't follow up.
- Should the European Union be the scapegoat if national governments fail in their economic policy?
- I admit you have a point.
- Moreover, unemployment is high outside the European Union, too. Except in Norway, where it is 4% and the inflation rate is below 1%. But still, these positive figures don't help me.
- We take you seriously. Within a short time, you will be offered a job, a labour market (professional training?) course or another active alternative. And this is not mainly thanks to oil and gas, but to our social model and cooperation for employment.
- Why can't the European Union do the same? Isn't cooperation a part of what you call the European identity?
- Good question. Maybe because... eh... maybe...
- Well, Truls, come on!
- I'm not really sure why the European Union has not used its potential.
- Can't you ask them in Coimbra?
- I will.
- Do you know what I think? I think the European Union pays too little attention to the social dimension and too much to economic matters, or they have too narrow a concept of economy.
- Yes and no. Where else in the world will you find such close relations between the social partners and politicians?
- Now you're talking me around again. It doesn't help me if you, on the one hand, speak of a fine European social model in a global context, and on the other hand, you have welfare cutbacks and rising unemployment.
- It is a part of the European political identity to say one thing and do something else.
- Ah! Now I know what the European identity is: contradiction over unity.
- It's true, but it could also be unity over contradiction.
- Please tell me, Truls, why should I - being unemployed - identify with the European Union?
- The answer is both simple and complicated; at one and the same time, the European Union identifies with you and with 20 million more people without jobs.
- In that case, I will wait and see.
- Oh no, this time I will challenge you. Why should you wait to see what the community can do for you? Shouldn't you also ask yourself what you can do for the community?
- Hmm... let's make a deal. I will, in spite of unemployment and a poor private economy, keep my trade union membership and join the European Movement. But you should take an initiative to strengthen the European Union with what is important to my identity - employment. In practice! Not only in fine words.
- Agreed. You have a deal.
- Not quite. Only a temporary deal.
- Of course. Europe is not finished yet. Identity is something moving and invisible - an Unidentified Flying Object!

List of contributors

Tom Bryder, Senior Research Fellow, Institute of Political Science, University of Copenhagen
Truls Frogner, Director of Political Affairs, Federation of Professional Associations in Norway, Oslo
Thomas Jansen, Adviser, Forward Studies Unit, European Commission, Brussels
Ingmar Karlsson, Ambassador and Head of the Policy Planning Unit, Swedish Ministry for Foreign Affairs, Stockholm
Edy Korthals Altes, former Ambassador of the Netherlands; President, World Conference on Religion and Peace (WCRP), New York
Claire Lejeune, Poet; Secretary General of the Interdisciplinary Centre for Philosophical Studies (Ciéphum), University of Mons-Hainaut, Belgium
Maryon McDonald, Appointed Senior Fellow, Department of Social Anthropology, Cambridge University, Cambridge
Adriano Moreira, former Minister; Professor, Technical University of Lisbon
Heinrich Schneider, Professor Emeritus, University of Vienna
Mario Soares, former President of Portugal
Rüdiger Stephan, Secretary General of the European Cultural Foundation, Amsterdam
Massimo La Torre, Professor, Department of Law, European University Institute, Florence
Gilbert Trausch, Professor Emeritus, University of Liège
Jérôme Vignon, former Director of the Forward Studies Unit, European Commission, Brussels (1989-1998); Director for Strategy, Délégation à l'Aménagement du Territoire et à l'Action Régionale (DATAR), Paris
Roger de Weck, Editor of "Tages-Anzeiger", Zürich

From checker at panix.com Thu Dec 29 02:34:56 2005 From: checker at panix.com (Premise Checker) Date: Wed, 28 Dec 2005 21:34:56 -0500 (EST) Subject: [Paleopsych] NYT: The Face of the Future Message-ID: The Face of the Future http://www.nytimes.com/2005/12/15/fashion/thursdaystyles/15FACE.html [Joel Garreau's new book, _Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies--and What It Means to be Human_ (NY: Doubleday, 2005) has just arrived. I am signed up to review it for _The Journal of Evolution and Technology_ and commenced reading it at once. Accordingly, I have stopped grabbing articles to forward until I have written my review *and* have caught up on my reading, this last going on for however many weeks it takes. I have a backlog of articles to send and will exhaust them by the end of the year. After that, I have a big batch of journal articles I downloaded on my annual visit to the University of Virginia and will dole out conversions from PDF to TXT at the rate of one a day. I'll also participate in discussions and do up an occasional meme. But you'll be on your own in analyzing the news. I hope I have given you some of the tools to do so.] By RUTH LA FERLA and NATASHA SINGER AS she waited for her pedicure at Just Calm Down, a day spa in the Chelsea neighborhood of Manhattan, Vicki Murray, a 30-year-old homemaker, found herself engaged in heated debate about extreme plastic surgery. "Sure, if my face were injured or disfigured, I would think about a transplant," Ms. Murray said, adding matter-of-factly that under such radical conditions she would trade in her face for a comelier model, one with, say, the vulcanized features of Angelina Jolie. Why not? Ms. Murray mused. "If celebrities put up their faces for auction after they died, people would be bidding on her features all the time." Debbie Greengrass, a friend, pondered that assertion. "I have nothing against plastic surgery," said Ms. Greengrass, 30, a nurse practitioner at a New Jersey fertility clinic, "but accepting a skin transplant from an organ donor just to look like Angelina Jolie somehow doesn't set right with me." The women's conversation, bizarre and of a sort customarily relegated to science fiction, was occasioned by the groundbreaking partial face transplant two and a half weeks ago in Amiens, France. A 38-year-old woman whose features had been gnawed away by her Labrador retriever received lips, a chin and a nose from a brain-dead donor. The procedure is considered by medical experts to be too experimental, and medically and ethically controversial, to have cosmetic applications. Nonetheless the prospect of being able to one day swap one's features for a prettier, more idealized configuration seems to have sent the imaginations of people into overdrive, fueling discussion and over-the-top fantasies at the proverbial water cooler.
Among doctors and nonprofessionals alike, the medical and scientific advances that made possible the first face transplant raise issues both practical and moral, and touch on matters pertaining to class, wealth and the more profound question of human identity. The idea that a face might one day be as interchangeable as a watchband, a concept long popularized in futuristic novels and films, engenders reactions of mingled revulsion and awe. "Replacing your features with those of a donor just to make yourself prettier - that idea is abhorrent," said Sally Cook, an author of children's books who lives in New York. But Ms. Cook, 51, added she was deeply impressed to learn that the procedure was available and would favor such an operation for patients who were disfigured from birth or as the result of an injury. Others, however, were more willing to entertain the possibility of a future in which a face transplant becomes a means by which one can trade in a shopworn mug as readily as exchanging an outmoded iPod for the newer, trimmer Nano model. "We're standing on the edge of a new frontier," said Dr. Anthony C. Griffin, a plastic surgeon in Beverly Hills, who appears on "Extreme Makeover," the ABC reality show. Dr. Griffin speculated that if a face transplant should become common practice, it would be easier to obtain abroad than in the United States. "We're too puritanical in America to ever allow face transplants for cosmetic reasons," he said, "but I can see someone like Michael Jackson flying to Paris for a nose transplant, although not in my lifetime." To many people, living at a time when plastic surgery has lost much of its stigma - total cosmetic medical treatments rose 24 percent over the past four years, says the American Society of Plastic Surgeons - the idea of a new face requires no great leap of imagination, despite the medical hurdles. Sam Shahid, an advertising art director who once created a magazine cover in which the model's face was a pastiche of the features of other models, predicted that although the notion is bound to seem horrific at first, "once people get over the shock," it would become "acceptable, perhaps desirable." A new face might one day be a covetable luxury item, suggested Scott Westerfeld, a science fiction writer, whose novels "Uglies" and "Pretties" project a future in which a compulsory operation at 16 makes everyone conform to an ideal standard of beauty. In that future world, "it's not just how much cosmetic surgery you get, it's how often," Mr. Westerfeld said, adding, "There will come a day when having extreme cosmetic surgery will be like buying a $1,000 Gucci bag, an indication that you are a member of the privileged class." Well before the first partial face transplant became a reality, writers, filmmakers and other visionaries were depicting that future as a promising, though grotesque, inevitability. As Hanif Kureishi has his hero observe in "The Body," his 2004 novel about a 60-year-old writer whose brain is transferred into the fresh corpse of a young man, "It seems logical that technology and medical capability only need to catch up with the human imagination or will." In October, Elle magazine published a cover depicting the distant future, one eerily anticipating the transplant in France. It featured Claudia Schiffer ("Still Sexy at 135!") with a cover line asking, "How He Feels About Your Face Transplant."
"We were being humorous," said Roberta Myers, the editor of Elle, acknowledging that thanks to movies and popular television dramas like FX's "Nip/Tuck," which explores the outer limits of plastic surgery, and reality shows like "The Swan," on Fox, which raises the possibility of an infinitely mutating identity through cosmetic surgery, people might not find the concept of a face transplant so far-fetched. (On a "Nip/Tuck" episode last week, the face of a brain-dead young woman was grafted onto a burn patient, whose immune system rejected the transplant.) Mr. Westerfeld noted that such themes, once the province of science fiction, now parallel mainstream attitudes about the self and identity. There is increasing acceptance that "as human beings we get to choose who we are," he said. "And the line between what you get to choose and don't choose is moving all the time." Plastic surgery first entered popular awareness in large part through movies, in a catalog of films dating back at least to the 1940's. In the noir classic "Dark Passage," Humphrey Bogart plays an escaped prisoner who seeks a plastic surgeon to give him a new face and identity. In "Eyes Without a Face," a 1959 French horror movie, a famous surgeon kidnaps young women to strip off their skin and graft it onto the face of his disfigured daughter. And in "Seconds," a 1966 John Frankenheimer fantasy, a nondescript aging bank clerk is transformed via surgery into a youth with the features of Rock Hudson. A more contemporary variation of that theme is encountered in "Face/Off," in which a terrorist (Nicolas Cage) trades faces with the F.B.I. agent (John Travolta) who is pursuing him. Filmmakers have been persistently fascinated by plastic surgery because, as the art historian J?rgen M?ller has written, "It is used to dramatize or reflect on the essence of identity." In his essay "Plastic Surgery in Movies," published in "Aesthetic Surgery" (Taschen, 2005), Dr. M?ller, chairman of the art history department at Dresden University, argues that in films like "Seconds" or Face/Off," the face is both the "proof and the expression of personality." "In this context, a look in the mirror brings with it the question of identity, of whether inside and outside still correspond," he writes. Off screen, in real life, some argue, a medical procedure that necessarily tampers with identity might take an unacceptable psychic toll. "The implications are shudder-worthy," said the writer Daphne Merkin. "Can you borrow someone else's features and still be you?" Noting that Botox and plastic surgery have already eroded the idea of character by erasing laugh and frown lines, she asserted that such a face transplant might eliminate the concept of character. "Are we equipped to deal with this aesthetic fungibility?" she said. In the case of the French transplant patient, Isabelle Dinoire, critics have raised questions about the psychological impact of having another person's features - in this case, a donor who may have committed suicide, it was revealed this week. Medical experts point out that a transplant recipient would never acquire exactly the features of another person, because the recipient's underlying bone structure would affect the way the skin appears. Face transplants are difficult and controversial in large part because of the risk that the recipient's immune system will reject the borrowed tissue. 
The patients must take strong immune-suppressing drugs for the rest of their lives, and these may cause cancer or be toxic to the heart, doctors say. The day of routine practical transplants, even of a part of the face, is "very, very far off," said Dr. Peter G. Cordeiro, the chief of plastic and reconstructive surgery at Memorial Sloan-Kettering Cancer Center in New York. And why bother with a transplant at all, some ask, when conventional surgery will do the job? "Many people walking around, especially celebrities, already have had so many procedures that they no longer look like themselves," said Dr. Frederic Brandt, a dermatologist in New York and Miami who has a large celebrity following. That notion is not lost on Janice Dickinson, a onetime supermodel and the author of "Everything About Me is Fake ... And I'm Perfect" (Regan Books, 2004). Ms. Dickinson, who acknowledged in an interview having had a number of cosmetic procedures, including a face-lift, confided only half in jest that she would not mind trading in her features for a classier set. "I've been dying to look like Iman Bowie," she said, referring to the model and cosmetics entrepreneur. Nor is the concept of a transplant as a mark of privilege utterly alien to Suzanne Yalof Schwartz, the executive fashion director at Glamour magazine. "If I had to have a face transplant, why not upgrade?" Ms. Schwartz asked. "I've lived long enough as a jalopy. I want to be a Jaguar." From checker at panix.com Thu Dec 29 02:36:59 2005 From: checker at panix.com (Premise Checker) Date: Wed, 28 Dec 2005 21:36:59 -0500 (EST) Subject: [Paleopsych] NYT Mag: New World Economy Message-ID: New World Economy http://select.nytimes.com/preview/2005/12/18/magazine/1124990127091.html [Joel Garreau's new book, _Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies--and What It Means to be Human_ (NY: Doubleday, 2005) has just arrived. I am signed up to review it for _The Journal of Evolution and Technology_ and commenced reading it at once. Accordingly, I have stopped grabbing articles to forward until I have written my review *and* have caught up on my reading, this last going on for however many weeks it takes. I have a backlog of articles to send and will exhaust them by the end of the year. After that, I have a big batch of journal articles I downloaded on my annual visit to the University of Virginia and will dole out conversions from PDF to TXT at the rate of one a day. I'll also participate in discussions and do up an occasional meme. But you'll be on your own in analyzing the news. I hope I have given you some of the tools to do so.] The Way We Live Now By MATT BAI In recent weeks, looking toward next year's midterm elections, leaders of both parties have engaged in highly charged arguments about withdrawal from Iraq, Medicaid shortfalls and allegations of Republican corruption. Anyone bothering to peruse the rest of the front page, however, might have noticed a few items that seemed tangentially related, but that, together, tell a story that is far more consequential for the next 50 years of American life. First, just before Thanksgiving, General Motors, buckling under the weight of $2 billion in losses, announced that it now planned to lay off 30,000 workers and scale back or close a dozen plants.
A few days later, at the traditional commencement of the holiday season, thousands of American consumers began lining up in the dark hours of morning to be among the first to pile into Wal-Mart, hoping to re-emerge with discounted laptops and Xboxes under their arms. Wal-Mart has now inherited G.M.'s mantle as the largest employer in the United States, which is why these snapshots of two corporations, taken in a single week, say more about America's economic trajectory than any truckload of spreadsheets ever could. G.M., of course, was the very prototype of 20th-century bigness, the flagship company for a time when corporate power was vested in the hands of a small number of industrial-era institutions. There is no question that rising labor costs hurt G.M., but that obscures the larger point of the company's decline; caught in the last century's mind-set, it has often been unable or unwilling to let consumers drive its designs, as opposed to the other way around. Must the company keep making Buicks and Pontiacs until the end of days, even as they recede into American lore? Many of the workers G.M. decided to lay off last month were its best and most productive. Their bosses simply couldn't give them a car to build that Americans really wanted to buy. As it happens, G.M.'s inability to adapt offers some perspective on our political process, too. Democrats in particular, architects of the finest legislation of the industrial age, have approached the global economy with the same inflexibility, at least since Bill Clinton left the scene. Just as G.M. has protected its outdated products at the expense of its larger mission, so, too, have Democrats become more attached to their programs than to the principles that made them vibrant in the first place. So what if Social Security and Medicaid functioned best in a world where most workers had company pensions and health insurance and spent their entire careers with one employer? The mere suggestion that these programs might be updated for a new, more consumer-driven economy sends Democratic leaders into fits of apoplexy. While G.M. rusts away like some relic from the last century, Wal-Mart beckons us toward our shrink-wrapped and discounted future. Wal-Mart's founding family is said to be wealthier than Bill Gates and Warren Buffett combined, and yet more than half of the company's employees don't receive health care, and its enduring quest to bring us lower prices drives down wages everywhere. Here we have the model for globalization as Republicans envision it - a world in which rugged entrepreneurialism is overly romanticized and the unskilled are expendable, and where shareholder profits are the only measure of success. Republicans have embraced the future of the global marketplace, but to them the future looks a lot like "Road Warrior." The debate over Wal-Mart centers on whether it is, on the whole, good or not so good. Jason Furman, a Democratic policy expert, has prepared a persuasive report for the Center for American Progress in which he notes that Americans may have saved as much as $263 billion last year - that's $2,329 per household - by shopping at Wal-Mart, which amounts to the equivalent of a massive tax break. This argument over Wal-Mart's virtue or villainy is interesting but ultimately academic; it is like having had an argument, at the dawn of the microchip, about the merits of automation.
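[The arithmetic behind Furman's two figures is easy to check. A minimal sketch in Python follows; the implied household count is an inference from the two numbers quoted above, not a figure taken from Furman's report.]

# Sanity check on the Wal-Mart savings figures quoted above:
# $263 billion in total annual savings at $2,329 per household
# implies a particular number of U.S. households. That count is
# inferred here from the two quoted numbers; it is not a figure
# from Furman's report itself.
total_savings = 263e9      # dollars per year, as quoted
per_household = 2329.0     # dollars per household per year, as quoted

implied_households = total_savings / per_household
print(f"Implied U.S. households: {implied_households / 1e6:.1f} million")
# Prints about 112.9 million, in line with Census estimates of
# roughly 112-113 million U.S. households in 2004-2005, so the
# two quoted numbers are internally consistent.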
The service economy is a reality of our time, and it would be wishful thinking to expect that its engine can sustain the middle class in the way that industry once did. Wal-Mart didn't ask to be the new G.M., and even if it wanted to treat its employees as generously, it couldn't; Furman concludes that Wal-Mart's profits would be obliterated by adopting companywide health care or a significant raise in wages. It makes little sense to blame one company for the pain caused by a profound economic transformation. What would be more constructive, probably, is a total reimagination of the basic contract between government, businesses and workers - a process that Clinton tentatively put in motion but that has since stalled as both parties retreated from the vexing challenges of globalization. After all, if you were going to sit down and create a system for our time, it probably wouldn't look much like the one we have. Does it make sense to expect businesses to finance lavish health care plans when foreign competition is forcing companies to cut their costs? Isn't government better equipped to insure a nomadic work force while employers take on the more manageable task of childcare - a problem that hardly existed 50 years ago? If government were to remove the burden of health care costs from businesses, enabling them to better compete, wouldn't it then be more reasonable to create disincentives for employers who are thinking of shipping their jobs overseas? Isn't the very notion of a payroll tax for workers antiquated and inequitable in a society where so many Americans earn stock dividends and where a growing number are self-employed? If they were to spend more time debating these and other longer-term questions, our politicians might have some small hope of leaving a legacy to match their predecessors' - a legacy better than the choice between the New Deal and no deal at all. Matt Bai, who covers national politics for the magazine, is working on a book about the Democratic Party. From checker at panix.com Thu Dec 29 02:37:25 2005 From: checker at panix.com (Premise Checker) Date: Wed, 28 Dec 2005 21:37:25 -0500 (EST) Subject: [Paleopsych] WebMd: Genes May Help Some People Stay Mentally Sharp Into Their 90s and Beyond Message-ID: Genes May Help Some People Stay Mentally Sharp Into Their 90s and Beyond http://www.webmd.com/content/Article/116/112178.htm By Miranda Hitti WebMD Medical News Reviewed By Ann Edmundson, MD on Thursday, December 15, 2005 Dec. 15, 2005 --- Some people seem wired to stay mentally sharp for 90 years or more. Just ask George Zubenko, MD, PhD.
He's not one of those quick-witted seniors (not yet, anyway), but he studied their genes. Zubenko is a University of Pittsburgh professor of psychiatry and biological sciences. He compared the genes of two groups of healthy people who differed in age by more than half a century. The results hint that genes affect the aging brain and that healthy lifestyles also count. In a news release, Zubenko calls the findings "exciting." He says, "Identifying such genetic and behavioral factors may hold promise for better understanding the aging process and perhaps one day enriching or extending the lives of other individuals." Aging America Aging is a timely topic, as the U.S. population ages. The CDC estimates that a baby born in 2003 has a life expectancy of 77.6 years. That's a record high. The CDC also predicts that people aged 55-64 will be America's fastest-growing age group for the next decade. Those people are practically youngsters next to the oldest people alive. About 450 people worldwide are reportedly older than 110. Sharp as Ever After 90 Zubenko's study included 200 people split into two age groups. One group included 100 elders whose minds hadn't lost much ground to aging. They were 94 people in their 90s and six centenarians. Most of the elders were living independently and could handle activities of their daily lives. Half were men. The second group consisted of 100 young adults aged 18-25 years. Zubenko matched them to the elders regarding sex, race, ethnic background, and geographic location. Gene Advantage Zubenko compared the groups' genes. He noticed that compared to the young adults, the elders had more of one genetic marker (the APOE E2 allele) and less of another (the APOE E4 allele). This genetic profile may offer some protection from Alzheimer's disease, though no one knows exactly what causes Alzheimer's. The study also shows some different gene patterns among the male and female elders. Zubenko notes that women often live longer than men, in a news release. "It would not be surprising if the collection of genes that influences the capacity to reach old age with normal mental capacity differs somewhat for men and women," he says. Lifestyle Counts, Too Genes are only part of the picture in healthy aging. Our circumstances and the way we treat ourselves can also make a difference. The elders in Zubenko's study had a few things in common, including lifestyle factors. Only one was a current smoker.
* 80% drank alcohol less than once a month
* None had a history of mental disorders in early or middle adulthood
Lifestyle factors such as diet, exercise, optimism, and social support weren't reported in Zubenko's study. SOURCES: Annual meeting of American College of Neuropsychopharmacology, Waikoloa, Hawaii, Dec. 11-15, 2005. News release, GYMR.
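[The study's key comparison - more APOE E2 and less E4 among the elders than among matched young adults - is a case-control allele-frequency contrast. A minimal sketch of how such a contrast is typically tested follows; the counts are hypothetical placeholders, and the article does not say which statistic Zubenko actually used.]

# Hypothetical illustration of a case-control allele comparison like
# the one described above: do carriers of a marker (say, APOE E2)
# occur at different rates among elders than among young adults?
# The counts below are invented placeholders, NOT data from the study.
from scipy.stats import chi2_contingency

#            carriers  non-carriers
table = [
    [30, 70],   # 100 elders (hypothetical counts)
    [15, 85],   # 100 young adults (hypothetical counts)
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# A small p-value would indicate that carrier frequency differs
# between the age groups by more than chance plausibly allows.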
From checker at panix.com Thu Dec 29 02:37:54 2005 From: checker at panix.com (Premise Checker) Date: Wed, 28 Dec 2005 21:37:54 -0500 (EST) Subject: [Paleopsych] Gregory Benford & Michael Rose: The Old Future Message-ID: Gregory Benford & Michael Rose: The Old Future http://www.benford-rose.com/tnf-intro.php The Old Future No, our time is not the end of history, just the end of old illusions about our journey through history. What we had thought of as our future did not arrive with the dawn of a new millennium. Whether religious, ideological, or merely pragmatic, all the old systems of futurist thought have become irrelevant, disposable, confusing more than helpful, Procrustean more than enlightening. Some have reacted with vicious negation to this loss of illusion, from Islamic radicals to Biblical fundamentalists to neo-Marxist academics. For such people, clinging to a fossilized set of beliefs is crucial to their psychological health. We can feel sorry for them, while fending off their assaults on our cities, our universities, and our culture with a steadfastness that should grow more obdurate as the obvious futility of their cause becomes clear. They are the cultural dinosaurs of our time, still destructive in their death throes, but as irrelevant to our future as Jove was in the early centuries of the first millennium A.D. Islamic radicals will be killing people by the thousands well into the 21st century. Our new future is too much for them. In OECD countries, most people have simply given up on ideology. They are bombarded with the fading rhetoric of the media, the edicts of bureaucrats, the spittle of Texas preachers, and the fulminations of antique radicals from Ralph Nader to Noam Chomsky. College students swim in the fetid sewage of political correctness during the day, but at night they will dance to misogynist hip-hop, play gratuitously violent video games, and get ripped on alcohol or drugs before fumbling toward ill-considered sex. They party to forget the day. Given the confusions and irrelevance of their professors, it is hard to criticize their opportunistic alternation between careerism and hedonism. Their parents have generally given up on all but the small satisfactions of middle age, having lost the hormonal surges of youth and the need or ability to prove themselves in new careers. Their world is adrift. It wasn't supposed to be like this, "In the future," as we always used to say. In the future, we would all wear the same clothes and have some mythic figure to lead us, whether benign or malign, a new Gandhi or another Big Brother. The future, as imagined from 1848 to 1989, was supposed to be some kind of collective transcendence. The paragon of the collectivist vision was the brief Khmer Rouge rule of Cambodia from 1975 to 1979. In that brief spasm, Rousseau, Thoreau, and Marx received their ultimate homage in the creation of a society that lacked almost any trace of freedom, civilization, or humanity. The Khmer Rouge suppressed education, destroyed medical care, demolished transportation infrastructure, and banished currency.
Instead, they sent everyone to countryside collectives to lead lives uncorrupted by capitalism-- lives of starvation, indoctrination, malaria, torture, and dysentery. Everyone who could have contributed medical or technological expertise they killed outright. All to escape modernity, to escape from freedom. Soviet-style communism was thin gruel compared to this grand celebration of the pernicious ideologies that descended from Rousseau and Marx. So it was natural that another scorpion in the bottle of post-American Southeast Asia, Vietnam, destroyed the Khmer Rouge. The Maoist killers of Cambodia were too perfect to last. As the French intellectuals (particularly Sartre) announced in 1975, the revolution supplied by the Khmer Rouge was the purest of all communist revolutions. The moral and intellectual bankruptcy of the Left is now complete. Only university faculty in Europe or the United States have the fatuity to believe in that ideological nightmare. While for many on the Right this collapse seems to make way for the triumph of the God-fearing faithful, their collective vision too is but an ugly echo of history. The most successful theocracy of modern times is that of Iran, where the mullahs wield ultimate power. There we have religious thought-police and dress codes. Yet the younger people, some of them now in middle age, lead lives of sexual promiscuity and drug abuse. Once the great mass of the Iranian people was delighted with their religious leaders. Certainly they were in 1980. Now they are mostly weary and cynical. The only thing that keeps the Religious Right in the United States from the same fate is the fact that they don't get to run the country in quite the manner that they want, George W. Bush, John Ashcroft, and the Patriot Act notwithstanding. We can all thank James Madison's Constitution for that. Ironically, even science fiction perpetuated the old myth of the future, the utopian vision. From Ursula K. Le Guin's anarchist fantasy The Dispossessed to Aldous Huxley's dystopian Brave New World, the future of science fiction often involved collectives of one kind or another. There might be a few renegades bravely fighting against the collective machinery of society, but that collective machinery was usually there. The shadows of Rousseau, Marx, Lenin, Hitler, and Mao have been too long, blocking out the vision even of the writers who were professional visionaries. Cracks in the Edifice No matter how many people the revolutionaries of the Left or the Right kill, no matter how mightily the politically-correct universities and publishers suppress the news about the new world being born, the Old Future is dead. Life at the start of the 21st Century is messy. People want the freedom to consume what they like, to sell their services at the highest price they can get, to say what they like in private, and to brandish their opinions on the internet. Regardless of the fascinations and fashions of religious fanatics, academics, journalists, or commercial writers, the lives of ordinary people have pursued similar goals throughout history. Most people want a happy family life, material comfort, and the opportunity to do what they like. These goals often conflict, most obviously for the indulged youth of the West and the Middle East. For them, choice and its conflicts often confuse. In turn this provokes the comforting abdication of freedom that political or religious zealotry provides. It feels so good to stop thinking, choosing, deciding!
Notably, alienated youth often become pragmatic parents and retirees. An exception is university faculty--among the most temperamentally youthful, not to say petulant and self-indulgent, of the middle-aged. None of the pragmatism of the 'silent majority' should be confused with virtue, civic or otherwise. The heroic and the altruistic appear in large human populations, but they are exceptional. For every gentile who harbored Jews at the peak of the Third Reich, there were hundreds and thousands who did not. Many disapproved of Hitler's holocaust, but were unwilling to risk their lives, families, or position to save even a few of the millions destined for extermination. We do not wish to idealize everyday pragmatism; it can be frighteningly callous. But fierce ideologies and intemperate faiths do not purchase the loyalty of the great mass for very long. Perhaps the most obvious sign of the decay of ideology is that most people are now tired of it. They only want peace, affluence, and fun. While Marxism is still the state religion of the People's Republic of China, just as Shiite Islam is the monolithic doctrine of revolutionary Iran, the Shanghai apparatchiks and Tehran mullahs are cutting deals on the side. Not only do most Chinese and Iranians just want to be better off, the cynicism of their rulers is also palpable. Only North Korea remains a monumental inferno of ideology. If it weren't for the risks inherent in its acquisition of atomic weapons and the vast suffering of its victims, it might be worth preserving as a museum exhibit of the follies of collectivism. Not the least of its charms lies in its conversion to monarchical despotism, with the son of the previous ruler inheriting absolute power. Journalists deplore the corrupt leaders of such regimes, missing the point that corruption is one of the most positive features of such societies. Violation of rigid ideals can mitigate the intimidation of the absolute state. The Khmer people of Cambodia knew that their rulers had feet of clay when the Khmer Rouge elite started to wear Rolex watches and fine silk scarves along with their revolutionary black garb. At that point, the fall of the Khmer Rouge from power was only months away. But now these small cracks have widened, bringing down (in the case of the Soviet Empire) or radically compromising (in the case of the PRC) most of the significant collectivist regimes. The sullen demeanor of ideologues, East and West, is now palpable. Perhaps the only substantial redoubt of insanely absolute faith is among Islamic terrorists. Ironically, their tradition of assassination and religious bloodshed is entirely authentic, dating back before the Christian Crusades. The term assassin itself is Arab in its origins, alluding to crazed fanatics who purportedly used hashish to fuel their deadly work. [A dubious notion, given the pacifying effects of hashish, but inappropriate derivation is common in etymology.] Whether modern nation-states will have to continue killing these people, or educational reform will cause them to wither away, is not decidable at present. Islamic terrorists seem to aspire to become the most rabid vermin of modern civilization, so perhaps they deserve little more than extermination. In any case, they hardly have the cachet that communists and anarchists had on the Left Bank or in faculty clubs, where morally bankrupt intellectuals used to sing the praises of one or another collectivist monster in order to impress, and often to bed, the young and impressionable.
The Countervailing Tradition There is a thoughtful tradition that has long opposed the powerful and the ideological. It is associated with Socrates, although it should be remembered that Socrates accepted the judgment of an intolerant Athens. Then his foremost student, Plato, only perpetuated Greek tendencies to absolutism. Aristotle, Plato's abandoned protégé, is perhaps a better candidate as a progenitor of the opposition to collectivism, though more in his generally empirical curiosity than his specific political proposals. George Orwell was the 20th Century's most generally accepted intellectual opponent of totalitarianism, particularly in 1984 and Animal Farm. Still, he harbored some collectivist ideas. After all, Orwell was a man of the Left, and fought alongside the communists and anarchists in the Spanish Civil War. Our view is that the clearest, and historically most important, expression of this tradition came out of the Scottish Enlightenment: David Hume, Adam Smith, Adam Ferguson, among others. This tradition emphasizes indirect effects, the futility of government attempts to control markets and international trade, the value of enterprise, and the limits to the benign effects of concerted action. This tradition had its most visible success with James Madison's Constitution for the new American republic, the vastly successful state that replaced the loose confederation of colonies that started the American rebellion against the English Crown. Madison was perhaps the greatest practical student of the Scottish Enlightenment, and certainly the person who most effectively set about implementing its precepts. His design for the new state was exquisitely, and indeed laboriously, contrived - see his Notes on the Constitutional Convention - to prevent the imposition of domestic despotism on the American people. [Of course George Washington preeminently guaranteed the American freedom from external despotism, but that is another story.] The United States of America has since shown both the value and the limitations of political and economic freedom for modern civilization. It certainly produces economic creativity and debate, with the crass and the tawdry as perhaps inevitable accompaniments. In the 20th Century, the themes of the Scottish Enlightenment were taken up again by such figures as Friedrich von Hayek, Karl Popper, and Michael Oakeshott, some of the most reviled authors in late 20th Century British and American universities. Their books, such as The Road to Serfdom, The Open Society and Its Enemies, and Rationalism in Politics, respectively, are among the foundation stones of an alternative tradition within the humanities and social sciences. Of course this tradition enjoys the marked hostility of the dominant traditions of contemporary critical theory, structuralism, deconstructionism, and the other nihilistic systems of thought in modern Western universities and colleges. For this reason, the very names of these titanic figures are often known among young people as little more than targets for passing abuse. Their names serve to wind up their professors in the advanced seminars that these pillars of mediocrity give to their benighted acolytes. In the natural sciences and related fields the thinking of Aristotle, Hume, and Popper has enjoyed the greatest influence. Indeed, one might point to the entire edifice of modern technology as the fruition of this tradition of thought.
Its empiricism and cautious speculation provide the cultural matrix for much of Western science. Charles Darwin, for example, can be seen as a child of this tradition, and indeed much of his thinking is an overt use (of Malthus) or implicit appropriation (of Hume's careful materialistic reasoning, for example) of themes and methods from the Scottish Enlightenment. From Darwin, 20th Century biology derived almost all of its intellectually cogent framework, which then enabled Anglo-American, reductionist, molecular and cell biologists to pursue the details of biological mechanism untrammeled by religious, idealist, or Hegelian claptrap. What are we about here? We wish to recruit new adherents. Our agenda is simply the view that solutions to political and cultural difficulties can be found in the deliberate cultivation of the empirical, individualistic, skeptical Western tradition. Put another way: We wish to drive a stake through the heart of the dominant cultural traditions of piety, correctness, ideology, and faith. Then we would like to dance on their graves. Western civilization used to be palpably great. Now it is too often mediocre, with enclaves of greatness: the military, the computer business, and scientific research. We're sure that you have your favorites. But it is more notable that we have been failing regularly in areas we used to dominate: spying, making cars, education, economic growth--pick your debacle. We want the West to have another resurgence of greatness, to be seen once again as the standard against which all other societies can be judged. We make no apology for ethnocentrism: "the West" is a cultural ideal, not a form of genetic differentiation. The cellist Yo-Yo Ma is a paragon of Western civilization as much as Mstislav Rostropovich, the great Japanese geneticist Motoo Kimura as much as Gregor Mendel. And by this standard, Adolf Hitler chose to be as much an enemy of the West as Cambodia's Pol Pot did. "The West" is an idea, a cultural tradition, an aspiration. It has survived through good and bad times since Periclean Athens. It has been the best hope for the entire species in our known history. Let us hope that we do not lose it as we stumble out of the dark charnel house that was the 20th Century, into the light of our new future. For we have a great future, if we will but seize it. From shovland at mindspring.com Thu Dec 29 02:45:13 2005 From: shovland at mindspring.com (Steve Hovland) Date: Wed, 28 Dec 2005 18:45:13 -0800 Subject: [Paleopsych] WebMd: Genes May Help Some People Stay Mentally Sharp Into Their 90s and Beyond In-Reply-To: Message-ID: They won't know it for 20 years, but in view of epigenetics, biotechnology is a dead end :-)
From ljohnson at solution-consulting.com Thu Dec 29 06:57:39 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Wed, 28 Dec 2005 23:57:39 -0700 Subject: [Paleopsych] WebMd: Genes May Help Some People Stay Mentally Sharp Into Their 90s and Beyond In-Reply-To: References: Message-ID: <43B388E3.9040003@solution-consulting.com> Somewhere (I need to look it up) I have some research reports about optimism and gratitude affecting gene expression - people who are very optimistic, happy, and grateful will live longer and healthier lives. Frank Forman and I were discussing the value (or lack of it, as Frank sees it) of a religious life, and Steve's gene expression emphasis suggests one of the values. Since both Christianity and Buddhism strongly emphasize gratitude as a vital virtue (and, I believe, Islam as well, though I'm not as sure), that may account for religious people tending to live longer. Religions remind one to feel forgiving and grateful. Grateful people are low in cortisol, high in DHEA, have stronger immune systems, and so on. So religion may help healthy gene expression. Of course, Frank, you can always also be grateful to the big bang et seq., but somehow it doesn't seem as soul-satisfying. So here is my effort at a hymn for materialists. Lynn

A hymn for Frank and Sarah

"We thank thee, dear Darwin, down under our feet,
For all life's developing complexity.
We thank thee for frontal lobes mighty and full,
And right temporal lobes where we feel mystery's pull.

Oh, dear father Hubble, as stars rush away,
We're glad they have given us an earth where we stay.
And for a world tilted just twenty-one degrees,
That makes life adjust to the changes we need.

The Anthropic Principle fills hearts with delight,
As we ponder the chances that life would arrive
From strong and weak forces ideally aligned
To tickle our minds with the presence divine.

Now let's nurture gratitude deep in our hearts,
So good gene expression will sure do its part
To lengthen out full lives for you and for me
To create our very own divinity!

copyright (c) 2005 lynn johnson - distribution is encouraged and will be gratefully appreciated. Direct criticism to whocares at deadletter.com Useful graphic: http://universe-review.ca/I02-21-multiverse3.jpg Lynn D. Johnson, Ph.D. Solutions Consulting Group 166 East 5900 South, Ste. B-108 Salt Lake City, UT 84107 Tel: (801) 261-1412; Fax: (801) 288-2269 Check out our webpage: www.solution-consulting.com Feeling upset? Order Get On The Peace Train, my new solution-oriented book on negative emotions.
Steve Hovland wrote: >They won't know it for 20 years, but in view of epigenetics, >biotechnology is a dead end :-)
> >This genetic profile may offer some protection from Alzheimer's disease, >though no one knows exactly what causes Alzheimer'sAlzheimer's. > >The study also shows some different gene patterns among the male and >female elders. Zubenko notes that women often live longer than men, in a >news release. > >"It would not be surprising if the collection of genes that influences >the capacity to reach old age with normal mental capacity differs >somewhat for men and women," he says. > >Lifestyle Counts, Too > >Genes are only part of the picture in healthy aging. Our circumstances >and the way we treat ourselves can also make a difference. > >The elders in Zubenko's study had a few things in common including >lifestyle factors. Only one was a current smoker. > > * 80% drank alcohol less than once a month > * None had a history of mental disorders in early or middle >adulthood > >Lifestyle factorsLifestyle factors such as diet, exercise, optimism, and >social support weren't reported in Zubenko's study. > >SOURCES: Annual meeting of American College of Neuropsychopharmacology, >Waikoloa, Hawaii, Dec. 11-15, 2005. News release, GYMR. >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > > > From shovland at mindspring.com Thu Dec 29 14:30:00 2005 From: shovland at mindspring.com (Steve Hovland) Date: Thu, 29 Dec 2005 06:30:00 -0800 Subject: [Paleopsych] WebMd: Genes May Help Some People Stay MentallySharp Into Their 90s and Beyond In-Reply-To: <43B388E3.9040003@solution-consulting.com> Message-ID: These might be examples of epigenetic control. -----Original Message----- From: paleopsych-bounces at paleopsych.org [mailto:paleopsych-bounces at paleopsych.org]On Behalf Of Lynn D. Johnson, Ph.D. Sent: Wednesday, December 28, 2005 10:58 PM To: The new improved paleopsych list Subject: Re: [Paleopsych] WebMd: Genes May Help Some People Stay MentallySharp Into Their 90s and Beyond Somewhere (I need to look it up) I have some research reports about optimism and gratitude affecting gene expression - people who are very optimistic, happy, and grateful will live longer and healthier lives. Frank Forman and I were discussing the value (or lack of it, as Frank see it) of a religious life, and Steve's gene expression emphasis suggests one of the values. Since both Christianity and Buddhism strongly emphasize gratitude as a vital virtue (and I believe Islam, not as sure), that may account for religious people tending to live longer. Religions remind one to feel forgiving and grateful. Grateful people are low in cortisol, high in dhea, stronger immune systems, and so on. So religion may help healthy gene expression. Of course, Frank, you can always also be grateful to the big bang et seq. but somehow it doesn't seem as soul-satisfying. So here is my effort at a hymn for materialists. Lynn A hymn for Frank and Sarah "We thank thee, dear Darwin, down under our feet, For all life's developing complexity. We thank thee for frontal lobes mighty and full, And right temporal lobes where we feel mystery's pull. Oh, dear father Hubble, as stars rush away, We're glad they have given us an earth where we stay. And for a world tilted just twenty-one degrees, That makes life adjust to the changes we need. 
From thrst4knw at aol.com Thu Dec 29 16:16:45 2005 From: thrst4knw at aol.com (Todd I. Stark) Date: Thu, 29 Dec 2005 11:16:45 -0500 Subject: [Paleopsych] WebMd: Genes May Help ... (religion, belief, and well being) In-Reply-To: <43B388E3.9040003@solution-consulting.com> References: <43B388E3.9040003@solution-consulting.com> Message-ID: <43B40BED.7080202@aol.com> Lynn, this is beautiful, in its own gently cynical way. It shows dramatically the *emotional* level of the misunderstanding between theists and atheists that I think _also_ drives those misguided attempts to save our souls from science, and drives even some more or less educated folks over to loyalist political movements like intelligent design. The most interesting thing about it is the compelling intuitive assumption that the meaning of life must be found somewhere in universal laws of physics or biology, of all places. Personally, I find it absolutely astonishing that anyone could find reassurance in any sense in the "fine tuning" of constants of the universe.
My suspicion is that there is a fine-edged wedge that we all teeter on in our early development, and we all either fall down on one side or the other as we mature. On one side of the wedge are those of us who imagine nature as having a spiritual presence and find that reassuring. I'm guessing that most people are on that side of the wedge. On the other side are those of us who have a lot of trouble imagining nature having a spiritual presence, and aren't very much reassured by it when we do imagine it. I think when we look seriously at the theory that religion "reminds" us to be forgiving and grateful, I agree that it may have some validity in some abstract sense, but not in the straightforward way we tend to think of it intuitively. Emotional response patterns are influenced by a mixture of temperament and cognitive habits, and religious belief by no means has a consistent effect on cognitive patterns regarding emotional response! Think about it. That would be like saying that Christians all tend to respond the same way to the same situations because of certain particular religious beliefs they hold, and that Jews respond to the same situations systematically differently because their particular religious beliefs are different. Even in the case of theology this doesn't hold up. There are liberal Jews who think more like liberal Protestants than like conservative Jews on most issues, for example. We can find common patterns whereby beliefs cluster, but I don't think they cluster around particular items of creed that religions find so important in distinguishing themselves. Even the very belief in a deity doesn't particularly distinguish us morally or ethically. The hypothesis that religious beliefs in particular guide cognition in any global way just doesn't seem very plausible to me. Religion and its embedded culture do have all sorts of aspects that affect social conditions and how we develop. Our temptation is to overemphasize the "belief" or "creed" aspect of religion, and attribute everything to that, when in reality, I think it is one of the less important aspects of religion in terms of its effects on our well-being. I strongly suspect that our temptation to focus on "belief" is driven by an instinct to segregate ourselves based on different ways of thinking: we try to discern each others' "beliefs" in order to help predict their behavior. So when we think about each other, we tend to think of them in terms of what we imagine people to believe, and we want to attribute their goodness or well-being to what they believe as well. To put it another, perhaps melodramatic way, there are an awful lot more forgiving, humble, grateful, ethical atheists and a whole lot more unforgiving, arrogant, dishonest theists than we should expect as a prediction of the theory that religion particularly reminds us to be good or reminds us to be humble. We truly need to look farther than people's religious beliefs to find the real source of human goodness and the relationship between culture and well-being, in my opinion. It appears to me that the world's religions are collectively like a huge canvas which we all look at in our own selective way for the pieces we need to reassure ourselves of what we already believe. Thanks again for the beautiful prose. Now if you can come up with a cool humanist holiday to rival Christmas, I'll be really impressed! warm regards, Todd Lynn D. Johnson, Ph.D.
wrote on 12/29/2005, 1:57 AM: > Somewhere (I need to look it up) I have some research reports about > optimism and gratitude affecting gene expression - people who are very > optimistic, happy, and grateful will live longer and healthier lives. From ljohnson at solution-consulting.com Thu Dec 29 19:46:29 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Thu, 29 Dec 2005 12:46:29 -0700 Subject: [Paleopsych] WebMd: Genes May Help ... (religion, belief, and well being) In-Reply-To: <43B40BED.7080202@aol.com> References: <43B388E3.9040003@solution-consulting.com> <43B40BED.7080202@aol.com> Message-ID: <43B43D15.3040906@solution-consulting.com> Todd, I think you might be misunderstanding my argument. Anyway, you are supposed to send criticisms to another email address, I think. Comments below. Todd I. Stark wrote: >Lynn, this is beautiful, in its own gently cynical way. > >It shows dramatically the *emotional* level of the misunderstanding >between theists and atheists that I think _also_ drives those misguided >attempts to save our souls from science, and drives even some more or >less educated folks over to loyalist political movements like >intelligent design. > > > No, you misread. The key is gratitude, optimism, and so on. Such emotional states drive a positive hormone environment. BTW, you can get a lot of that from owning dogs and cats that you pet, since that also elevates DHEA.
I brought in the business about religion because Frank had recently said to me he couldn't see the value in it. Since he is not going to become an adherent, I came up with a quasi-mystical hymn he could sing at random times throughout the year, so as to raise his good hormones. Or maybe he should pet a dog. >The most interesting thing about it is the compelling intuitive >assumption that the meaning of life must be found somewhere in universal >laws of physics or biology, of all places. Personally, I find it >absolutely astonishing that anyone could find reassurance in any sense >in the "fine tuning" of constants of the universe. > Sounds like your right temporal lobe is going to waste, hummm??? > My suspicion is that >there is a fine-edged wedge that we all teeter on in our early >development, and we all either fall down on one side or the other as we >mature. On one side of the wedge are those of us who imagine nature as >having a spiritual presence and find that reassuring. I'm guessing >that most people are on that side of the wedge. On the other side are >those of us who have a lot of trouble imagining nature having a >spiritual presence, and aren't very much reassured by it when we do >imagine it. > > > Agreed. Most people are on the believing side, 80% - 90% in the US, less in godless Europe, but what the heck, they'll all be Muslim within 100 years anyway. Perhaps 40% - 50% of serious scientists are theists. http://en.wikipedia.org/wiki/Religion 2.1 billion Christians, 1.1 billion Muslims, 1 billion secularists, lots of other stuff. >I think when we look seriously at the theory that religion "reminds" us >to be forgiving and grateful, I agree that it may have some validity in >some abstract sense, but not in the straightforward way we tend to think >of it intuitively. > Well, if you go to church, you will be very literally reminded of it, and quite straightforwardly. >Emotional response patterns are influenced by a >mixture of temperament and cognitive habits, and religious belief by no >means has a consistent effect on cognitive patterns regarding emotional >response! > Nothing has a consistent and straightforward effect, but generally there is a strong elevating message there. It does have an effect, if I look at my own life and that of others. >Think about it. That would be like saying that Christians >all tend to respond the same way to the same situations because of >certain particular religious beliefs they hold, and that Jews respond to >the same situations systematically differently because their particular >religious beliefs are different. > No, they are about the same. There is very little difference in core values, except that Christians have a stronger injunction to forgive. Not absolute, just stronger. See recent essays by Dennis Prager on that, a professing Jew who points out how very similar his values are to Christian values, which he sees as proceeding from the Jewish foundation. He recently wrote about being criticized by his Jewish friends for supporting Christians, but he thinks such divisiveness is silly. >Even in the case of theology this >doesn't hold up. There are liberal Jews who think more like liberal >Protestants than like conservative Jews on most issues, for example. > Yes, but they aren't the happy ones (come on, it is a joke!) >We >can find common patterns whereby beliefs cluster, but I don't think they >cluster around the particular items of creed that religions find so >important in distinguishing themselves. 
Even the very belief in a deity >doesn't particularly distinguish us morally or ethically. The >hypothesis that religious beliefs in particular guide cognition in any >global way just doesn't seem very plausible to me. > > > Hum . . . evidence? Surveys? So how to explain the pro-social benefits of religious adherence? That was my topic, I thought. Maslow found spiritually committed people survived concentration camps better than the secular and non-believing. That has been recently supported in various meta-analyses; cf. http://archfami.ama-assn.org/cgi/content/full/7/2/118 Do an APA lit search on _religion_ and _benefits_. I am too lazy to get to it right now. >Religion and its embedded culture do have all sorts of aspects that >affect social conditions and how we develop. Our temptation is to >overemphasize the "belief" or "creed" aspect of religion, and attribute >everything to that, when in reality, I think it is one of the less >important aspects of religion in terms of its effects on our well-being. > > > Not what Maslow found. Belief had a very positive effect. Belief is amazingly robust as a driving force in our behavior. See Seligman's work on learned helplessness, learned optimism, and the attitudinal (belief-oriented) components of resistance to depression. >I strongly suspect that our temptation to focus on "belief" is driven by >an instinct to segregate ourselves based on different ways of thinking: >we try to discern each other's "beliefs" in order to help predict their >behavior. So when we think about other people, we tend to think of them >in terms of what we imagine them to believe, and we want to attribute >their goodness or well-being to what they believe as well. > > Not a bad argument, but too limiting. It could be one factor, but there are more powerful benefits of a robust belief system. >To put it another, perhaps melodramatic way, there are an awful lot more >forgiving, humble, grateful, ethical atheists and a whole lot more >unforgiving, arrogant, dishonest theists than we should expect as a >prediction of the theory that religion particularly reminds us to be >good or reminds us to be humble. > > > Citations? Surveys? Some of the evidence you may offer would be rather suspect, such as Adorno et al.'s F-scale, which I think turns out to have no real validity. Adorno was a True Believer, and knew what he wanted before starting his research (see Robert Rosenthal). >We truly need to look further than people's religious beliefs to find >the real source of human goodness and the relationship between culture >and well-being, in my opinion. > > > Have you read the stuff on vertical and horizontal religion by Allport? 
From shovland at mindspring.com Thu Dec 29 22:31:29 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Thu, 29 Dec 2005 14:31:29 -0800 (GMT-08:00) Subject: [Paleopsych] WebMd: Genes May Help ... (religion, belief, and well being) Message-ID: <1562903.1135895489625.JavaMail.root@mswamui-chipeau.atl.sa.earthlink.net> What we call "spiritual" may be memories of ancient science lost in "the flood." If this seems strange, why would we think this is the first time high civilization has arisen on this planet? Most people live close to the ocean, and last year we saw what can happen. What would happen if multiple large meteor strikes caused tidal waves in both the Atlantic and Pacific, wiping out the major cities on the coasts? How long would it be before the survivors were reduced to a much simpler life style? -----Original Message----- >From: "Lynn D. Johnson, Ph.D." >Sent: Dec 29, 2005 11:46 AM >To: "Todd I. Stark" , The new improved paleopsych list >Subject: Re: [Paleopsych] WebMd: Genes May Help ... (religion, belief, and well being) 
From shovland at mindspring.com Fri Dec 30 05:40:47 2005 From: shovland at mindspring.com (Steve Hovland) Date: Thu, 29 Dec 2005 21:40:47 -0800 Subject: [Paleopsych] WebMd: Genes May Help ... (religion, belief, and well being) In-Reply-To: <43B4925F.2040106@solution-consulting.com> Message-ID: I looked at Elane Durham's stuff. Gordon Michael Scallion sees similar things. I think these things happen in different places at different times. 
So everyone knows about the floods that occur and cleanse the Earth. On balance I don't lose too much sleep over these visions. I'm more inclined to live full speed ahead these days. If the wave comes I'd just as soon go fast rather than starve in a Mad Max scenario, although I do have my gun :-) And then I put on my Jungian hat and think about it in symbolic terms, which I think some of those people tend not to do. Imagine a great flood of consciousness inundating the Earth, transforming our common view of the world in what would seem like the blink of an eye. Is the instant global communication of the web that flood? steve -----Original Message----- From: Lynn D. Johnson, Ph.D. [mailto:ljohnson at solution-consulting.com] Sent: Thursday, December 29, 2005 5:50 PM To: Steve Hovland Subject: Re: [Paleopsych] WebMd: Genes May Help ... (religion, belief, and well being) Interesting. I have never heard that. I took some geology; here is the real deal. The GSL is the remnant of a huge inland sea called Lake Bonneville. At some time in the past, there was a cataclysmic breach at its northernmost point, and Lake Bonneville ran out through what is now the Snake River drainage. The flood was 400 feet high, at least, raging down the Snake drainage to the Pacific Ocean. It had to be a huge event; it apparently happened about 15,000 years ago. I believe there are Indian legends about it. I found an interesting site: http://imnh.isu.edu/digitalatlas/hydr/lkbflood/lbf.htm also: http://vulcan.wr.usgs.gov/Glossary/Glaciers/IceSheets/description_lake_bonneville.html One can argue that there have been several Noah-type devastating floods. I think what we should learn from this is that the past is far more catastrophic and dramatic than we had ever believed. So it is not unlikely that the future will mirror that. Maybe you should move to Reno. Did you look at the Elane Durham stuff? What did you think of her visions? Lynn Lynn D. Johnson, Ph.D. Solutions Consulting Group 166 East 5900 South, Ste. B-108 Salt Lake City, UT 84107 Tel: (801) 261-1412; Fax: (801) 288-2269 Check out our webpage: www.solution-consulting.com Feeling upset? Order Get On The Peace Train, my new solution-oriented book on negative emotions. Steve Hovland wrote: I have heard that the Great Salt Lake is the remains of a wave that came in from the Pacific and didn't stop until it hit the Rocky Mountains... -----Original Message----- From: Lynn D. Johnson, Ph.D. [mailto:ljohnson at solution-consulting.com] Sent: Thursday, December 29, 2005 2:53 PM To: shovland at mindspring.com Subject: Re: [Paleopsych] WebMd: Genes May Help ... (religion, belief, and well being) Steve, I am sympathetic to your thoughts. There is pretty good evidence that a monstrous flood destroyed at least one civilization in what is now the Black Sea area. They have found buildings on the floor of that sea, and the notion is that it was probably wiped out when a natural dam holding back the sea gave way, and within days the whole area was under water. Orson Scott Card wrote a science fiction piece about a machine called Pastwatch, with which historians could watch the past; they watch that event, which becomes the basis for the Noah story in the Bible (that is, a man figures out the geography and how the natural dam is failing, and builds a big boat to try to ride out the ensuing flood). He later turned the idea into his book, Pastwatch: The Redemption of Christopher Columbus, which is in my view one of his most inventive books ever. 
Obviously, a catastrophe would send almost all of us back to subsistence living, and most of us - me included - don't have many of the skills needed (farming, hunting/gathering, pottery, food preserving, shelter building, and so on). I truly feel we live on the edge of a knife, a civilization balanced on a pyramid that could collapse. An asteroid strike or two would clearly do the trick. This is way off the topic, but in my own religious tradition there is a lot of cataclysmic prophecy, breakdown of society, Mad Max groups warring one against another, and so on, so Mormons are often more interested in the scenarios you suggest. See this link for an interesting catastrophe prophecy: http://www.near-death.com/forum/nde/000/75.html Elane is a personal friend. I met her when I started a local NDE support group. She was not LDS (Mormon) at the time of the vision, but the idea of a mid-continent center of power is right from LDS prophecies, in which Jackson County, MO, will become the location of a New Jerusalem in the post-apocalyptic last days. So her vision fits in with our own ideas. Happy New Year, and let us pray for another year of dodging the bullet! I am not too interested in being in the middle of catastrophes! Lynn Lynn D. Johnson, Ph.D. Solutions Consulting Group 166 East 5900 South, Ste. B-108 Salt Lake City, UT 84107 Tel: (801) 261-1412; Fax: (801) 288-2269 Check out our webpage: www.solution-consulting.com Feeling upset? Order Get On The Peace Train, my new solution-oriented book on negative emotions. 
From guavaberry at earthlink.net Fri Dec 30 19:12:50 2005 From: guavaberry at earthlink.net (K.E.) Date: Fri, 30 Dec 2005 14:12:50 -0500 Subject: [Paleopsych] Netizens expose scientific fraud in South Korea Message-ID: <7.0.0.16.0.20051230141227.033eba88@earthlink.net> Netizens expose scientific fraud in South Korea "South Korean 'Netizens of the Year': The online scientific community and Internet media challenge old hierarchies" http://english.ohmynews.com/articleview/article_view.asp?menu=c10400&no=266352&rel_no=1 and "Korean Cloning Hero Deconstructed Online: Online Scientific Community in South Korea Uncovers Fabrication of Data in Acclaimed Stem Cell Research Papers" http://www.heise.de/tp/r4/artikel/21/21647/1.html best, Karen Ellis <>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<> The Educational CyberPlayGround http://www.edu-cyberpg.com/ National Children's Folksong Repository http://www.edu-cyberpg.com/NCFR/ Hot List of Schools Online and Net Happenings, K12 Newsletters, Network Newsletters http://www.edu-cyberpg.com/Community/ 7 Hot Site Awards New York Times, USA Today, MSNBC, Earthlink, USA Today Best Bets For Educators, Macworld Top Fifty <>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<>~~~~~<> From checker at panix.com Fri Dec 30 19:29:23 2005 From: checker at panix.com (Premise Checker) Date: Fri, 30 Dec 2005 14:29:23 -0500 (EST) Subject: [Paleopsych] NYT: Gene That Determines Skin Color Is Discovered, Scientists Report Message-ID: Gene That Determines Skin Color Is Discovered, Scientists Report http://www.nytimes.com/2005/12/16/science/16gene.html?pagewanted=print [It's significant that the usual warnings against racism were not made by the reporter.] By NICHOLAS WADE A gene that is responsible for the pale skin of Europeans and the dark skin of Africans has been discovered by scientists at Pennsylvania State University. 
The gene comes in two versions, one of which is found in 99 percent of Europeans and the other in 93 to 100 percent of Africans, the researchers report in today's issue of Science. The gene is unusual because with most human genes, different versions are generally shared, though one version may be more common in one race than another. One exception is the Duffy null allele, a version of a gene that prevents malaria, that is found almost exclusively in one race, sub-Saharan Africans. The new gene falls into the same category as the Duffy gene, and it may shed light on the evolutionary pressures to which Europeans were subjected as their ancestors, who were presumably dark skinned, moved into the northern latitudes some 40,000 years ago. Humans acquired dark skins in Africa about 1.5 million years ago to shield their newly hairless bodies from the sun. Its ultraviolet rays destroy folic acid, a shortage of which leads to birth defects. But when the modern humans who left Africa began to live in northern latitudes, they needed more sunlight to penetrate the skin, to permit the chemical reaction that produces vitamin D. The new gene was first identified not in humans but in a mutant zebra fish, a small striped fish common in aquariums. The mutant fish are known as golden, because their stripes, usually black, are much paler and their bodies more yellow. Dr. Keith C. Cheng, an author of the report, and his colleagues showed that the golden version of the fish gene altered production of melanosomes, the tiny black particles of pigment that give skin its color. The researchers then found that in humans, who have their own form of the gene, the version common in Africans allowed larger melanosomes, which tend to clump together, whereas the version in Europeans produced smaller and more dispersed melanosomes. Asians have the same version of the gene as Africans, so they presumably acquired their light skin through the action of some other gene that affects skin color, said Dr. Cheng. Mark D. Shriver, another author of the article, said his laboratory was trying to assess when the European version of the gene became so common, as well as its geographical origin. The intense selective pressure that drove the version to become universal in Europeans may have included sexual selection. "In Africa people are much darker than they need to be for UV protection, so to me that screams sexual selection," Dr. Shriver said. Black skin, in other words, may have been favored by men and women in sexual partners, just as pale skin may have been preferred in sexual partners among Europeans and Asians. From checker at panix.com Fri Dec 30 19:30:00 2005 From: checker at panix.com (Premise Checker) Date: Fri, 30 Dec 2005 14:30:00 -0500 (EST) Subject: [Paleopsych] NPR: Researchers Discover Skin Color Gene Message-ID: Researchers Discover Skin Color Gene http://www.npr.org/templates/story/story.php?storyId=5055391 [This time we do get a warning about racism. No surprise, considering it's National Public Radio.] By Nell Boyce [Photo caption: Thanks to a mutation in a gene that controls skin pigmentation, the "golden" zebrafish has stripes that are much fainter than the black and white stripes of normal zebrafish. Now researchers have found a similar gene in humans.] All Things Considered, December 15, 2005 -- Scientists say they've found a gene that seems to partially control skin color. 
And they say that a small change in the gene could explain why people with European ancestry tend to have different coloring than people of African or Asian descent. Scientist Keith Cheng says he got drawn into the emotionally charged field of race and genetics because of his interest in a small, tropical fish. "Of course I had trepidations," laughs Cheng, who is a cancer researcher at Penn State College of Medicine in Hershey, Pa. "But my curiosity overwhelmed my trepidation, and this amazing fact that this fish that was found in a pet store might inform us about skin color in a major way was just too much to resist." Cheng normally studies genetic mutations that cause cancer, with the help of minnows called zebrafish. Usually zebrafish are white with black stripes. But there's also a "golden" variety that has much fainter stripes. Cheng noticed that the difference in skin pigmentation patterns between these two fish varieties seemed to mimic the pigmentation differences seen in people with either dark or light skin. It made Cheng wonder if he could use this fish to explore why people have different skin colors. "How can you not be curious about why an Asian might look different from a Caucasian or look different from an African person?" Cheng asks. "That's very interesting. You look different. Why is that?" Cheng may have discovered part of the answer, with the help of his lab fish. His team discovered a skin pigmentation gene that, when mutated, causes the "golden" pattern in zebrafish. And then they looked for a similar gene in humans. In the journal Science, his team reports that people do have a similar gene. In fact, there are two common versions. One showed up in almost all DNA samples taken from small groups of people living in Africa and Asia. The other version appeared in almost all of the people they tested who were of European descent. The researchers also used measurements of light reflection to evaluate skin coloration in a group of people with so-called mixed ancestry. "On average, and I need to point out on average, the variation correlates with skin color," says Cheng. People with the European version of the gene tended to have lighter skin. Cheng and his colleagues say this information could be useful for studying skin cancer, or for finding new ways of changing skin color that wouldn't be as damaging as tanning. But police officers are likely to be interested, too. Already, some officials are testing DNA left at crime scenes to get clues about what the culprit might look like. Tony Frudakis runs DNAPrint Genomics, a Florida-based company that uses gene markers associated with geographic ancestry to give police a general sense of whether someone might look more black or white. In one case, he says, such "DNAWitness" testing recently helped track down a serial killer in Louisiana. "They had been targeting a Caucasian individual based on faulty eyewitness testimony," says Frudakis. "We showed that the samples found at the crime scene corresponded to someone who was of predominantly sub-Saharan African ancestry. So this sort of changed the profile of who they were looking for." He says his company has done this kind of testing for a wide variety of law-enforcement agencies, including those in places like New York, Chicago and Los Angeles. Frudakis believes that this method of profiling could be improved by testing genes for more specific features like eye color or height. And he thinks this new gene for skin color is a step towards that goal. 
Other scientists agree that genes do control a lot of a person's appearance; identical twins are the perfect illustration. But genetic experts emphasize that this new discovery about skin color is a long way from being able to use gene tests to reconstruct exactly what a person looks like. "Having or not having this particular variant will not allow you to say what shade that person's skin might be, within anything other than very wide limits," says Francis Collins, director of the National Human Genome Research Institute. That's because skin color is controlled by multiple genes. And Collins also worries that people will confuse skin color with race. "This is most definitely, and let me emphasize this even more, not the gene for race, which is something I've heard a couple people already say when they heard about this result," Collins says. "There is no gene for race." Collins says the social idea of race depends on all kinds of cues beyond physical appearance and skin color, everything from your neighborhood to your family traditions to your clothes. And that's what leads Pilar Ossorio, a scientist and lawyer at the University of Wisconsin, to question how useful genetic tests for skin color and ancestry will be for profiling crime suspects. Consider this, Ossorio says: What if someone with a lot of genes for Native American ancestry and medium-brown skin speaks Spanish and lives in a Hispanic community? "That person could be living in the world as a Hispanic person and the police would probably not connect that Hispanic person with the profile that they got," she says. "We use a lot of things to understand what race someone is, what ethnicity they are, where they fit in our social world." And for most of them, she points out, there is no genetic test. From checker at panix.com Fri Dec 30 19:30:47 2005 From: checker at panix.com (Premise Checker) Date: Fri, 30 Dec 2005 14:30:47 -0500 (EST) Subject: [Paleopsych] WP: Scientists Find A DNA Change That Accounts For White Skin Message-ID: Scientists Find A DNA Change That Accounts For White Skin http://www.washingtonpost.com/wp-dyn/content/article/2005/12/15/AR2005121501728_pf.html [Another warning against racism, the previous one coming from National Public Radio, while the Times' account had no warning at all. [Actually, it's the Times that has changed. Generally speaking (having read the Times since 1962 and the Post since 1969), the Times is more liberal than the Post when it comes to new programs to uplift the despised, the downtrodden, and the dispossessed, whereas the Times is more conservative in cautioning against the potential for depredations and inroads on civil liberties than the Post. No surprise this last, since the Post is published in a government town. [It's on the matter of racial differences that the Times, or certainly Nicholas Wade, has become more open-minded.] By Rick Weiss Washington Post Staff Writer Friday, December 16, 2005; A01 Scientists said yesterday that they have discovered a tiny genetic mutation that largely explains the first appearance of white skin in humans tens of thousands of years ago, a finding that helps solve one of biology's most enduring mysteries and illuminates one of humanity's greatest sources of strife. 
The work suggests that the skin-whitening mutation occurred by chance in a single individual after the first human exodus from Africa, when all people were brown-skinned. That person's offspring apparently thrived as humans moved northward into what is now Europe, helping to give rise to the lightest of the world's races. Leaders of the study, at Penn State University, warned against interpreting the finding as a discovery of "the race gene." Race is a vaguely defined biological, social and political concept, they noted, and skin color is only part of what race is -- and is not. In fact, several scientists said, the new work shows just how small a biological difference is reflected by skin color. The newly found mutation involves a change of just one letter of DNA code out of the 3.1 billion letters in the human genome -- the complete instructions for making a human being. "It's a major finding in a very sensitive area," said Stephen Oppenheimer, an expert in anthropological genetics at Oxford University, who was not involved in the work. "Almost all the differences used to differentiate populations from around the world really are skin deep." The work raises a raft of new questions -- not least of which is why white skin caught on so thoroughly in northern climes once it arose. Some scientists suggest that lighter skin offered a strong survival advantage for people who migrated out of Africa by boosting their levels of bone-strengthening vitamin D; others have posited that its novelty and showiness simply made it more attractive to those seeking mates. The work also reveals for the first time that Asians owe their relatively light skin to different mutations. That means that light skin arose independently at least twice in human evolution, in each case affecting populations with the facial and other traits that today are commonly regarded as the hallmarks of Caucasian and Asian races. Several sociologists and others said they feared that such revelations might wrongly overshadow the prevailing finding of genetics over the past 10 years: that the number of DNA differences between races is tiny compared with the range of genetic diversity found within any single racial group. Even study leader Keith Cheng said he was at first uncomfortable talking about the new work, fearing that the finding of such a clear genetic difference between people of African and European ancestries might reawaken discredited assertions of other purported inborn differences between races -- the most long-standing and inflammatory of those being intelligence. "I think human beings are extremely insecure and look to visual cues of sameness to feel better, and people will do bad things to people who look different," Cheng said. The discovery, described in today's issue of the journal Science, was an unexpected outgrowth of studies Cheng and his colleagues were conducting on inch-long zebra fish, which are popular research tools for geneticists and developmental biologists. Having identified a gene that, when mutated, interferes with its ability to make its characteristic black stripes, the team scanned human DNA databases to see if a similar gene resides in people. To their surprise, they found virtually identical pigment-building genes in humans, chickens, dogs, cows and many others species, an indication of its biological value. They got a bigger surprise when they looked in a new database comparing the genomes of four of the world's major racial groups. 
That showed that whites with northern and western European ancestry have a mutated version of the gene. Skin color is a reflection of the amount and distribution of the pigment melanin, which in humans protects against damaging ultraviolet rays but in other species is also used for camouflage or other purposes. The mutation that deprives zebra fish of their stripes blocks the creation of a protein whose job is to move charged atoms across cell membranes, an obscure process that is crucial to the accumulation of melanin inside cells. Humans of European descent, Cheng's team found, bear a slightly different mutation that hobbles the same protein with similar effect. The defect does not affect melanin deposition in other parts of the body, including the hair and eyes, whose tints are under the control of other genes. A few genes have previously been associated with human pigment disorders -- most notably those that, when mutated, lead to albinism, an extreme form of pigment loss. But the newly found glitch is the first found to play a role in the formation of "normal" white skin. The Penn State team calculates that the gene, known as slc24a5, is responsible for about one-third of the pigment loss that made black skin white. A few other as-yet-unidentified mutated genes apparently account for the rest. Although precise dating is impossible, several scientists speculated on the basis of its spread and variation that the mutation arose between 20,000 and 50,000 years ago. That would be consistent with research showing that a wave of ancestral humans migrated northward and eastward out of Africa about 50,000 years ago. Unlike most mutations, this one quickly overwhelmed its ancestral version, at least in Europe, suggesting it had a real benefit. Many scientists suspect that benefit has to do with vitamin D, made in the body with the help of sunlight and critical to proper bone development. Sun intensity is great enough in equatorial regions that the vitamin can still be made in dark-skinned people despite the ultraviolet shielding effects of melanin. In the north, where sunlight is less intense and cold weather demands that more clothing be worn, melanin's ultraviolet shielding became a liability, the thinking goes. Today that solar requirement is largely irrelevant because many foods are supplemented with vitamin D. Some scientists said they suspect that white skin's rapid rise to genetic dominance may also be the product of "sexual selection," a phenomenon of evolutionary biology in which almost any new and showy trait in a healthy individual can become highly prized by those seeking mates, perhaps because it provides evidence of genetic innovativeness. Cheng and co-worker Victor A. Canfield said their discovery could have practical spinoffs. A gene so crucial to the buildup of melanin in the skin might be a good target for new drugs against melanoma, for example, a cancer of melanin cells in which slc24a5 works overtime. But they and others agreed that, for better or worse, the finding's most immediate impact may be an escalating debate about the meaning of race. Recent revelations that all people are more than 99.9 percent genetically identical have proved that race has almost no biological validity. Yet geneticists' claims that race is a phony construct have not rung true to many nonscientists -- and understandably so, said Vivian Ota Wang of the National Human Genome Research Institute in Bethesda. 
"You may tell people that race isn't real and doesn't matter, but they can't catch a cab," Ota Wang said. "So unless we take that into account it makes us sound crazy." From checker at panix.com Fri Dec 30 19:31:01 2005 From: checker at panix.com (Premise Checker) Date: Fri, 30 Dec 2005 14:31:01 -0500 (EST) Subject: [Paleopsych] MSNBC: How the brain tunes out background noise Message-ID: How the brain tunes out background noise http://bloggerman at msnbc.msn.com/id/10300967/from/RS.5/ 'Detector neurons' focus exclusively on novel sounds, scientists say Special neurons in the brain stem of rats focus exclusively on novel sounds and help them ignore predictable and ongoing noises, a new study finds. The same process likely occurs in humans and may affect our speech, and even help us laugh. The "novelty detector neurons," as researchers call them, quickly stop firing if a sound or a pattern of sounds is repeated. They will briefly resume firing if some aspect of the sound changes. The neurons can detect changes in pitch, loudness or duration of a single sound and can also note shifts in the pattern of a complex series of sounds. "It is probably a good thing to have this ability, because it allows us to tune out background noises like the humming of a car's motor while we are driving or the regular tick-tock of a clock," said study team member Ellen Covey, a psychology professor at the University of Washington. "But at the same time, these neurons would instantly draw a person's attention if their car's motor suddenly made a strange noise or if their cell phone rang." Covey said similar neurons seem to be present in all vertebrates and almost certainly exist in the human brain. The novelty detector neurons seem to act as gatekeepers, Covey and her colleagues conclude, preventing information about unimportant sounds from reaching the brain's cortex, where higher processing occurs. This allows people to ignore sounds that don't require attention. The results are detailed this month in the European Journal of Neuroscience. The novelty detector neurons seem able to store information about a pattern of sound, so they may also be involved in speech, which requires anticipating the end of a word and knowing where the next one begins. "Speech fluency requires a predictive strategy," Covey explained. "Whatever we have just heard allows us to anticipate what will come next, and violations of our predictions are often surprising or humorous." From checker at panix.com Fri Dec 30 19:31:13 2005 From: checker at panix.com (Premise Checker) Date: Fri, 30 Dec 2005 14:31:13 -0500 (EST) Subject: [Paleopsych] Commentary: Dan Seligman: Good and Plenty Message-ID: Good and Plenty http://www.commentarymagazine.com/article.asp?aid=12005080_1 The Moral Consequences of Economic Growth by Benjamin M. Friedman Knopf. 592 pp. $35.00 Reviewed by Dan Seligman Booms are better than busts. When the good times roll, people have more money, more options in life, more fun, higher living standards. The material case for prosperity seems incontestable. And now, it appears, there is also a moral case. Max Weber told us that good character promotes economic growth. Benjamin M. Friedman, who has taught economics at Harvard for 33 years, turns this around. He argues that growth not only relies upon morality, but also has "positive moral consequences." What might these be? 
In Friedman's words: "Economic growth--meaning a rising standard of living for the clear majority of citizens--more often than not fosters greater opportunity, tolerance of diversity, social mobility, commitment to fairness, and dedication to democracy." Conversely, Friedman warns, periods of economic stagnation threaten a country's "moral character"--as, in his view, moral character is threatened right now in the United States:

The rising intolerance and incivility and the eroding generosity and openness that have marked important aspects of American society in the recent past have been, in significant part, a consequence of the stagnation of American middle-class standards during much of the last quarter of the 20th century.

In tackling the money-morality nexus, Friedman is venturing into a crowded field, and in one sense going against the grain. Many thinkers have emphasized the corrupting effects of wealth, and the tensions between our material interests and our moral sensibilities. Edward Everett, another Harvard sage (but in the early 19th century), argued that "palmy prosperity" was actually a threat to "public virtue." Egalitarians, who regard income inequalities as inherently immoral, worry that high growth rates in some developed countries, like the United States, appear to be associated with less equality. They tend to prefer the French-German-Scandinavian model, currently featuring far less growth but greater equality. By calling for high levels of growth as a means to a greater "commitment to fairness," Friedman is implicitly rejecting these familiar perspectives.

In an opening chapter, he offers a kind of armchair summary of his reasons. His main point here is that citizens who feel confident about their own situation will be more generous in supporting those who are less well off. But Friedman's case does not really depend on this proposition, which seems intuitively plausible if not exactly airtight. Ultimately, it rests on a massive exercise in inductive reasoning--specifically, on nine historical chapters that comprise three-quarters of his book and that present numerous examples of moral progress that appears to have been associated with economic growth.

This tour is prefaced by a chapter on the 18th-century Enlightenment thinkers who first began to conceive of morality in secular terms. Next, Friedman takes us through the moral consequences (mostly favorable) of the industrial revolution; then he offers four chapters that in effect make up an economic history of the United States; and then he serves up parallel chapters on Britain, France, Germany, and the developing world. With some exceptions, he finds both freedom and tolerance rising with the economy, and repression and bigotry on the march in times of stagnation. The book's final section, less directly relevant to the main thesis, has chapters on the problem of inequality, on the environment, and--finally and inevitably--on what needs to be done in the United States today.

Friedman has, without question, an impressive command of worldwide economic/technological history, and this book is a treasure trove of arresting details. One is reminded, for example, that L. Frank Baum's The Wizard of Oz, published in 1900, was widely understood at the time as an allegory supporting the populist free-silver program. (Dorothy's magical shoes are silver in the book, even if not in the movie.)
One learns that the invention in 1856 of the Bessemer process for steelmaking suddenly made possible steamships that were far lighter, and therefore able to travel much farther. Also that steel gave men razors to shave with at home. Also that, since 1869, successive generations of Americans have exceeded their parents' living standards by an average of 50 percent.

Such historical vignettes make for compelling reading. Yet how well does Friedman's basic proposition about the link between economic growth and morality hold up? The answer hinges in part on his, and our, definition of morality.

In his guided tour of morality as it has been conceived by great thinkers in the past, Friedman leans hard on the ethos of self-improvement propagated by the Puritans. He cites Lincoln's contention that in America, the only reason for a worker's failure to rise above wage labor would be his "dependent nature." He includes an intriguing commentary on Adam Smith's The Wealth of Nations. This work, Friedman reminds us, had a powerful moral subtext. In the past, national wealth had been created by war, plunder, slavery, and other forms of exploitation. Now the world had a model for creating wealth via transactions entered into voluntarily, and expected to produce advantages for both sides. For the first time in human history, national wealth creation might be an exercise in virtue.

But in later chapters, where Friedman is searching out examples of economic growth encouraging moral behavior, he tosses all this overboard. Here he makes the concept of morality largely synonymous with government intervention on behalf of the underdog--whether or not the intervention has generated a positive result. In writing about recent German history, for example, Friedman speaks enthusiastically of the German chancellor Willy Brandt's economic reforms in the early 1970's, even while conceding that many of them proved counterproductive. It does not matter, says Friedman. The issue is not whether these measures "ultimately represented optimal policy"; what is important is that "they reflected [a] political desire to achieve both a fairer and a more democratic society."

In the American context, Friedman takes morality to mean something even more narrowly defined: support for affirmative action, immigration, concern about endangered species, a belief in strong unions and in corporate social responsibility (as opposed to profit maximization). He identifies the federal welfare reforms of the mid-1990's as reverse morality, triggered by the bitterness and bigotry flowing naturally from years of stagnating wages. Friedman's unstated but obvious bottom line: morality is the liberal political agenda.

It is worth noting in this connection that Friedman has never made a secret of his politics. A Democratic partisan, he has often advised the party's presidential candidates. His last big book, Day of Reckoning (1988), was a full-bore attack on Reaganomics that got a rave review from the New York Times ("Every citizen ought to read it") and a blast from the Wall Street Journal. Needless to say, political partisanship need not prevent a scholar from writing a convincing book with a controversial thesis. But in this instance, partisanship has clearly skewed the conclusions. Thus, the case can easily be made that egalitarian policies, by undermining the very values like thrift and independence that economic growth hinges upon, often end up injuring their intended beneficiaries and are therefore immoral.
When trying to prove a generality by citing examples, it would seem essential that the examples be selected according to some unbiased principle. No such principle appears in the book. Nowhere does Friedman state how much living standards must rise in order to alter the moral climate, nor is it made clear how long a rise must be sustained. Friedman's formulations sometimes assume that decades of prosperity are required, yet he also cites changes brought about in the span of a few short years. Sometimes he uses gross domestic product as his measure of growth. At other times he uses per-capita income. At yet other times, he uses the earnings of the "average worker in American business." So, in making his basic case, he creates a lot of wiggle room.

Yet even with all this flexibility, Friedman is obliged to note the existence of numerous exceptions to his rule. By far the most important--he devotes a whole chapter to it--is the Great Depression of the 1930's, when the economy was in tatters but New Deal labor laws, job programs, and welfare initiatives were judged to be scaling new heights of progressive morality. Friedman offers several possible explanations of this antithetical phenomenon, of which the likeliest reason, in his view, was the stark awfulness of the Depression experience. This, affecting people from all walks of life, and representing an unparalleled threat to the entire social structure, ultimately forced Americans to "pull together." But the Depression was equally awful, if not worse, in Germany, and the Germans did not choose progressive policies. They chose Hitler.

Another major exception cited by Friedman is from late-19th-century Germany: Bismarck's introduction, after years of economic decline, of social benefits like pensions for the elderly and infirm. Still another is Britain's repeal of the infamous Corn Laws in the 1840's; this step did wonders for ordinary working families, yet it came about as a "response to stagnation." And then there were the British electoral reforms of the mid-1880's, which broadly extended male suffrage--another example of moral deportment that followed fifteen years without significant growth.

Our current political landscape would appear to present special difficulties for Friedman. As I noted above, he regards the present period as politically regressive, and relates our supposed moral setbacks to stagnating living standards in the years from 1975 to 2000. In fact, median household income rose by 26 percent in that quarter-century, signifying rising living standards for a clear majority--and presumably passing Friedman's acid test. Nevertheless, and highlighting the wobbly nature of his criteria, he deems this increase insufficient, and attributes our present moral shortcomings to those 25 years of "stagnation."

But this brings us back to Friedman's politically self-serving definition of morality. Given his blindness to the possibility that "progressive" policies (like affirmative action) might themselves be unfair, or (like unreformed welfare) positively harmful to the underdog, is it any wonder that so many of his historical examples fail to uphold the argument laid out, engrossingly if tendentiously, over the 570 pages of this book?

Dan Seligman is a contributing editor of Forbes.

From checker at panix.com Fri Dec 30 19:31:42 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 30 Dec 2005 14:31:42 -0500 (EST)
Subject: [Paleopsych] J. Evolutionary Psych.: The Art of Thinking
Message-ID:
The Art of Thinking
by Paul Neumarkt
Journal of Evolutionary Psychology, Vol 26, No. 1-2, 2005

We all wear a mask that serves to hide our secrets, to protect us from exposure, to put ourselves in a better light.

"A man convinced against his will
Is of the same opinion still."

This couplet expresses the state of the mass mind. Most people adhere to political convictions that are inaccessible to reason or change. They would rather preserve their childhood political and religious beliefs, in spite of new and contradictory information. This phenomenon is psychologic, not political. It is characteristic of the party politician to follow the party bias; to find rationalizations by dipping into the party ideology for the correct propaganda, to obfuscate not only the enemy, but also the public. Any new idea or measure to be considered must meet the constant and preconceived airtight and water-tight criteria of the party's ideology.

Adam Smith started with a concept that the primary motives of behavior are economic. The only factors considered were gain or profit. In excluding such motives as will to power, ego satisfaction, or mastery over others, psychologic man (generic) was completely ignored. It was customary to measure all things, even human desires, as tangible and quantifiable items, as food, clothing, and shelter. The view was based on the laws of Newtonian physics. It was assumed that in economics nothing is relevant if not quantifiable. The fact that these theories of "economic man" were not adequate enough to probe the depths of "psychologic man" did not occur to the economists. They seemed to be unaware of man as being primarily human--a product of his own impulses, desires, and unconscious motivation. After the Industrial Revolution the idea of pleasure and pain began to dominate the criteria of human action: vanity, pride, status and wealth became as important as food, shelter, and clothing.

"I may further say that I have not observed men's honesty to increase with their riches." --Thomas Jefferson, 1800

It is easy to indoctrinate people to believe that our economic and social ills are caused by ideology. A society is psychologically immature if greed supersedes empathy. It suggests that there is so much emotionalized propaganda and calculated misinformation in the world that what passes for an informed opinion is merely the expression of a conditioned mass mentality. It should be generally known that politicians are notorious for finding arguments that will support the traditional beliefs of their childhood in spite of facts to the contrary. Our leaders are untaught in the wisdom of the past; untrained to lead and educate the people to become more self-reliant; uneducated in the art of living; immature in setting an example of intellectual honesty. Thomas Paine had it right when he said, "A long habit of not thinking a thing wrong gives it a superficial appearance of being right."

If the condition of our society is the result of a mass neurosis, then we shall see an increase in self-alienation, prejudice, religious and political intolerance, racial hatred and paranoia. This is why we need a new kind of education, one that does not separate feeling from intellect.

Political freedom is one thing. There is a second, perhaps greater freedom which is a precursor to the first: the freedom of the mind from turmoil, or fear, or sorrow. The first is freedom from an external tyrant; the second from an inner tyrant.
The essence of liberty is the sovereignty of the individual, each person acknowledged to be free and responsible for his or her thought and action, in a society where there is an equality of liberty. Individual freedom means to exercise one's mental powers and enlarge one's scope of intellectual interest. To be in a crowd is to lose one's self-autonomy.

Paul Neumarkt, Ph.D.
Editor-in-Chief JEP
4625 Fifth Ave. #605
Pittsburgh, PA 15213

From checker at panix.com Fri Dec 30 19:32:23 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 30 Dec 2005 14:32:23 -0500 (EST)
Subject: [Paleopsych] Futures: Anatomy of the Anti-Pluralist, Totalitarian Mindset
Message-ID:

Anatomy of the Anti-Pluralist, Totalitarian Mindset

How to Make Enemies and Influence People: Anatomy of the Anti-Pluralist, Totalitarian Mindset
by Alfonso Montuori
Futures, Vol 37, No. 1, 2005
Available online 22 July 2004

Abstract

This essay outlines the characteristics of what I call the 'totalitarian mindset'. Under certain circumstances, human beings engage in patterns of thinking and behavior that are extremely closed and intolerant of difference and pluralism. These patterns of thinking and behaving lead us towards totalitarian, anti-pluralistic futures. An awareness of how these patterns arise, how individuals and groups can be manipulated through the use of fear, and how totalitarianism plays into the desire in human beings for 'absolute' answers and solutions, can be helpful in resisting attempts at manipulation and in guarding against the danger of actively wanting to succumb to totalitarian, simplistic, black-and-white solutions in times of stress and anxiety. I present a broad outline of an agenda for education for a pluralistic future. The lived experience of pluralism is still largely unfamiliar and anxiety-inducing, and the phenomenon is generally not understood, with many myths of purity and racial or cultural superiority still prevalent. Finally, as part of that agenda for education, I stress the importance of creativity as an adaptive capacity, an attitude that allows us to see pluralism as an opportunity for growth and positive change rather than simply conflict.

Naturally, the common people do not want war, but after all, it is the leaders of a country who determine the policy, and it is always a simple matter to drag people along whether it is a democracy, or a fascist dictatorship, or a parliament, or a communist dictatorship. Voice or no voice, the people can always be brought to the bidding of the leaders. This is easy. All you have to do is to tell them they are being attacked, and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same in every country.
--Hermann Goering, in Nuremberg Diary by Gustave Gilbert (1947)

Rarely is the question asked: Is our children learning?
--George W. Bush

1. Introduction

Why is it easy, as Goering writes, to get people to do the bidding of their leaders? How was it possible for a sophisticated, educated population like Germany's to follow blindly the dictates of a maniacal leader, and to embark on the horrors of the Nazi regime? How did leaders like Hitler, Stalin, Mao, Mussolini, and others manage to amass so much power and support, and so completely win over huge percentages of their populations that to outsiders, and in hindsight, it seems like they were all participating in a collective consensus trance? How can young men be made to believe that suicide-bombings of civilians are God's work?
How can a pluralistic future be safeguarded from what appears to be the human tendency to get lost in a homogenized whole that must destroy human beings who are different, rather than engage them constructively? Why do human beings seem so eager to believe, to wrap themselves around the flag and fall lock-step in line with a black-and-white, simplistic belief system espoused by a strong leader?

Arthur Koestler [37] argued that it was not humanity's self-assertive side that is most destructive, but its capacity for self-transcendence, for losing itself in a greater whole and following orders emerging from a closed belief-system. In this paper I explore this capacity for seeking out the consensus trance, how this trance is a profound obstacle to pluralistic futures, and how this tendency can be counteracted.

2. The global context

In his 1992 article Jihad versus McWorld, Barber [5] presents two global futures that can be summarized as homogenization versus fragmentation. Neither future is particularly appetizing. One is a unity made up of whitewashed white-bread monoculture, the other a diversity of endless breakdowns and internecine wars, skirmishes and general hostilities. Either we all lose our identity in unity, or our diversity will lead to endless war. But in both cases, the existence of cultural and religious pluralism (what might be called descriptive pluralism) is a given. In the case of McWorld homogenization, the issue is the elimination of pluralism through global capitalism. In the case of Global Jihad, pluralism means differences that inevitably lead to war.

Are these anti-pluralist futures the only ones open to us? Or are they rather the manifestation of an anti-pluralist, totalitarian mindset that is unable to deal with the complexity and uncertainty of a pluralistic world, and seeks to drastically reduce difference? Difference and exchange present the possibility for learning, creativity, development, and growth. Indeed, it has been argued convincingly that pluralism is essential for a viable human future, for the evolution of social as well as 'natural' systems [12,11,14,38,39,40,43,44,51,54,62]. Indeed, the term 'evolutionary pluralism' refers to a multi-leveled, multi-perspectival approach to the study of evolution that is light years away from Victorian evolutionism, whose triumphal Panglossian progressionism is replaced by a more modest--yet more creative--bricolage, or evolutionary 'tinkering' [12,14].

But pluralism does not present easy answers. It brings us face to face with complexity, with the unknown, the uncertain, the 'Other'--and it challenges human beings to think, feel, and act differently. Discussing the role of pluralism and uncertainty in Europe after 1492, Kane [36] writes that pluralism means recognizing the possibility that there are many correct senses of right and wrong, and also that there may in fact be no absolute right and wrong. Pluralism, he goes on, does not necessarily mean that there might not be absolute values. But this is of little comfort. The uncertainty created by pluralism means that it is not at all clear how to assess different claims and resolve the disagreements between conflicting points of view, or how one should live one's life while figuring it all out.
It has been argued that the anxiety and uncertainty created by pluralism can lead to three fundamentally different kinds of responses: a return to absolutism, a fall into nihilistic relativism, and an embrace of uncertainty and complexity in the opportunity for, and the responsibility of, social creativity and the creation of alternative futures [40,9]. I shall concern myself here with the anatomy of the dangers of absolutism--the totalizing quest for certainty as manifested in what I am calling the totalitarian mindset--and the possibility of a creative alternative. It is beyond the scope of this paper to explore the complex interrelationship between nihilism and absolutism, particularly in the context of Western consumer cultures.

Individuals all over the world have sought relief from the uncertainty of a pluralistic world in the arms of absolute belief systems of a religious fundamentalist and/or political/nationalistic nature. In this paper I want to focus on the totalitarian mindset as an approach to addressing pluralism and uncertainty. This mindset manifests in a specific way of thinking and discourse, focusing on the elimination of ambiguity, complexity, and difference. It is fundamentally anti-pluralist and totalitarian. Pluralism is viewed as a source of complexity, ambiguity, and uncertainty. Totalitarianism is, in this sense, a form of anti-pluralistic monism, with all power and authority vested in one place, and with one, clearly defined goal. I will conclude by suggesting some alternatives to this apparently perennially popular condition.

3. The elimination of pluralism and uncertainty

A government or group seeking compliance and the elimination of dissent from the population can create conditions that affect the nature of the society's discourse, and the psychology of the individual citizens. Conditions can be created whereby any form of dissent from the established government view is considered unpatriotic, no alternative perspectives are accepted, let alone encouraged, and discourse and collective thinking processes become simple, black-and-white processes of conformity. Conditions in the Soviet Union, Mao's China, Saddam Hussein's Iraq, Hitler's Germany and Mussolini's Italy were clearly designed to enforce a certain mindset through active political and psychological propaganda backed by institutional terror. And in fact Hitler and Mussolini were very familiar with LeBon's work on the psychology of crowds, and drew from it extensively, to the point that it has been argued that practically all Nazi propaganda was based on LeBon's principles [53].

But we need not look only at governments with reputations for totalitarianism in order to see the totalitarian mindset in action. Discussing the post-September 11 climate, the following excerpt from an article in the Manchester Guardian, cited in Sardar and Davies [64], provides a useful example of how a totalitarian mindset can be created where alternatives are silenced and pluralism is rejected out of hand:

Anyone, it seemed, who had ever been publicly critical of America or globalization suddenly found themselves accused of complicity with Osama bin Laden--and worse.
In the British press alone, they have been described as 'defeatist' and 'unpatriotic', 'nihilist and masochistic', and both 'Stalinist' and 'fascist'; as 'Baader Meinhof gang', 'the handmaidens of Osama' and 'auxiliary to dictators'; as 'limp', 'wobbly', 'heartless and stupid'; and 'worm eaten by Soviet propaganda'; as 'full of loose talk', 'wilful self-delusion' and 'intellectual decadence'; as a collection of 'useful idiots', 'dead-eyed zombies'; and 'people who hate people' (p. 36).

In situations that are perceived as emergencies, and particularly ones that are perceived as life-threatening, there is a tendency in social systems to drastically reduce ambiguity and complexity and fall back on a form of very simplistic, black-and-white, totalitarian thinking. This process applies to the entire political spectrum [56]. This kind of thinking has characteristics very similar to those found in research on the authoritarian personality, as outlined by Adorno and colleagues, and subsequent research [1-3,8,22-27,29,32,33,57,56,59,60,64,65,66,70,69].

The situation discussed in this example was obviously the result of an extremely dramatic and horrific set of events. Such totalitarian responses are by no means always simply the result of government propaganda, manipulation, or other forms of intervention. Along with a top-down manipulation of public opinion through propaganda, there can also be a bottom-up response that embodies totalitarian thinking and discourse, and demands a totalitarian response from leadership. A totalitarian response may self-organize by tapping into a population's fears and anxieties, which spark a perceived need for clear, decisive, unambiguous and simple solutions as a form of anxiety alleviation and complexity reduction. The great emotional arousal needs release and finds it in any perceived opposition. As we shall see, the totalitarian response is marked by the creation of an out-group, an either-or, black-and-white logic, and a hierarchization that is expressed through subservience to leaders and punitiveness towards those viewed as 'other'. Such a spontaneous process can simultaneously be supported and enhanced by authority figures using the same kind of unambiguous response, further modeling totalitarian thinking and discourse.

The totalitarian mindset should not be assessed purely by its content and purpose, but also by the way it creates a paradigm or organizing framework for thought and discourse that is effective regardless of the actual nature of the content. While in recent years there has been an increasing drive towards media literacy regarding issues such as race and gender, there is a real need for a deeper understanding of the workings of the totalitarian mindset. Beyond a focus on understanding the veracity and meaning of messages and their ideological positioning or content [15], it is important to understand the underlying structure of thinking and discourse, which structures and organizes the framework for thinking about, and discussing, the issue at hand, and the conditions that are likely to precipitate such a mindset--conditions which can, and have been, manipulated and engineered by governments and groups seeking to control public opinion.

4. The conditions and characteristics of anti-pluralism

4.1. Three levels: physical, affective, cognitive

In his review of brainwashing and mind control techniques, Wilson [69] points out that most approaches work at three key levels: the physical, the affective and the cognitive.
Whereas brainwashing an individual involves making their physical safety completely dependent on the brainwasher, through the creation of physical dependency for food and water, or through direct physical threats, torture, beatings, etc., in social settings this is somewhat harder to achieve. It is not always possible to directly impact the physical level, but a real or perceived physical threat is typically extremely effective. An attack by a foreign power like Pearl Harbor or the attack on the Twin Towers; a nuclear meltdown, such as the one at Chernobyl, which led to the shutting down of Italy's nuclear energy program (despite the fact that the threat was not immediate, it was clearly physical in nature); or, as in Germany after Versailles, the threat of extreme economic hardship and resentment--any of these can align public opinion by being the key to the arousal of strong emotion.

Affectively, there is the combination of fear, anger, and outrage induced by the perception of an attack that creates in the individual and the society a sense of emergency. Emotional arousal is key, and this can be achieved successfully if there is in fact the perception of a tangible threat. Fear-arousing appeals may simply be ignored without tangible and dramatic evidence, as environmentalists know all too well, but the presence of one dramatic example of the threat--physical evidence, in other words--makes a considerable difference in terms of whether the appeals will be taken seriously or simply ignored.

Cognitively, this kind of emergency can lead to a complexity-reduction through drastic simplification. This works particularly well in complex situations where there are a number of interrelated factors at work, and it is not easy to untangle all the varied ramifications of the process at work. The population is emotionally aroused and dealing with a lot of complexity, and is eager to reduce that complexity and have clear, unambiguous interpretations of the situation that suggest a simple course of action.

4.2. The immediacy of threat and fear and the compression of mental space and time

With an external threat, the level of emotionality and anxiety rises. In such situations one might say that time and space are drastically compressed. In emergency situations, or situations that are framed as such, there is a tendency to suggest there is no time to lose: decisions and actions have to be taken immediately. A situation of great anxiety can be created where, despite the fact that the actual threat may not be imminent, it appears as if there simply is no time for deliberation, only action [16]. There is no time to debate whether the enemy is an actual enemy, or whether there are alternative modes of resolution, because by the time the discussion occurs, the enemy may be at the door and it is actually the discussion that has ultimately led to defeat. Note again that in this 'emergency logic' of immediate either/or, discussion about frames for understanding the situation--in fact, any form of discussion--is viewed as playing into the hands of the enemy. A drastic complexity-reduction takes place, and for this reason it is important to keep the perception of emergency and emotional arousal high.

4.3. Response to pluralism and ambiguity: susceptibility to situational pressures

Kane and others have suggested that pluralism is the source of complexity, uncertainty, and ambiguity.
Block and Block [10], discussing the reaction of authoritarian individuals to ambiguous, unstructured, and new situations, describe the following sequence of events: ambiguous situation → uneasiness or anxiety reflected as intolerance of ambiguity → need to structure → structuring → an established frame of reference. As Block and Block state, "the rapidity with which an ambiguous situation is structured represents an operational manifestation of intolerance of ambiguity" (p. 304). Persons who are intolerant of ambiguity impose pre-existing frames of reference on situations, and are not open to new information.

Barron [6] points out that although it is the combination of organization and complexity that generates freedom, a system's organization may 'operate in such a fashion as to maintain maladaptive simplicity' (p. 150). He reminds us that in totalitarian social systems, as in neurotic individuals, suppression is used to achieve unity. Suppression is appealing because in the short run it seems to work:

Increasing complexity puts a strain upon an organism's ability to integrate phenomena; one solution to the difficulty is to inhibit the development of the greater level of complexity, and thus avoid the temporary disintegration that would otherwise have resulted. [6]

A consistent attempt to reduce complexity through maladaptive simplicity is characteristic of the closed-mindedness of the authoritarian personality. It manifests in the suppression of discourse that reflects a plurality of views, strangled by the fear created by the perception of an emergency. Sampson's [61] discussion of authoritarianism and intolerance for ambiguity helps to explain why authoritarian individuals are anti-pluralist. Discussing authoritarian individuals, he writes:

First, when confronted by an ambiguous situation, one allowing for a variety of meanings or shades of gray, they feel discomfort. Second, they deal with this discomfort by seeking a quick and easy solution that minimizes the subtleties that exist. In short, they make their world into simple black or simple white. From time to time, all of us show aspects of this intolerance. The mark of the high authoritarian, however, is the tendency to deal uncharitably with ambiguity most of the time. (p. 85)

Intolerance for ambiguity manifests in the rejection of the unstructured and the complex, and in a desire to be in an environment where rules and expectations have been clearly set and there is not a plurality of perspectives and possibilities. Uncertain, ambiguous situations cause stress and anxiety because the authoritarian personality wants a clear set of rules and regulations to be imposed by whoever is in charge. In fact, being in charge means 'laying down the law'. The stress is on order, almost at all costs, and any deviation from the existing order is seen as a potential plunge into chaos. This certainly comes at the cost of novelty and originality. The focus on order and predictability literally prevents anything new, anything surprising, anything different, and anything that disturbs the existing order from appearing.
The authoritarian order is therefore a deeply homogeneous order, such as was manifested classically in China during the Maoist era, where homogeneity and conformity (most dramatically, albeit superficially, in dress, in ideology, in the reciting of the Little Red Book, even in mealtimes and the disappearance of time zones in a country that should have three) were elevated to unassailable virtues. In the authoritarian attitude, there is also a punitive attitude towards those who appear to be going against the rules in some way, which may be related to hierarchy, authoritarian submissiveness, and projection. Sampson then goes on to say of authoritarians that diversity is like ambiguity for them:

It provides too many options and alternatives. They show a preference for getting rid of diversity and muting differences. This is the very quality that fits persons who want to keep their own family, neighborhood, community, and nation pure by not allowing various outside groups to gain entry. Second, we all form quick impressions of others, usually based on simple stereotypes we hold about them. Some people, however, allow later knowledge to recast their first impressions. Those who are highly intolerant of ambiguity, by contrast, do not take kindly to new information that does not fit the impression they have already formed. Thus, they may persist in maintaining their first impressions of others and disregard conflicting new information. (p. 89)

Sanford [61] has described authoritarianism as a concept to explain "the varying degrees of susceptibility in individuals to situational pressures" (p. 157). Clearly authoritarians find in pluralism a deeply disturbing situational pressure, and their response to it is to eliminate it. Key to my argument here is that, under certain kinds of situational pressures, even individuals who may not normally exhibit authoritarian tendencies do so to be able to cope with a world they perceive to be chaotic and dangerous. The situational pressures can lead to a knee-jerk totalitarian response, in terms of the search for an enemy, black-and-white thinking, and the desire for strong leadership. This response from the population in turn creates a great susceptibility to propaganda.

4.4. The authoritarian attitude and the totalitarian mindset

Instead of thinking of the research on authoritarianism exclusively in terms of the deep-seated tendencies of a certain kind of personality with fixed beliefs and attitudes, we might think of a contextually-based authoritarian or anti-pluralist attitude, and I will refer to it here as the totalitarian mindset. The original study of the authoritarian personality was critiqued in much the way that the trait-based personality research of the early part of the last century was. Whilst it was generally agreed that the study described accurately the phenomenology of authoritarianism, it was far less clear whether there was in fact an authoritarian personality 'type'. Regardless of whether such a type exists, a different way of approaching that research is to see it as outlining features of a general and generic human attitude that is related to certain contexts, and is a response to certain situational pressures [27,7,30].
Sanford [61], one of the original researchers on the authoritarian personality, pointed out that a person may not, in general, display certain attitudes characteristic of authoritarianism unless a situation of great complexity and/or (perceived) danger elicits substantial anxiety, at which point the generally non-authoritarian individual may resort to the kind of black-and-white thinking, scapegoating, and submission to authority that is characteristic of the authoritarian attitude. In other words, whether or not an authoritarian personality type exists, an authoritarian attitude is a characteristic that most humans can, to some degree or other, share when exposed to certain circumstances. Next, I outline the correlation between external circumstances and attitudinal characteristics that combine to create the context for the totalitarian, anti-pluralist mindset.

5. The totalitarian, anti-pluralist mindset

We too have the right to preach a mystery, and to teach them that it is not the free judgment of their hearts, not love that matters, but a mystery they must follow blindly even against their conscience. So we have done. We have corrected Thy work and founded it upon miracle, mystery, and authority.
--The Grand Inquisitor, in The Brothers Karamazov by Dostoevsky

5.1. Out-group, scapegoating, and superstition

The perception of an out-group as a threat and an enemy is the glue that holds this mindset together. Positing an out-group as enemy, as Goering suggests, is a key strategy for uniting a people and getting them to set aside internal differences. This strategy also applies to groups, and indeed one can even see it at work in families, where relatives who may have been at loggerheads since infancy will suddenly close ranks when one of them is threatened by an outsider. Chomsky [15], among many others, also points to the way this tactic has been part and parcel of politics throughout history, and has indeed been omnipresent in the American political landscape.

An out-group does not have to be outside society. It can be created within an existing society, as was the case with Jews in Germany in the 1930s. Chinese Communists held up the external threat of the USA and the internal threat of counter-revolutionary landowners, merchants, bankers and others. Sargant [65] has argued for the importance of the internal threat. In cases where open conflict is lacking or has been expected for a long time but has not yet materialized, having the internal out-group provides an immediate source of danger. When asked whether he thought Jews should be annihilated, Hitler replied no, because then "we should have to invent him. It is essential to have a tangible enemy, not merely an abstract one." A member of a Japanese mission to Berlin in 1932 is said to have remarked that the National Socialist movement was "magnificent. I wish we could have something like it in Japan, only we can't, because we haven't got any Jews." (Cited in Hoffer [32], p. 91)

Sanford's [61] enumeration of the characteristics of the authoritarian personality includes 'superstition':

Superstition indicates a tendency to shift responsibility from within the individual onto outside forces beyond his control; these forces appear to the individual as mystical or fantastic determinants of his fate. (p. 145)

The qualities of the out-group typically do have something of the supernatural about them--Jews who control the German economy and indeed the world economy, for instance--because everything must be blamed on them.
'Racial', cultural, and other differences are emphasized to exaggerate the 'otherness' of the out-group. They are not like us, and in fact are quite the opposite of who we are. In their otherness they become the recipients of projection, and of peculiar mystery. Images of dirt, pollution, vermin, of a virus, are often used to emphasize not only the difference but the association of the other with all that is sick, unpleasant, and rejected by 'us'. The out-group makes scapegoating possible, since everything that goes wrong can be blamed on them, and therefore distracts attention from one's own complicity in the state of affairs. Scapegoating allows for a massive reduction of complexity, and eliminates the need to look at the whole, at interdependencies, at the way complex issues have many determining factors (which is precisely what makes them so difficult to address), at one's own participation and complicity in the present state of affairs, and focuses all attention unambiguously on the out-group. The creation of an out-group to scapegoat is essentially a giant cop-out that allows governments to redirect attention from internal conditions to external foes, and allows citizens to avoid having to deal with the complexity of life, with all too complex economic, social, and political woes.

5.2. Either/or logic, black-and-white thinking

The people in their overwhelming majority are so feminine in their nature and attitude that sober reasoning determined their thoughts and actions far less than emotion and feeling. And this sentiment is not complicated, but very simple and all of a piece. It does not have multiple shadings; it has a positive and a negative; love or hate, right or wrong, truth or lie, never half this way and half that way, never partially, or that kind of thing. (Hitler [31], p. 183)

Once the out-group enemy has been located, an inexorable logic of either/or follows. Either you are for us, or you are against us. If you are against us, you are betraying your country. This creates a powerful cocktail of a simple choice, anchored by a deep emotional resonance and framed with an either/or logic that leaves no alternatives. It is interesting to see that the 'us' in this case is typically the leadership of the 'in-group' with which the population is asked to/wants to identify. In other words, it is the leadership policies one is either for or against, and the leaders are the ones that get to define the parameters of what constitutes being 'for' or 'against'. More compellingly, it is now also up to the leaders of the in-group to define what is real and true and what is not--what is, from their perspective, factual information and what is 'enemy propaganda'.

This kind of either/or, black-and-white logic is a classic characteristic found in the authoritarian personality, and is technically known as 'stereotypy'. Stereotypy is the tendency to think in rigid, oversimplified categories, in unambiguous terms of black and white, particularly in the realm of psychological or social matters.

We hypothesized that some people, even those who are otherwise 'intelligent', may resort to primitive explanations of human events at least partly because they cannot allow many of the ideas and observations needed for an adequate account to enter into their calculations; because these ideas are affect-laden and potentially anxiety-producing, they cannot be included in the conscious scheme of things. [61, p. 145]
As Sanford points out, even intelligent people can resort to black and white thinking when they are overwhelmed and look for ways to drastically reduce complexity. At a certain threshold of complexity and anxiety, many people succumb to the simplicity of the totalitarian mindset. Either A or B. It is possible to relinquish responsibility, follow the leader, and direct the anxiety turned to anger onto an external group. Eliminate all variables, except one that can be easily measured. "You're either for me or against me" (which translates into "my way or the highway"). This kind of thinking is successful at pseudo-simplification: it creates the illusion of clarity, decisiveness, and power. Either/or, black-and-white, dichotomous thinking appears to cut through ambiguity. Such polarizing thinking does not allow for creativity and complexity, and the exploration of alternative approaches. But one has to remember that it is precisely the anxiety caused by a plurality of approaches, and the time taken to explore them, that the anti-pluralist, totalitarian attitude seeks to eliminate. "The situation is clear: X is to blame (Jews, Osama bin Laden, American capitalism, etc.)." Black-and-white thinking is a key way of maintaining cognitive authoritarianism in the discourse of a system large or small.

5.3. Authoritarian submission/hierarchy

At times of great anxiety, the fear of imminent threat also elicits a demand for a savior who will point out exactly what needs to be done, why, by whom, and to whom (a committee does not quite do the trick and is far less reassuring). A dramatic feature of the authoritarian attitude is the submission to authority and the domination of those perceived to be lower on the hierarchy. The authoritarian attitude is very concerned with hierarchical power structures, and in fact sees the world in terms of a rigid hierarchy from strong down to weak. It involves submissiveness to those above, a longing for strong leadership, and a willingness to sacrifice much for the group, the organization, or the nation. Authoritarian individuals are paternalistic, patronizing, and punitive to those below them in the hierarchy. The combination of conventionalism with a focus on hierarchy sets up a rigid, unchanging framework that cannot be challenged. The notion of heterarchy, or shifting centers of power based on context and competence, is deeply disturbing in an authoritarian system. Not knowing what the fixed 'chain of command' is causes great anxiety. A more open, democratic structure seems chaotic and impossible, because it appears there are no rules, no clarity, no order, and there is 'no respect'.

The case of Adolf Hitler is extremely instructive. Nazi Germany provides us with a textbook example of authoritarian manipulation. Hitler came to power in difficult times, and presented himself as the visionary savior. For leaders who are already in power and whose popularity is severely challenged, a war can be extremely useful. In other words, leaders who lack charisma can be granted charismatic qualities through circumstances. One only needs to look at the sudden popularity of leaders who in peacetime may have been wildly unpopular, as a war begins. Margaret Thatcher's dismal ratings before and after the Falklands war are a case in point. A peculiar shift occurs as the nation rallies around the leader who may previously have been despised or simply ridiculed. Through a process that seems almost magical the leader is soon viewed as decisive, powerful, and even wise.
The literature of social psychology provides ample research into the dynamics of conformity and conversion. Particularly when there is great anxiety, the forces of conformity come into play and an increasing alignment occurs to what is perceived to be the voice of authority. Psycho-dynamically, a process of collective projection occurs, endowing the leader with all the clarity and power individuals seem to lack--and playing into the leader-as-father role. In Germany this was achieved through incredibly effective but low-tech spectacle and propaganda, which was itself influenced by early research on mass psychology. The Nuremberg rallies were remarkable efforts in mass hypnosis and hysteria that created a ritual to forge the common identity of the new Germans, which was represented in the mythical figure of Hitler. Similar dynamics occur in cults with guru-figures as their leaders, and indeed the dynamics are remarkably similar.

Mao also played an unambiguous savior role, and after 1949 rode on the wave of his revolutionary success. Perhaps no greater cult of personality was ever seen, and it is important to note that the attachment to Mao, and indeed the dependence on his leadership, became so great that, as with many cases of guru cult-leaders, many found it hard to believe he had made mistakes--even in such egregious and monstrous cases as the Great Leap Forward, when tens of millions died of famine because of what can only be called gross, ego-driven mismanagement. In years of bumper crops, people-power was diverted to the one single Mao-defined goal of those years, steel production, and consequently not enough food was available. The provincial propaganda held that there had been crop-failures in every other province.

5.4. Unification/anti-introception

Through the definition of an out-group, an in-group is created. The 'us', the 'we', is defined in opposition to 'them'. The complexity of identity, particularly in societies with many different ethnic and religious groups, is reduced to a generic 'us' by virtue of the threat. Suddenly 'we are all in this together', for 'survival'. Intra-societal differences are reduced to the status of squabbles and quickly set aside when a common threat is perceived (Sherif, 1988). A simple identity overcomes differences: it is simple because the key uniting factor is the external enemy, and the perception is that identity forged by external threat demands a clear hierarchy and well-defined leadership.

At the same time, the focus is almost entirely external. There is little or no real attention placed on what goes on inside the system, and this reflects an authoritarian attitude called anti-introception. Anti-introception means being unwilling to look inside, not approaching an issue in a 'psychological' way, in the sense that there is no attempt to understand the nature of subjectivity--feelings, thoughts, motivation--or generally look within. As Sanford [61] wrote:

Self-awareness might threaten his whole scheme of adjustment. He would be afraid of genuine feeling because his emotions might get out of control, afraid of thinking about human phenomena because he might think 'wrong' thoughts. (p. 144)

Authoritarians want things 'plain and simple', and do not have time for feelings or for 'idle speculation'. The authoritarian's world is completely 'objective', in the sense that the way they see the world is not their own unique view of the world but THE right way--nothing else is conceivable.
Their own 'subjectivity', and its particular bias, plays no role in this at all, and therefore in reality deeply colors everything they see and do. The authoritarian attitude is therefore very open to self-deception. At a social level, the development of this characteristic is important. In the same way that the authoritarian individual does not explore his or her motives and feelings, the creation of a totalitarian mindset and system requires as little 'collective introspection' as possible. No questioning of motives, no attention to the hysterical nature of some of the feelings expressed (hatred, love of country/in-group, and so on), only a focus on the positive, idealized symbols of the in-group. Attention is diverted from internal divisions, and critics of government spending can suddenly become wildly supportive when huge unbudgeted sums are spent on war and defense efforts. The 'patriot bypass' makes all forms of critical thinking dormant. Atrocities in the name of 'the good' become the devastating example of what Jung [35] called 'enantiodromia', the extreme polarization whereby actions in the name of 'the good' turn into the 'evil' they are attempting to destroy.

A related characteristic is 'pseudo-conservatism', or the desire to safeguard (conserve) the in-group's status quo at all costs. The term 'pseudo-' points to the tendency to be so extreme and unreflective about preserving the in-group that one is willing to actually destroy what one is trying to save in the process. This manifests, for instance, in democratic countries resorting to the same tactics as anti-democratic nations in order to fight them. It is also manifested in the classic attack on dissenters--"they wouldn't let you do that in the Soviet Union/Afghanistan/etc."--which ironically attempts to deprive dissenters of the very freedom that makes the country worth fighting for and differentiates it from undemocratic countries.

6. The totalitarian paradigm of certainty and simplicity

The underlying structure of thinking or paradigm of the totalitarian mindset can be summarized in the following way. It reflects, as I have suggested, a particular way of thinking and discourse.

Out-group/scapegoating: This is a drastic form of reductionism, reducing the complexity of the situation to one, easily identifiable variable.

Either/or: A logic of disjunction creates binary oppositions that cannot be reconciled or 'thought together'.

Hierarchy/centralization: The hierarchy of domination and centralization of authority is focused on power, and indeed the multi-dimensionality of the world is reduced to the uni-dimensional, central construct of power.

Unification/identification: In the focus on the out-group, what becomes profoundly obscured is the role of the observer in the observation. Self-reflection and self-inquiry can easily lead to uncertainty, ambiguity, and doubt, and this is precisely what the totalitarian mindset rejects, because its focus is on certainty and simplicity.

Underlying these central elements of the totalitarian mindset is a stress on simplicity at the expense of complexity, and a quest for certainty. In fact, authoritarianism is correlated with a preference for simplicity over complexity [67].

6.1. The return of the regressed

The era of McCarthyism is remembered as a period of collective consensus trance by many. The United States was swept away by the self-aggrandizing rhetoric of a paranoid senator, and turned the 'Red Scare' into a rabid witch-hunt.
In 1950, the previously undistinguished McCarthy rose to prominence when he claimed that there were 205 Communist subversives in the State Department. He was unable to present any proof for his statement, but, in an interesting and familiar move, stepped up his rhetoric and started an anti-Communist crusade. It amounted to little more than the persecution and vilification of many Americans. It is important to note that other government offices at the time actually successfully prosecuted cases against Communists, but McCarthy never made a plausible case against anyone.

McCarthy's fall occurred during 36 days of televised hearings in 1954. His rabid and increasingly offensive interrogation methods were displayed nationally. McCarthy embarked on a diatribe against a junior defense lawyer on whom he had found some 'dirt'--participation in a left-leaning student association at age 15--which was embarrassing in its pettiness. A senior liberal lawyer, appalled by his methods, presented a spirited and devastating counter-attack, and the faces of those present showed the general degree of embarrassment at the depths to which McCarthy had fallen. The meeting was wisely adjourned at that point, but a camera was left rolling as a fuming and furious McCarthy responded hysterically while the room quietly emptied, and the entire nation saw the discredited Senator's last, pathetic stand [55].

As if a hypnotist had snapped his fingers, Americans awoke from the nightmare of McCarthyism on that day. Suddenly the deeply misguided nature of the mixture of fear, patriotism, and witch-hunts McCarthy served up became crystal clear. Will we all be able to learn from the lessons of those years, and that day, 50 years ago, and insist on a creative response to the consensus trance of the totalitarian mindset? In 2003, a polemical book of McCarthy revisionism, accusing all US liberals of treason, was a New York Times bestseller [17].

7. The paradigm of complexity: creative attitude, creative discourse

7.1. Complexity, pluralism and the future

In this essay I have presented the notion of a totalitarian, anti-pluralist attitude. I have illustrated some of the core characteristics of the totalitarian attitude, and argued that it is simply not clear that human beings are prepared to live in an increasingly complex and pluralistic world. The urgency of an education that prepares human beings for pluralism and complexity becomes clear. The surprising eagerness with which totalitarianism has historically been embraced in the democratic countries of the West [11,63] suggests that it is a complex phenomenon requiring much more research. I have shown it is possible to outline in broad strokes the factors behind totalitarian responses. The complexity of pluralism can all too easily lead to a desire for simplification and anxiety reduction. This manifests as reductive, black-and-white solutions that present themselves as unambiguous, forceful, and lucid, guided by overarching values that allow one to 'take a stand' in the face of 'enemies' internal and external.

The simplistic, black-and-white future lies at the heart of both McWorld and the Global Jihad. Neither of Barber's options can accept the existence of a pluralistic world in which people with different beliefs, behaviors, traditions, and worldviews co-exist.
Both are totalitarian inasmuch as they are driven by the single-minded pursuit of one or two selected goals--whether economic or military conquest; in the case of the corporate fascism of the McWorld scenario, both apply. Everything that moves towards the goal(s) is supported; whatever does not support them is rejected and indeed eliminated. A complex world is reduced to stark simplicity with an either/or logic: either you are for us, or you are against us.

In the McWorld scenario, the totalitarian element would manifest most clearly in the necessity for perpetual war and perpetual threat, in order to raise the anxiety and fear of the population. The stress on the presence of an external enemy would paint any attempt at presenting alternatives at best as playing into the hands of the enemy, and at worst simply as treason. The encroachment on basic civil liberties would be forced, and eventually accepted, in the name of 'national security', and indeed patriotism. Support of the activities against the enemy would be considered not simply a badge of honor, but a basic prerequisite of citizenship. During periods of economic health, this would lead to a condition where silence around some political issues--typically foreign policy issues--would be considered an acceptable sacrifice, with the proviso that economic prosperity should continue. Given the dismal understanding of foreign policy and international affairs in many countries, and particularly in the US, where interest in foreign affairs is minimal anyway, this would not be a huge sacrifice. With the onset of economic hardship, the situation would likely become more unstable, and allegiance would have to be won through more dramatic enforcement as the population began to question the legitimacy of the government's activities and its allocation of resources.

An alternative to these bleak scenarios requires an education in pluralism, complexity, and creativity:

1. Education for pluralism--a recognition of difference and the possibility for creativity and unity in diversity, rather than unity at the expense of diversity or vice versa [12,62].

2. Education for complexity--the capacity to go beyond reductive thought and black-and-white logic towards what Morin has called "complex thought" [52].

3. Education in media literacy and the psychology of mass manipulation and self-deception, to create a vigilance regarding the possibility of the totalitarian mindset [63].

4. Education that includes the relationship between reason and emotion, anxiety, and the human capacity for self-deception [29]. This suggests the need for an education that is not just cognitive but addresses the whole person [50].

5. Education that includes a new emphasis on creating the conditions for co-existence, for mutual understanding, and for viewing pluralism as an opportunity for creativity [42,21,43].

6. Education for creativity--for the capacity to go beyond what is, to integrate new perspectives and new solutions, and to create new futures; developing the capacity to approach pluralism as an opportunity for creativity [7].

7.2. Pluralism

My stress on education for pluralism--understood not as schooling, but as a process of lifelong learning--emerges out of the previously cited evidence that pluralism is still an extremely problematic phenomenon, particularly cultural and political pluralism [13].
Both cognitively and affectively, pluralism and difference are more often than not considered disturbing, and the disequilibrium caused by this disturbance is seen as something to be reduced or eliminated. Worldwide, schooling is still largely ethnocentric. Research into genetics, language, evolution, cultural history, migration, and other areas has shown the incredible intertwining and interweaving of human beings over thousands of years [12]. And yet myths of cultural, 'racial', religious, and genetic purity are perpetuated by socialization and education, and contribute to extremely dangerous ideologies of superiority, inferiority, and profound intolerance [14]. Our planetary understanding of pluralism and diversity is still deeply flawed, and must be explored, engaged, and dialogued about if we are to create pluralistic futures.

7.3. Complex thought

Morin [52] has argued that the problem facing the present Western educational system is not a lack of available information, but a fundamentally problematic way of organizing knowledge. Morin argues that in the West the organization of knowledge is based on certain underlying principles he calls 'simple thought'. Simple thought is reductive, disjunctive, and uni-dimensional. Such thought is incapable of articulating and understanding the complexity of pluralism. Morin's magnum opus, the five-volume Method [45-51], consists of the development of 'complex thought'. Complex thought offers the possibility of an alternative to the totalitarian attitude: its organizing principles are dialogical, complex (in the sense of focusing on both part and whole, rather than one or the other, as in reductionism or holism), and multi-dimensional.

Morin's articulation of a paradigm of complexity avoids reductionism, whether reduction to the part, as in atomism, or to the whole, as in holism, and stresses unity in diversity and the interrelationship between part and whole. It avoids disjunction in favor of distinction and dialogical relations: rather than the separation of disjunction, it distinguishes without destroying the connection that makes a dialogical relationship (both/and) possible. The stress is also on multi- as opposed to uni-dimensionality, recognizing, for instance, the plurality of human manifestations as homo faber, homo ludens, homo economicus, and so on, or the capacity for both independence of judgment and conformity. Finally, the re-integration of the observer into the observed forces us to take a long hard look at the role we play in creating our own universe of meaning, and at the possibility of error and self-deception. Complexity, disorder, and uncertainty are not viewed as elements to be eliminated at all costs, but rather as inherent in our knowledge of the world, and as the very source of the change and transformation that can potentially keep an individual or a social system open and alive. Crucially for an understanding of pluralism, Morin stresses the notion of unitas multiplex, of unity in diversity. Unitas multiplex does not privilege unity over diversity or diversity over unity, but recognizes that the two can be dialogically linked in a way that is mutually beneficial.

7.4. Creativity

Creativity is often thought of as a phenomenon confined to the arts, or at best the arts and sciences. A study of the research on authoritarianism and creativity makes clear that the characteristics of the authoritarian attitude are in fact the mirror image of those of the creative person.
If intolerance of ambiguity is central to the anti-pluralist attitude, we find that tolerance of ambiguity is central to the creative attitude [40,6,18]. Creative persons are intrigued, stimulated, and motivated by the unfamiliar and unstructured, by situations and things for which there is no one clear solution or approach. This is the opposite of a fear of the unstructured and unfamiliar. It means enjoying, and being attracted to, situations for which there are no clear rules, no established roadmaps. Ambiguity destabilizes the mental equilibrium. It forces inquiry, exploration, and the creation of new ways of dealing with a situation. An unwillingness to allow or accept ambiguity means that the person confronted with ambiguity will immediately attempt to impose a pre-existing framework or set of rules on the situation, and will not remain open to the situation long enough to create a situation-specific way of dealing with it. Tolerance for ambiguity involves wanting to create one's own rules and roadmaps, and not immediately applying pre-existing ones. It means remaining open to possibilities, potentials, novelty, change, and difference.

Openness to experience, independence of judgment, a willingness to challenge assumptions, the exploration of possibilities, the refusal of premature closure, and paradoxical (as opposed to dichotomous, black-and-white) thinking: these are some of the characteristics of the creative person which, as Barron [6,7] went to great lengths to point out, should be seen as qualities that can be cultivated rather than as fixed, innate traits that one either has or has not. Already in 1941, Erich Fromm [25] discussed the inherent ambiguity of freedom--freedom means precisely that there is no unambiguous way one should think, feel, or act--and the human impulse to escape from this freedom. Barron [6] has written eloquently about the relationship between creativity and freedom, precisely because a broader view of creativity, as a creative attitude rather than as a gift confined to the arts and sciences, pertains to the creation of meaning and to the possibility of creating in order to be free. For Barron, being able to create meant being able to choose between habit and the existing order on the one hand, and difference, innovation, and change on the other. Freedom means the ability to create a plurality of choices for oneself and for others. Whatever one chooses to do, creativity gives us the choice, because it is the capacity to articulate and express our freedom, to explore alternatives. The tolerance for ambiguity creative individuals show lies precisely in the ability to suspend the need for pre-established ways of doing things and to attempt to make sense of the situation themselves.

If the totalitarian mindset seeks simplicity through the elimination of complexity and uncertainty, an alternative does present itself, one that thrives on complexity and creativity. Research on creative individuals, and by extension what I am calling the creative attitude [27], shows that the characteristics of the creative individual are the mirror image of those of the authoritarian person and the totalitarian attitude.
They include:

Tolerance for Ambiguity [7,18,34]
Independence of Judgment [6]
Openness to Experience [27]
Preference for Complexity [6,7]
Paradoxical or 'Janusian' (both/and) thinking [58]
Challenging of Assumptions [6]

The valorization and cultivation of these characteristics, and of a creative attitude, can serve as a safeguard against the totalitarian mindset, and assist us in developing an attitude that recognizes pluralism as an essential characteristic of non-totalitarian futures. Again, rather than seeing these as the fixed personality traits of creative geniuses, we can see them as components of a creative attitude, and as a heuristic device to remind us to avoid self-deception and consensus trance by making a choice to, for instance, challenge assumptions, remain open, tolerate ambiguity, not recoil from complexity, explore possibilities beyond black-and-white options, and so on.

7.5. Media literacy

The term media literacy has been used increasingly to refer to a process of education about the way the media can inform attitudes towards issues of race and gender. A pluralistic society must include a greater understanding of the nature of political and media manipulation of opinion, of the human capacity for self-deception, and of the willingness to 'escape from freedom'. I have tried to outline some of the basic factors in the creation of anti-pluralist conditions and the totalitarian mindset. Pluralism requires the ability to respond creatively to the challenge of complexity, not only through reduction (which may at times be necessary) but also through the ongoing creation of new frameworks for making sense of the world and incorporating the new, rather than falling back on pre-existing ways of knowing [42]. Understanding the way the media shape our present and our understanding of possible futures, and also understanding how the proliferation of media resources can be navigated to obtain a number of different perspectives on an issue, are becoming key competencies in a 'media-ted' world.

7.6. Creative dialogue

Pluralism also requires a form of dialogue and exchange that does more than immediately totalize and dichotomize, one that is open to ambiguity and to other perspectives without seeking immediate closure and the suppression of the voices of pluralism. The anti-pluralist approach to discourse is to eliminate the other's position and, if necessary, the very possibility of alternatives. The black-and-white, 'simple' logic of anti-pluralism is at the heart of what Tannen [68] calls the argument culture. In his research on the debate about the Vietnam war, Garrett [28] pointed out the following 'conceptual obstacles' that arose as the two sides confronted each other on the issue: (a) the either/or syndrome, the simple logic of black and white; (b) disguising first principles, or not making one's own assumptions and underlying beliefs transparent; (c) not seeing the other's principles, or not attempting to understand those of the other side; and (d) partial approaches, with the focus on only a small aspect of the debate which comes to represent the whole (pars pro toto), or apples and oranges, where the sides are debating what are in fact different issues. Garrett's important research clearly demonstrates the characteristics of what I have been calling an anti-pluralist discourse.
We must remember that in the emergency situation created by the totalitarian mindset, conflict is made to look as if it always appears in the image of extremity, whereas, in fact, it is actually the lack of recognition of the need for conflict, and of provision for appropriate forms for it, that leads to danger. This ultimate destructive form is frightening, but it also is not conflict. It is almost the reverse; it is the end result of the attempt to avoid and suppress conflict [4, p. 130]. In this way, civic discourse loses all creativity, all exploration and consideration of possibilities, all respect for pluralism and the expression of the different voices that can contribute to the development of alternative futures. It is this aspect of the totalitarian mindset that needs to be challenged: the identification with one position, one perspective, one view of the world at the exclusion of others, which is actually concerned largely with shutting down other voices. This deeply anti-democratic, anti-freedom, 'pseudo-conservative' perspective must be challenged if we are to retain pluralism in our discourse, and to cherish the value of the very democracy and pluralism we are trying to preserve. Democracy is based on respect for difference. Pluralism is a cornerstone of democracy. And yet little or no effort is made to explore and educate for better, more creative ways for these inevitable, and surely desirable, differences to coexist and communicate in mutually beneficial ways. In a pluralistic society, increasing emphasis must be placed on the development of basic skills in conflict resolution, dialogue, and communication [68,19-21,41].

7.7. Conclusion

In this essay I have outlined the characteristics of what I have called the totalitarian mindset. Under certain circumstances, human beings engage in patterns of thinking and behavior that are extremely closed and intolerant of difference and pluralism. These patterns lead us towards the creation of totalitarian futures. An awareness of how these patterns arise, how they can be generated and manipulated through the use of fear, and how totalitarianism plays into the human desire for 'absolute' answers and solutions can be used to increase awareness of, and resistance to, attempts at manipulation, and to guard against the danger of actively wanting to succumb to totalitarian solutions in times of stress and anxiety. I have also suggested a broader educational agenda for a pluralistic future, based on the assumption that the lived experience of pluralism is still largely unfamiliar and anxiety-inducing. Pluralism is generally not understood, and many myths of purity and racial or cultural superiority are still prevalent. Finally, as part of that agenda for education, I have stressed the importance of creativity as an adaptive capacity, as an attitude that allows individuals and groups to see pluralism as an opportunity for growth and positive change rather than simply for conflict.
References

[1] T.W. Adorno, Prejudice in the Interview Material, in: T.W. Adorno, E. Frenkel-Brunswik, D.J. Levinson, R.N. Sanford (Eds.), The Authoritarian Personality (Abridged), W.W. Norton, New York, 1982, pp. 297-345.
[2] G. Allport, The Nature of Prejudice, Anchor, Garden City, NY, 1958.
[3] S.E. Asch, Effects of Group Pressure upon the Modification and Distortion of Judgments, in: E.E. Maccoby, T.M. Newcomb, E.L. Hartley (Eds.), Readings in Social Psychology, 3rd ed., Holt, Rinehart and Winston, New York, 1958, pp. 174-183.
[4] J. Baker Miller, Toward a New Psychology of Women, Beacon, Boston, 1976.
[5] B. Barber, Jihad vs. McWorld, The Atlantic Monthly (1992) 53-65.
[6] F. Barron, Creativity and Psychological Health, Creative Education Foundation, Buffalo, NY, 1990.
[7] F. Barron, No Rootless Flower: Thoughts on an Ecology of Creativity, Hampton Press, Cresskill, NJ, 1994.
[8] E. Berne, The Structure and Dynamics of Organizations and Groups, Ballantine, New York, 1963.
[9] R. Bernstein, Beyond Objectivism and Relativism, University of Pennsylvania Press, Philadelphia, 1983.
[10] J. Block, J. Block, An Investigation of the Relationship between Intolerance of Ambiguity and Ethnocentrism, Journal of Personality 18 (1965) 303-311.
[11] G. Bocchi, M. Ceruti, Solidarity or Barbarism: A Europe of Diversity Against Ethnic Cleansing, Peter Lang, New York, 1997.
[12] G. Bocchi, M. Ceruti, The Narrative Universe, Hampton, Cresskill, NJ, 2002.
[13] G. Bocchi, M. Ceruti, E. Morin, Turbare il futuro: Un inizio per la civiltà planetaria [Disturbing the future: A beginning for planetary civilization], Moretti and Vitali, Bergamo, 1990.
[14] M. Callari Galli, M. Ceruti, T. Pievani, Pensare la diversità: Per un'educazione alla complessità umana [Thinking diversity: Towards an education for human complexity], Meltemi, Roma, 1998.
[15] N. Chomsky, Media Control, Seven Stories Press, New York, 2002.
[16] G. Claxton, The Innovative Mind, in: J. Henry (Ed.), Creative Management, Sage, London, 2001.
[17] A. Coulter, Treason, Crown Forum, New York, 2003.
[18] J.S. Dacey, Fundamentals of Creative Thinking, Lexington Books, Lexington, MA, 1989.
[19] R. Eisler, Cultural Evolution: Social Shifts and Phase Changes, in: E. Laszlo (Ed.), The New Evolutionary Paradigm, Gordon and Breach, New York, 1991.
[20] R. Eisler, The Partnership Society: Social Vision, Futures 21 (1989) 18-19.
[21] R. Eisler, The Chalice and the Blade, Harper and Row, San Francisco, 1987.
[22] J. Ellul, Propaganda: The Formation of Men's Attitudes, Random House, New York, 1977.
[23] S. Freud, Group Psychology and the Analysis of the Ego, W.W. Norton, New York, 1959.
[24] S. Freud, Character and Culture, W.W. Norton, New York, 1963.
[25] E. Fromm, Escape from Freedom, Avalon, New York, 1941.
[26] E. Fromm, The Sane Society, Fawcett, New York, 1955.
[27] E. Fromm, The Creative Attitude, in: H.H. Anderson (Ed.), Creativity and Its Cultivation, Harper and Row, New York, 1959, pp. 44-54.
[28] S. Garrett, Ideals and Reality: An Analysis of the Debate Over Vietnam, University Press of America, Washington, DC, 1979.
[29] D. Goleman, Vital Lies, Simple Truths: The Psychology of Self-Deception, Simon and Schuster, New York, 1985.
[30] F.I. Greenstein, Personality and Politics: Problems of Evidence, Inference, and Conceptualization, W.W. Norton, New York, 1975.
[31] A. Hitler, Mein Kampf, Houghton Mifflin, Boston, 1943.
[32] E. Hoffer, The True Believer: Thoughts on the Nature of Mass Movements, Harper and Row, New York, 1951.
[33] C.I. Hovland, I.L. Janis, H.H. Kelley, Communication and Persuasion: Psychological Studies of Opinion Change, Yale University Press, New Haven, 1953.
[34] P.W. Jackson, S. Messick, The Person, the Product, and the Response: Conceptual Problems in the Assessment of Creativity, Journal of Personality 33 (3) (1965) 309-329.
[35] C.G. Jung, The Portable Jung, R.F.C. Hull (Trans.), Penguin, Harmondsworth, 1971.
[36] R. Kane, Through the Moral Maze: Searching for Absolute Values in a Pluralistic World, North Castle Books, Armonk, NY, 1994.
[37] A. Koestler, Janus: A Summing Up, Picador, London, 1979.
[38] J.F. Lyotard, The Postmodern Condition: A Report on Knowledge, University of Minnesota Press, Minneapolis, 1984.
[39] M. Maruyama, Toward Cultural Symbiosis, in: E. Jantsch, C.H. Waddington (Eds.), Evolution and Consciousness: Human Systems in Transition, Addison-Wesley, Reading, MA, 1976, pp. 198-213.
[40] A. Montuori, Evolutionary Competence, Gieben, Amsterdam, 1989.
[41] A. Montuori, Social Creativity, Academic Discourse, and the Improvisation of Inquiry, ReVision (1998) 21-24.
[42] A. Montuori, Planetary Culture and the Crisis of the Future, World Futures: The Journal of General Evolution 54 (1) (1999) 232-254.
[43] A. Montuori, I. Conti, From Power to Partnership, Harper, San Francisco, 1993.
[44] E. Morin, B. Kern, Homeland Earth: A Manifesto for the New Millennium, Hampton Press, Cresskill, NJ, 1998.
[45] E. Morin, La Méthode: La Nature de la Nature [The nature of nature], vol. 1, Seuil, Paris, 1977.
[46] E. Morin, La Méthode: La Vie de la Vie [The life of life], vol. 2, Seuil, Paris, 1980.
[47] E. Morin, Science avec conscience [Science with conscience], Fayard, Paris, 1984.
[48] E. Morin, La Méthode: Les Idées [Ideas], vol. 4, Seuil, Paris, 1991.
[49] E. Morin, Method: Towards a Study of Humankind, vol. 1: The Nature of Nature, Peter Lang, New York, 1992.
[50] E. Morin, La complexité humaine [Human complexity], Flammarion, Paris, 1994.
[51] E. Morin, La Méthode: L'identité humaine [Human identity], vol. 5, Seuil, Paris, 2001.
[52] E. Morin, On Complexity, Hampton Press, Cresskill, NJ, 2004, in press.
[53] S. Moscovici, The Age of the Crowd: A Historical Treatise on Mass Psychology, Cambridge University Press, Cambridge, 1985.
[54] J. Ogilvy, Many-Dimensional Man: Decentralizing Self, Society, and the Sacred, Harper, New York, 1977.
[55] M. Piattelli Palmarini, L'arte di persuadere [The art of persuasion], Mondadori, Milano, 1995.
[56] M. Rokeach, The Open and Closed Mind, Basic Books, New York, 1960.
[57] M. Rokeach, The Nature of Human Values, Free Press, New York, 1973.
[58] A. Rothenberg, The Emerging Goddess: The Creative Process in Art, Science, and Other Fields, University of Chicago Press, Chicago, 1979.
[59] D. Rushkoff, Coercion: Why We Listen to What They Say, Riverhead, New York, 1999.
[60] E.E. Sampson, Dealing with Differences: An Introduction to the Social Psychology of Prejudice, Harcourt Brace, Fort Worth, 1999.
[61] N. Sanford, Authoritarian Personality in Contemporary Perspective, in: J. Knutson (Ed.), Handbook of Political Psychology, Jossey-Bass, San Francisco, 1973, pp. 139-170.
[62] Z. Sardar, Rescuing All Our Futures, Praeger, Westport, CT, 1999.
[63] Z. Sardar, Postmodernism and the Other, Pluto Press, London, 1999.
[64] Z. Sardar, M.W. Davies, Why Do People Hate America?, The Disinformation Press, New York, 2002.
[65] W. Sargant, Battle for the Mind: How Evangelists, Psychiatrists, Politicians, and Medicine Men Can Change Your Beliefs and Behavior, Malor Books, Cambridge, MA, 1997.
[66] W.F. Stone, G. Lederer, R. Christie, Strength and Weakness: The Authoritarian Personality Today, Springer, New York, 1993.
[67] P. Suedfeld, P.E. Tetlock, S. Streufert, Conceptual/Integrative Complexity, in: C. Smith (Ed.), Motivation and Personality: Handbook of Thematic Content Analysis, Cambridge University Press, New York, 1992, pp. 393-400.
[68] D. Tannen, The Argument Culture: Moving from Debate to Dialogue, Random House, New York, 1998.
[69] R.A. Wilson, Prometheus Rising, New Falcon Press, Phoenix, 1983.
[70] R.A. Wilson, Quantum Psychology, New Falcon Press, Phoenix, 1993.

Alfonso Montuori *
California Institute of Integral Studies, San Francisco, CA, USA
* Tel.: +1-415-398-6964; fax: +1-415-398-6964.
E-mail address: amontuori at ciis.edu

From checker at panix.com Fri Dec 30 21:10:55 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 30 Dec 2005 16:10:55 -0500 (EST)
Subject: [Paleopsych] Re: Meme 052: The Inverted Demographic Pyramid
In-Reply-To: <62710.216.194.122.18.1133583859.squirrel@mail.npsis.com>
References: <62710.216.194.122.18.1133583859.squirrel@mail.npsis.com>
Message-ID: 

Thanks for this, belatedly. I don't see how what you are saying is incompatible with my general idea that the trade-off function was largely set in the Stone Age. Maybe you need to elaborate on what particularity is.

On 2005-12-02, jgardner at effectiveaction.com opined [message unchanged below]:

> Date: Fri, 2 Dec 2005 21:24:19 -0700 (MST)
> From: jgardner at effectiveaction.com
> To: Premise Checker
> Cc: paleopsych at paleopsych.org
> Subject: Re: Meme 052: The Inverted Demographic Pyramid
>
> How about a simpler explanation?
>
> Sub-groups of many species weed themselves out by being too particular. (One thing about the rich: they are definitively too particular.)
>
> Particularity is a good way to arrive at individual success, but a very poor one to regenerate group dominance.
>
> This is why the future belongs to China and India. Their top 2% of the ability hierarchy isn't any greater than North America's, the Mideast's, or Europe's. But it's a lot bigger. And shitloads less particular.
>
>> Meme 052: The Inverted Demographic Pyramid
>> by Frank Forman
>> sent 5.12.2
>>
>> The inverted demographic pyramid--the richer and more able having fewer children--has been a problem for evolutionary theory ever since Francis Galton. My solution is that the decision about whether to have more or fewer children is determined by a trade-off function set in the Stone Age. What parents consider to be adequate support for a child is determined more by what their peers do than by the objective facts of the situation today, which would indicate a much larger advantage for the better off than in the Stone Age, where incomes were far more equal. But we listen to the "whispering genes within" rather than accept any factual studies that back up the Ninety-Six Percent Rule, namely that 96% of parents don't matter much one way or the other.
>>
>> An article in the New York Times, shown below, about a surprising 26% increase in the number of children age 0-5 in Manhattan between 2000 and 2004 induced these reflections.
>> This increase is probably just an effect of greater income inequality in recent years, not a sudden reversal of the inverted demographic pyramid. This paradox, as we all know, has caused some to question the whole selfish-gene sociobiological paradigm, and with good reason, though I try to make a good crack at saving the paradigm here.
>>
>> Animals in any species can choose, within limits, whether to pursue an r strategy (mnemonic: reproduce, reproduce, reproduce) of many offspring with little parental investment per child or the K (mnemonic: Kwality) strategy of few children but high investment per child.
>>
>> The trade-off *function* was mostly set in the Stone Age. Conditions have changed, and rich parents should be able to have far, far more children than the poor, since income inequality is far greater today than then by, I think, every account by anthropologists.
>>
>> But when you ask rich parents why they don't pack them in like they do in the barrios, you get told that that would be indecent and inadequate, with a vehemence that befits moral absolutes.
>>
>> What's going on is that one's standard of decency or adequacy is set not by thinking about Stone Age environments, nor by comparison with those who lead far longer lives in the barrios and ghettos and whatever Asian immigrants cram themselves into than Stone Age man ever did, but by comparison with one's peers. Your neighbors surround their children with a big house, give them an expensive education, and so on. The Stone Age genes within you whisper that if you don't do these things for your kids, they will not have their own children and you will have no grandchildren. You will ignore any studies by Judith Rich Harris that affirm the Ninety-Six Percent Rule, that only the worst and best two percent of parents make a measurable difference in how your kids turn out. You will reject showings by economists that educational credentials count for little beyond helping your children get their first jobs. You look at only a small slice of the population, namely your peers, in which effort does seem to matter more than innate factors.
>>
>> Indeed, the big brains of primates are geared primarily to getting along with one's fellows (thus allowing for greater and more complex social cooperation) rather than to maneuvering the physical environment by finding out what is really out there. It's an accidental byproduct of blue eyes and flaming red (or blond) hair (my "Maureen Dowd Theory of Western Civilization") that triggered off a larger regard for objectivity. Mr. Mencken was often given to noting how weak this regard is, even in America, especially in America, but he did not know the rest of the world.
>>
>> There are other factors involved in the inverted demographic pyramid. Our drives work only remotely, and there is no drive for maximizing inclusive reproductive fitness directly. (I don't need to beat yet another drum for group selection here.) Of the Sixteen Basic Drives Steven Reiss has identified through factor analysis, Romance certainly seems closely related, this drive (no. 2 on my personal list) including acts of coitus and also having aesthetic experiences. (I can't logic out the connection, but these three are correlated so much on questionnaires that they cluster into a single drive.)
>> The desire to raise one's own children (NOT clustered with the drive to raise adopted children) would also seem to weigh heavily in the selfish gene model. (It's no. 6 on my list, ranked that high not because I have spent a great deal of time, Kwality or not, with my children, but because I chose to give up the teaching job I really would have much preferred. Spending lots of time with them does not satisfy my no. 1 drive, Curiosity, all that well. I'd rather read books!)
>>
>> Indeed, Curiosity, which is so much more satisfiable today than in the Stone Age (a supply-side change), could well be responsible for a large part of the inverted demographic pyramid. I suggest that those having higher incomes (correlated 0.5 with intelligence, but making a huge difference between populations then and now) will purchase relatively more satisfaction of this drive today, with the result of having relatively fewer children, than they would have back in the Stone Age.
>>
>> There's also the drive for Status (no. 14 out of 16 on my list, which explains why we chose to live in an inexpensive apartment in a high-toned neighborhood and let the neighbors snub as they may, as some did), which means that parents will spend on their children to impress their peers as well as to actually help their kids. This may also be more readily satisfiable today than then. I don't know.
>>
>> And so on, through the rest of the Reiss list. I resend my meme on them at the end by the simple expedient of typing ctrl-R||enter. Neat, isn't it? That's what a UNIX shell account gives you. I'm just giving a framework for speculation. The hard work of empirically weighing the changes in supply and demand for the drives, which as I said are only loosely connected to reproductive success, begins. It will be a nearly impossible task to do with full scientific rigor, since we don't know all that much about the EEA. But, once again, don't compare your findings against a perfectionist model but merely with *competing* explanations, any more than you should compare the actual workings of the market with an ideal government that would correct market defects. P.S. I'm not a Reissian fundamentalist: it's just that he has provided me with one of my many filters with which to view the world.
>>
>> Some of the respondents to Dan Vining's 1986 Behavioral and Brain Sciences target article, "Social versus Reproductive Success: The Central Theoretical Problem with Human Sociobiology" (9:167-216), did hint at the trade-offs among desires, but only indirectly, as none were economists. My own alleged expertise in the subject at least urges me to look at a trade-off function that may have changed not inconsiderably on the demand side: that for curiosity and objectivity caused by the Maureen Dowd factor may be hugely important for the West versus the Rest. But the biggest changes are in the supply of ways to satisfy the Reiss desires. It is the changes on the supply side that apparently outweigh the changes in demand, since the inverted demographic pyramid is common to rich countries and not just to the West.
>>
>> In any case, I hope I've managed to introduce some economic reasoning to more fully explain the inverted demographic pyramid. Enthusiastic eugenicists will have a terrific task ahead to change the demand and supply curves. One of Reiss' drives is Idealism (no. 7 on my list), but the sorts of questions he asked were heavy into redistribution.
>> We know, or should know, that the enthusiasm for redistribution is hyped up by the huge influence of 20th-century leftists in the education business. Issues were--and still are, there being a lot of momentum (a/k/a culture lag)--largely framed in these terms, much as debates in the Middle Ages were framed in Christian terms. Hauling in manufactured emotions will be easier than changing underlying biologies, at least until Designer Children come along.
>>
>> ------
>>
>> Manhattan's Little Ones Come in Bigger Numbers
>> http://www.nytimes.com/2005/12/01/nyregion/01kids.html
>>
>> By EDUARDO PORTER
>>
>> The sidewalks crowded with strollers, the panoply of new clubs catering to the toddler set and the trail of cupcake crumbs that seem to crisscross Manhattan are proof: The children are back.
>>
>> After a decade of steady decline, the number of children under 5 in Manhattan increased more than 26 percent from 2000 to 2004, census estimates show, surpassing the 8 percent increase in small children citywide during the same period and vastly outstripping the slight overall growth in population in the borough and city.
>>
>> Even as soaring house prices have continued to lift the cost of raising a family beyond the means of many Americans, the borough's preschool population reached almost 97,000 last year, the most since the 1960's.
>>
>> This increase has perplexed social scientists, who have grown used to seeing Manhattan families disappear into Brooklyn and New Jersey, and it has pushed the borough into 11th place among New York State counties ranked by percentage of population under 5. In 2000, fewer than one in 20 Manhattan residents were under 5, putting the borough in 58th place.
>>
>> "Potentially this is very good news for New York," said Kathleen Gerson, a professor of sociology at New York University. "It depends on whether this is a short-term blip or a long-term trend. We must understand what explains the rise."
>>
>> Indeed, nobody can say for sure what caused the baby boom, but several factors clearly played a part.
>>
>> The city's growing cohorts of immigrants may have contributed, as the number of children in Manhattan born to foreign-born parents has risen slightly since the 1990's. But other social scientists say that the number of births is growing at the other end of the income scale.
>>
>> "I wouldn't be surprised if it had to do with more rich families having babies and staying in Manhattan," said Andrew A. Beveridge, a professor of sociology at Queens College.
>>
>> According to census data, 16.4 percent of Manhattan families earned more than $200,000 last year, up from 13.7 percent in 2000.
>>
>> Kathryne Lyons, 40, a mother of two who left her job as a vice president of a commercial real estate firm when her second daughter was born three years ago, acknowledges that having children in the city is a tougher proposition if one cannot afford nannies, music lessons and other amenities, which, as the wife of an investment banker, she can. "It's much more difficult to be here and not be well to do."
>>
>> Over the past few years, New York has become more family-friendly, clearly benefiting from the perception that the city's quality of life is improving. Test scores in public schools have improved, and according to F.B.I. statistics, New York is the nation's safest large city.
>> Sociologists and city officials believe that these improvements in the quality of life in Manhattan may have stanched the suburban flight that occurred in the 1990's. And while Manhattan lacks big backyards for children to play in, it offers a packed selection of services, which can be especially useful for working mothers.
>>
>> In fact, the baby boomlet also may pose challenges to a borough that in many ways struggles to serve its young. According to Childcare Inc., day care centers in the city have enough slots for only one in five babies under age 3 who need it.
>>
>> And while census figures show that children over 5 have continued to decline as a percentage of the Manhattan population, if the children under 5 stay, they could well put extra stress on the city's public and private school systems, already strained beyond capacity in some neighborhoods. Private preschools and kindergartens "are already more difficult to get into than college," said Amanda Uhry, who owns Manhattan Private School Advisors.
>>
>> So who are these children? Robert Smith, a sociologist at Baruch College who is an expert on the growing Mexican immigration to the city, argued that the children of Mexican immigrants - many of whom live in the El Barrio neighborhood in East Harlem - are a big part of the story.
>>
>> But this is unlikely to account for all of the increase. For example, in 2003, fewer than 1,000 babies were born to Mexican mothers living in Manhattan. And births to Dominicans, the largest immigrant group in the city, have fallen sharply.
>>
>> Some scholars suspect that a substantial part of Manhattan's surge is being driven by homegrown forces: namely, the decision by professionals to raise their families here.
>>
>> Consider the case of Tim and Lucinda Karter. Despite the cost of having a family in the city, Ms. Karter, a 38-year-old literary agent, and her husband, an editor at a publishing house, stayed in Manhattan to have their two daughters, Eleanor and Sarah.
>>
>> They had Eleanor seven and a half years ago while living in a one-bedroom apartment near Gracie Mansion on the Upper East Side. Then they bought the apartment next door and completed an expansion of their home into a four-bedroom apartment two years ago. A little less than a year ago, they had Sarah.
>>
>> "Manhattan is a fabulous, stimulating place to raise a child," Ms. Karter said. "We didn't plan it but we just delayed the situation. We were just carving away and then there was room."
>>
>> The city's businesses and institutions are responding to the rising toddler population. Three years ago, the Metropolitan Museum of Art began a family initiative including programs geared to children 3 and older.
>>
>> The Museum of Modern Art has programs for those as young as 4. In January, Andy Stenzler and a group of partners opened Kidville, a 16,000-square-foot smorgasbord of activities for children under 5 - and their parents - on the Upper East Side.
>>
>> "We were looking for a concentration of young people," Mr. Stenzler said. "There are 16,000 kids under 5 between 60th and 96th Streets."
>>
>> Many of the new offerings reflect the wealth of the parents who have decided to call Manhattan home.
>> Citibabes, which opened in TriBeCa last month, provides everything from a gym and workplaces with Internet connections for parents, to science lessons, language classes and baby yoga for their children. It charges families $6,250 for unlimited membership for three months.
>>
>> Manhattan preschools can charge $23,000 a year. Ms. Uhry, with Private School Advisors, charges parents $6,000 a year just to coach them through the application process to get their children in.
>>
>> Yet in spite of the high costs, small spaces and infuriating extras that seem unique to Manhattan - like the preschools that require an I.Q. test - many parents would never live anywhere else.
>>
>> "Manhattan has always been a great place for raising your children," said Lori Robinson, the president of the New Mommies Network, a networking project for mothers on the Upper West Side. "It's easier to be in the city with a baby. It's less isolation. You feel you are part of society."
>>
>> -------------
>>
>> Meme 023: Steven Reiss' 16 Basic Desires
>> 3.9.21
>>
>> Here are the results of research into the basic human desires. I've ordered them by what I think is my own hierarchy and invite you to do the same for yourself and for historical personages, like Ayn Rand. This list is not only important in its own right but has great implications for one's political concerns. Curiosity being my highest desire, I am an advocate of what I call the "information state," whereby the major function of the central government is the production of information and research. (Currently, it occupies at most two percent of U.S. federal spending.) And since independence is no. 3 for me, I am close to being a libertarian, in the sense that I'd vote with Ron Paul on most issues. But someone for whom independence is his most basic desire would advocate a full libertarian order and impose it on states and counties. On the other hand, an idealist could advocate massive redistribution programs from rich to poor and military intervention in foreign countries that do not live up to his standards. I simply care much less than he does about such matters.
>>
>> The task of designing a state, or a world federal order, that reflects the diversity of desires and not just "this is what I want the world to be" continues.
>>
>> STEVEN REISS' 16 BASIC DESIRES
>>
>> Curiosity. The desire to explore and learn. End: knowledge, truth.
>> Romance. The desire for love and sex. Includes a desire for aesthetic experiences. End: beauty, sex.
>> Independence. The desire for self-reliance. End: freedom, ego integrity.
>> Saving. Includes the desire to collect things as well as to accumulate wealth. End: collection, property.
>>
>> Order. The desire for organization and for a predictable environment. End: cleanliness, stability, organization.
>> Family. The desire to raise one's own children. Does not include the desire to raise other people's children. End: children.
>> Idealism. The desire to improve society. Includes a desire for social justice. End: fairness, justice.
>> Exercise. The desire to move one's muscles. End: fitness.
>>
>> Acceptance. The desire for inclusion. Includes reaction to criticism and rejection. End: positive self-image, self-worth.
>> Social Contact. The desire for companionship. Includes the desire for fun and pleasure. End: friendship, fun.
>> Honor. The desire to be loyal to one's parents and heritage. End: morality, character, loyalty.
>> Power. The desire for influence including mastery, leadership, and dominance. End: achievement, competence, mastery.
>>
>> Vengeance. The desire to get even. Includes the joy of competition. End: winning, aggression.
>> Status. The desire for social standing. Includes a desire for attention. End: wealth, titles, attention, awards.
>> Tranquility. The desire for emotional calm, to be free of anxiety, fear, and pain. End: relaxation, safety.
>> Eating. The desire to consume food. End: food, dining, hunting.
>>
>> Source: Steven Reiss, _Who Am I?: The 16 Basic Desires That Motivate Our Actions and Define Our Personalities_, Penguin Putnam: Jeremy P. Tarcher/Putnam, New York, 2000. I have changed his exact wordings in a few places, based upon the fuller descriptions in his book and upon his other writings. The ends given in the table are taken directly from page 31.
>>
>> The desires are directed to the psychological (not immediately biological) ends of actions, not to actions as means toward other actions. He has determined the basic ends by the use of factor analysis, a technique pioneered by Raymond B. Cattell. Spirituality, for example, he finds is distributed over the other desires and is not an end statistically independent of other ends. And he finds that the desire for aesthetic experiences is so closely correlated with romance that he subsumes it thereunder.
>>
>> Reiss' list is in no particular order, and so, after much reflection, not only upon my thinking but upon my actual behavior, I have ranked the desires by what I think is my own hierarchy.
>>
>> A few remarks, directed to those who know me, are in order:
>> Saving: Not much good at keeping within my budget, I have a relatively big pension coming and have a large collection of recordings of classical music and books.
>> Order: While my office and home are in a mess, I have written a number of extremely well-organized discographies.
>> Family: Not always an attentive father, I have kept at a job I've not always liked, instead of starting over again as an assistant professor.
>> Idealism: I took the description from an earlier article by Reiss, so as not to restrict it to state redistribution of income.
>> Exercise: I am well-known for my running and for having entered (legally) the Boston Marathon, but I usually just set myself a daily routine and don't go canoeing, for example, when on vacations. In high school, I was notorious for avoiding exercise.
>> Acceptance: I can be rather sensitive to being ignored, though I don't do much about it in fact.
>> Social Contact: Fun, for me, is intellectual discussion, often with playful allusions to words and ideas.
>> Honor: I'm very low on patriotism, but I do like to think of myself as having good character.
>> Vengeance: I've been told I love to win arguments for their own sake, but I have only a small desire ever to get even and never act upon it.
>>
>> [I am sending forth these memes, not because I agree wholeheartedly with all of them, but to impregnate females of both sexes. Ponder them and spread them.]
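The trade-off function at the center of the meme can be made concrete with a toy model. What follows is a minimal sketch in Python; it is purely illustrative, and every quantity in it (the fixed budget, the saturation constant k, the squared survival curve) is an assumption chosen only so that an interior optimum exists, not an estimate of anything real:

    # Toy r-vs-K trade-off: a fixed child-rearing budget is split across
    # children; each child's chance of surviving to reproduce rises with
    # the investment it receives, with diminishing returns.
    # All numbers below are illustrative assumptions, not data.

    def expected_grandchildren(budget, inv_per_child, k):
        """Expected number of offspring that survive to reproduce.

        k is the investment level at which the survival curve bends.
        On this reading, the meme's claim is that perceived k is set
        by one's peers (and ultimately by Stone Age conditions), not
        by today's objective facts.
        """
        n_children = budget / inv_per_child
        p_reproduce = (inv_per_child / (inv_per_child + k)) ** 2
        return n_children * p_reproduce

    # With budget = 16 and k = 2, the optimum sits at inv_per_child = k:
    # packing them in (r) and lavishing each child (K) both do worse.
    for inv in (0.5, 1.0, 2.0, 4.0, 8.0):
        print(inv, round(expected_grandchildren(16, inv, 2.0), 2))

Raising the budget (today's rich) only scales the curve; the optimum stays at inv_per_child = k, so the optimal family size is budget/k. If peer comparison inflates the perceived k in step with income, family size stays flat even as wealth grows, which is one way to formalize the claim that the whispering genes track peers rather than objective conditions.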