From HowlBloom at aol.com Thu Sep 1 01:59:14 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Wed, 31 Aug 2005 21:59:14 EDT Subject: [Paleopsych] Eshel, Joel, Paul, and Pavel--not to mention Ted and Greg Message-ID: Pavel, Joel, Paul, and Eshel -- See if I've understood the following article correctly. In this cosmos things don't follow the sort of random spread of probabilities Ludwig Boltzmann believed in. Instead, old patterns jump from one level to another, showing up in new news. To understand the size and nature of the jumps, we have to understand something even deeper - the search strategies with which the cosmos explores what Stuart Kauffman calls "possibility space". The key quote from the article below is this one: "if physicists can adequately understand the details of this 'exploring behaviour', they should be able to predict values of q from first principles". Now please bear with me. What I've been digging into for many decades is the manner in which the cosmos feels out her possibilities - the search strategies of nature. So have Eshel Ben-Jacob, Paul Werbos, Pavel Kurakin, and Joel Isaacson. Pavel and I, in our paper "Conversation (dialog) model of quantum transitions" (arXiv.org), suggest that we may find applications all up and down the scale of nature to one search strategy in particular, that used by a cloud of 20,000 smart particles - bees. Power laws help us move from the human-scale to the very big. They help us understand how patterns visible on one scale - the scale of the spiral of water that flushes your toilet, for example - can be scaled up to hurricanes, to the vortex of the Red Spot on the surface of Jupiter, to hurricanes on Jupiter the size of thirty earths, and to the spirals of billions of stars called galaxies. Power laws or their equivalent also allow us to predict that if we give the cosmos another six or seven billion years, the spirals from your toilet will show up in swirls of multitudes of galaxies - they will show up in today's potato-shaped, still-embryonic galaxy clusters. Power laws can be used in forward or reverse. In addition to going from the small to the very big, they can help us move from the human-scale to the very small. Power laws help us understand how the swirl in your bathtub shows up in the swizzles of electrons twisting through a channel on a superconductor. On the level of life, we can see search patterns at work, search patterns in Dennis Bray's clusters of receptors on a cell wall, search patterns in Eshel Ben-Jacob's multi-trillion-member bacterial colonies, search patterns in Tom Seeley's colonies of bees, search patterns in E.O. Wilson's colonies of ants, and search patterns in colonies of termites. We can see search patterns in the motions of birds, and in the way these patterns have been modeled mathematically in Floys (mathematically-generated flocks of carnivorous Boids - see http://www.aridolan.com/ofiles/JavaFloys.html). We can see search patterns in Martha Sherwood's vampire bats, and search patterns in the areas of my fieldwork--human cultural fads and fashions and the multi-generational search patterns of art, religion, ideology, world-views, and science. If search patterns are the key to understanding the factor q, if they are the key to comprehending the magic factor that scales things up and down in giant, discontinuous leaps, then let's by all means take search patterns at the scale of life and apply them like hell. That's exactly what Pavel Kurakin and I have done in our paper.
And it's what I've done in much of my work, including in a book that's been growing in the Bloom computer for fifteen years -- A Universe In Search Of Herself: The Case of the Curious Cosmos. Now the question is this. Have I misinterpreted the material below? And even if I've mangled it utterly, could an understanding of search patterns at one scale in the cosmos echo the patterns at other levels big and small? If the search patterns of life are reflected in the inanimate cosmos, do the search patterns of life in turn reflect the search patterns of the particles and processes of which they are made? And do the search patterns of an organism reflect the search patterns of her flock, her tribe, her culture, and of the total team of biomass? To what extent are competing search patterns a part of the cosmic process? Did competing search patterns only show up 3.85 billion years ago with the advent of life (assuming that the advent of life on earth took place at the same time as the advent of life - if there is any - elsewhere in the universe)? Are the gaps between competing search patterns also big ones, with their own chasms and jumps spaced out by their own mysterious q? Biomass has been probing this planet for 3.85 billion years now, and we are the fingers with which she feels out her possibilities. But we are also the fingers through which social clusters of protons 13.7 billion years old feel out their future. Is q related to the discipline of a probing strategy? Retrieved August 31, 2005, from the World Wide Web http://www.newscientist.com/channel/fundamentals/mg18725141.700 Entropy: The new order * 27 August 2005 * From New Scientist Print Edition * Mark Buchanan CONSTANTINO TSALLIS has a single equation written on the blackboard in his office. It looks like one of the most famous equations in physics, but look more closely and it's a little bit different, decorated with some extra symbols and warped into a peculiar new form. Tsallis, based at the Brazilian Centre for Research in Physics, Rio de Janeiro, is excited to have created this new equation. And no wonder: his unassuming arrangement of symbols has stimulated hundreds of researchers to publish more than a thousand papers in the past decade, describing strange patterns in fluid flows, in magnetic fields issuing from the sun and in the subatomic debris created in particle accelerators. But there is something even more remarkable about Tsallis's equation: it came to him in a daydream. In 1985, in a classroom in Mexico City, Tsallis was listening as a colleague explained something to a student. On the chalkboard they had written a very ordinary algebraic expression, p^q, representing some number p raised to the power q. In Tsallis's line of work - describing the collective properties of large numbers of particles - the letter "p" usually stands for probability: the probability that a particle will have a particular velocity, say. Tsallis stared at the formula from a distance and his mind drifted off. "There were these p^qs all over the board," he recalls, "and it suddenly came to my mind - like a flash - that with powers of probabilities one might do some unusual but possibly quite interesting physics." The physics involved may be more than quite interesting, however.
The standard means of describing the collective properties of large numbers of particles - known as statistical mechanics - has been hugely successful for more than a century, but it has also been rather limited in its scope: you can only apply it to a narrow range of systems. Now, with an insight plucked out of thin air, Tsallis may have changed all that. Developed in the 19th century, statistical mechanics enabled physicists to overcome an imposing problem. Ordinary materials such as water, iron or glass are made of myriad atoms. But since it is impossible to calculate in perfect detail how every individual atom or molecule will move, it seems as if it might never be possible to understand the behaviour of such substances at the atomic level. The solution, as first suggested by the Austrian physicist Ludwig Boltzmann, lay in giving up hope of perfect understanding and working with probabilities instead. Boltzmann argued that knowing the probabilities for the particles to be in any of their various possible configurations would enable someone to work out the overall properties of the system. Going one step further, he also made a bold and insightful guess about these probabilities - that any of the many conceivable configurations for the particles would be equally probable. Deeper beauty Boltzmann's idea works, and has enabled physicists to make mathematical models of thousands of real materials, from simple crystals to superconductors. But his work also has a deeper beauty. For a start, it reflects the fact that many quantities in nature - such as the velocities of molecules in a gas - follow "normal" statistics. That is, they are closely grouped around the average value, with a "bell curve" distribution. The theory also explains why, if left to their own devices, systems tend to get disorganised. Boltzmann argued that any system that can be in several different configurations is most likely to be in the more spread out and disorganised condition. Air molecules in a box, for example, can gather neatly in a corner, but are more likely to fill the space evenly. That's because there are overwhelmingly more arrangements of the particles that will produce the spread out, jumbled state than arrangements that will concentrate the molecules in a corner. This way of thinking led to the famous notion of entropy - a measure of the amount of disorder in a system. In its most elegant formulation, Boltzmann's statistical mechanics, which was later developed mathematically by the American physicist Josiah Willard Gibbs, asserts that, under many conditions, a physical system will act so as to maximise its entropy. And yet Boltzmann and Gibbs's statistical mechanics doesn't explain everything: a great swathe of nature eludes its grasp entirely. Boltzmann's guess about equal probabilities only works for systems that have settled down to equilibrium, enjoying, for example, the same temperature throughout. The theory fails in any system where destabilising external sources of energy are at work, such as the haphazard motion of turbulent fluids or the fluctuating energies of cosmic rays. These systems don't follow normal statistics, but another pattern instead. If you repeatedly measure the difference in fluid velocity at two distinct points in a turbulent fluid, for instance, the probability of finding a particular velocity difference is roughly proportional to the amount of that difference raised to the power of some exponent. 
This pattern is known as a "power law", and such patterns turn up in many other areas of physics, from the distribution of energies of cosmic rays to the fluctuations of river levels or wind speeds over a desert. Because ordinary statistical mechanics doesn't explain power laws, their atomic-level origins remain largely mysterious, which is why many physicists find Tsallis's mathematics so enticing. In Mexico City, coming out of his reverie, Tsallis wrote up some notes on his idea, and soon came to a formula that looked something like the standard formula for the Boltzmann-Gibbs entropy - but with a subtle difference. If he set q to 1 in the formula - so that p^q became the probability p - the new formula reduced to the old one. But if q was not equal to 1, it made the formula produce something else. This led Tsallis to a new definition of entropy. He called it q entropy. Back then, Tsallis had no idea what q might actually signify, but the way this new entropy worked mathematically suggested he might be on to something. In particular, the power-law pattern tumbles out of the theory quite naturally. Over the past decade, researchers have shown that Tsallis's mathematics seem to describe power-law behaviour accurately in a wide range of phenomena, from fluid turbulence to the debris created in the collisions of high-energy particles. But while the idea of maximising q entropy seems to work empirically, allowing people to fit their data to power-law curves and come up with a value of q for individual systems, it has also landed Tsallis in some hot water. The new mathematics seems to work, yet no one knows what the q entropy really represents, or why any physical system should maximise it. Degrees of chaos And for this reason, many physicists remain sceptical, or worse. "I have to say that I don't buy it at all," says physicist Cosma Shalizi of the University of Michigan in Ann Arbor, who criticises the mathematical foundations of Tsallis's approach. As he points out, the usual Boltzmann procedure for maximising the entropy in statistical mechanics assumes a fixed value for the average energy of a system, a natural idea. But to make things work out within the Tsallis framework, researchers have to fix the value of another quantity - a "generalised" energy - that has no clear physical interpretation. "I have yet to encounter anyone," says Shalizi, "who can explain why this should be natural." To his critics, Tsallis's success is little more than sleight of hand: the equation may simply provide a convenient way to generate power laws, which researchers can then fit to data by choosing the right value of q. "My impression," says Guido Caldarelli of La Sapienza University in Rome, "is that the method really just fits data by adjusting a parameter. I'm not yet convinced there's new physics here." Physicist Peter Grassberger of the University of Wuppertal in Germany goes further. "It is all nonsense," he says. "It has led to no new predictions, nor is it based on rational arguments." The problem is that most work applying Tsallis's ideas has simply chosen a value of q to make the theory fit empirical data, without tying q to the real dynamics of the system in any deeper way: there's nothing to show why these dynamics depart from Boltzmann's picture of equal probabilities. Tsallis, who is now at the Santa Fe Institute in New Mexico, acknowledges this is a limitation, but suggests that a more fundamental explanation is already on its way.
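[Aside, not part of the New Scientist article: for readers who want the formulas the story describes only in words, the standard published forms are, in LaTeX notation, the Boltzmann-Gibbs entropy S_{BG} = -k \sum_i p_i \ln p_i and the Tsallis q entropy S_q = k \left(1 - \sum_i p_i^q\right)/(q - 1). In the limit q \to 1 the Tsallis expression reduces to the Boltzmann-Gibbs one, which is the "set q to 1" reduction described above; and a "power law" here means a distribution of roughly the form P(x) \propto x^{-\alpha}, in contrast to the exponential distributions that standard statistical mechanics predicts for systems at equilibrium.]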
Power laws, he argues, should tend to arise in "weakly chaotic" systems. In this kind of system, small perturbations might not be enough to alter the arrangement of molecules. As a result, the system won't "explore" all possible configurations over time. In a properly chaotic system, on the other hand, even tiny perturbations will keep sending the system into new configurations, allowing it to explore all its states as required for Boltzmann statistics. Tsallis argues that if physicists can adequately understand the details of this "exploring behaviour", they should be able to predict values of q from first principles. In particular, he proposes, some as yet unknown single parameter - closely akin to q - should describe the degree of chaos in any system. Working out its value by studying a system's basic dynamics would then let physicists predict the value of q that then emerges in its statistics. Other theoretical work seems to support this possibility. For example, in a paper soon to appear in Physical Review E, physicist Alberto Robledo of the National Autonomous University of Mexico in Mexico City has examined several classic models that physicists have used to explore the phenomenon of chaos. What makes these models useful is that they can be tuned to be more or less chaotic - and so used to explore the transition from one kind of behaviour to another. Using these model systems, Robledo has been able to carry out Tsallis's prescription, deriving a value of q just from studying the system's fundamental dynamics. That value of q then reproduces intricate power-law properties for these systems at the threshold of chaos. "This work shows that q can be deduced from first principles," Tsallis says. While Robledo has tackled theoretical issues, other researchers have made the same point with real observations. In a paper just published, Leonard Burlaga and Adolfo Vinas at NASA's Goddard Space Flight Center in Greenbelt, Maryland, study fluctuations in the properties of the solar wind - the stream of charged particles that flows outward from the sun - and show that they conform to Tsallis's ideas. They have found that the dynamics of the solar wind, as seen in changes in its velocity and magnetic field strength, display weak chaos of the type envisioned by Tsallis. Burlaga and Vinas have also found that the fluctuations of the magnetic field follow power laws that fit Tsallis's framework with q set to 1.75 (Physica A, vol 356, p 275). The chance that a more comprehensive formulation of Tsallis's q entropy might eventually be found intrigues physicist Ezequiel Cohen of the Rockefeller University in New York City. "I think a good part of the establishment takes an unfair position towards Tsallis's work," he says. "The critique that all he does is 'curve fitting' is, in my opinion, misplaced." Cohen has also started building his own work on Tsallis's foundations. Two years ago, with Christian Beck of Queen Mary, University of London, he proposed an idea known as "superstatistics" that would incorporate the statistics of both Boltzmann and Tsallis within a larger framework. In this work they revisited the limitation of Boltzmann's statistical mechanics. Boltzmann's models cannot cope with any system in which external forces churn up differences such as variations in temperature. A particle moving through such a system would experience many temperatures for short periods and its fluctuations would reflect an average of the ordinary Boltzmann statistics for all those different temperatures. 
Cohen and Beck showed that such averaged statistics, emerging out of the messy non-uniformity of real systems, take the very same form as Tsallis statistics, and lead to power laws. In one striking example, Beck showed how the distribution of the energies of cosmic rays could emerge from random fluctuations in the temperature of the hot matter where they were originally created. Cohen thinks that, if nothing else, Tsallis's powers of probabilities have served to reawaken physicists to fundamental questions they have never quite answered. After all, Boltzmann's idea, though successful, was also based on a guess; Albert Einstein disliked Boltzmann's arbitrary assumption of "equal probabilities" and insisted that a proper theory of matter had to rest on a deep understanding of the real dynamics of particles. That understanding still eludes us, but Tsallis may have taken us closer. It is possible that, in his mysterious q entropy, Tsallis has discovered a kind of entropy just as useful as Boltzmann's and especially suited to the real-world systems in which the traditional theory fails. "Tsallis made the first attempt to go beyond Boltzmann," says Cohen. The door is now open for others to follow. ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org. For two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer. For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at panix.com Thu Sep 1 00:25:37 2005 From: checker at panix.com (Premise Checker) Date: Wed, 31 Aug 2005 20:25:37 -0400 (EDT) Subject: [Paleopsych] BBC: The fight against Holocaust denial Message-ID: The fight against Holocaust denial http://news.bbc.co.uk/go/pr/fr/-/1/hi/world/europe/4436275.stm Published: 2005/04/14 19:04:14 GMT [Even the BBC spells minuscule miniscule! Where can I get a book that lays out the arguments and counter-arguments on all sides, a book like "Scientists Answer Creationists" or "Scientists Answer Vellikowsky" (?sp). There are books by these approximate titles.] By Raffi Berg BBC News It is 60 years since the full horror of the Nazi Holocaust began to emerge with the liberation of Bergen Belsen concentration camp in Germany. Belsen was the first death camp entered by the Western allies and first-hand accounts of mass graves, piles of corpses and emaciated, diseased survivors spread quickly around the world.
The BBC's Richard Dimbleby described dead and dying people over an acre of ground, while US radio correspondent Patrick Gordon Walker described the camp as a "hellhole", adding that this was not propaganda but the "plain and simple truth". But, in the 21st Century, as these events recede into history and the number of Holocaust survivors dwindles, there are still people who deny these crimes happened - and it is a tendency that some experts say is growing. "Holocaust revisionism is spreading, and not only among neo-Nazis," Kate Taylor, of the anti-fascist publication Searchlight, told the BBC News website. "As survivors are increasingly dying out it is much easier to hijack history for whatever cause or purpose." COUNTRIES WITH LAWS AGAINST HOLOCAUST DENIAL: Austria, Belgium, Czech Republic, France, Germany, Israel, Lithuania, Poland, Slovakia, Switzerland. The internet has played a role in this. While publications peddling Holocaust denial were previously confined to the race-hate paraphernalia of extremist groups, the same material is now readily available on the web. One of the earliest and most infamous publications denying the Holocaust was a 32-page pseudo-academic booklet entitled Did Six Million Really Die?, first printed in England in 1974. It dismisses concentration camps as "mythology", rejects the Diary of Anne Frank as a hoax and claims Jews were not exterminated but rather emigrated from Nazi Germany with the help of a benevolent government. The booklet was widely banned but has resurfaced in electronic form on the internet. Kay Andrews, of the UK Holocaust Educational Trust, says Holocaust denial sites, subtly questioning the facts, can mislead the young people her group is trying to teach. "With the internet, you've got to be fairly well-educated to see through what revisionist websites are trying to do," she says. "I think as soon as you look at them closely you can work it out, but part of the problem that we find is teachers will send pupils off to do internet research and not guide them to specific sites. "So as a result kids put the Holocaust into a search engine, which comes up with all of this stuff, and at 14-years-old they are not mature enough to make that distinction between a denialist site and a more legitimate site." Denial doomed? However, the eminent British historian Sir Martin Gilbert believes the tireless gathering of facts about the Holocaust will ultimately consign the deniers to history. "I don't think Holocaust denial is really a problem because of the incredible state of survivor memoirs," he told the BBC News website. "The number of deniers and the amount of denial literature is miniscule compared with the serious literature, not only the memoirs but the history books, the specialist books, and books which cater for every age group on the Holocaust. "There is a tremendous range of stuff and some of it is written for young people and teenagers - in that sense the Holocaust deniers have totally lost out." Over a period of many years, Jerusalem's Yad Vashem museum has documented the lives of more than three million Holocaust victims. More recently, Steven Spielberg's Survivors of the Shoah [Holocaust] Visual History Foundation (VHF) has recorded more than 50,000 videotaped interviews with Holocaust survivors and witnesses.
Turning point But VHF president Doug Greenberg is less confident about the future than Martin Gilbert. On the positive side, he notes that in 2000 a British judge rejected a libel case brought by a notorious British revisionist, David Irving, against US historian Deborah Lipstadt who had called him one of the "most dangerous spokespersons for Holocaust denial". "The most important thing that's happened in terms of Holocaust denial is the David Irving trial," Mr Greenberg told the BBC News website. "Because a British court of law said in effect Holocaust denial is not a valid way to look at the past." On the other hand, he says, we just cannot tell how far history will be forgotten in years to come. "In 50 years from now, not only will there be no survivors alive, there won't be anybody alive who even knew a survivor, and that is where the real danger lies," he said. The fear that deniers could gain the upper hand led an SS camp guard, Oskar Groening, to break a lifetime of silence earlier this year in a BBC documentary, Auschwitz: The Nazis and the Final Solution. "I saw the gas chambers. I saw the crematoria. I saw the open fires. I was on the ramp when the selections [for the gas chambers] took place," said Mr Groening, now in his 80s. "I would like you to believe these atrocities happened - because I was there." From checker at panix.com Thu Sep 1 00:25:27 2005 From: checker at panix.com (Premise Checker) Date: Wed, 31 Aug 2005 20:25:27 -0400 (EDT) Subject: [Paleopsych] Patricia A. Williams: The Fifth R: Jesus as Evolutionary Psychologist Message-ID: Patricia A. Williams: The Fifth R: Jesus as Evolutionary Psychologist Theology and Science, Vol. 3, Issue 2 (2005), pp 133-43. [Responses appended.] The historical Jesus seems to have known about human nature as described by evolutionary psychology. He addresses the dispositions of human nature that evolutionary psychology says are central: resources, reproduction, relatedness (kinship), and reciprocity. In doing so he answers Aristotle's question, how can human beings flourish? His answer opens a window onto the divine. Patricia A. Williams is a philosopher of biology and philosophical theologian who writes full-time on Christianity and science. Her recent books include, Doing Without Adam and Eve: Sociobiology and Original Sin (2001) and Where Christianity Went Wrong, When, and What you Can do about it (2001). Her mailing address is PO Box 69, Covesville, VA 22931. Her e-mail address is theologyauthor at aol.com; website www.theologyauthor.com. ----------------- I have argued in Doing without Adam and Eve: Sociobiology and Original Sin that Christian doctrines of original sin are only partly true and that most Christian doctrines of the atonement are flatly false.1 These doctrines depend on the historicity of Adam and Eve, and science shows us that Adam and Eve cannot be historical figures. In my more recent book, Where Christianity Went Wrong, When, and What You Can Do About It,2 a work based on historical Jesus scholarship, I argued further that Jesus did not perceive his own death as a sacrifice for sin; indeed, he did not consider sacrifices for the forgiveness of sin necessary. Since these arguments undermine doctrines previously considered central to Christianity, they appear to make Jesus irrelevant. However, to draw that conclusion would be wrong. I argue here that Jesus is relevant at least in part because he is an astonishingly perceptive evolutionary psychologist. 
As such, he answers Aristotle's famous ethical question, "how can human beings flourish?" and offers us a window onto the divine. To answer Aristotle's question or, indeed, questions in ethics in general, requires a theory of human nature. We need to know who we are before we can figure out how to flourish. Now, for the first time in history, we have a scientific theory of human nature. It is evolutionary psychology. Evolutionary psychology emerged in the early 1970s. Now the subject has its own textbook,3 a plethora of laborers in its vineyard, and considerable empirical support. Moreover, evolutionary psychology is rooted in sociobiology, a scientific theory that has had great heuristic value and made successful predictions about the social behavior of other animals for almost 40 years. The four central concepts of evolutionary psychology derive from sociobiology and they are well established. They are the four R's of human nature and much of the rest of nature as well: resources, reproduction, relatedness, and reciprocity. Human nature's four Rs People pursue resources. To survive, all organisms must do so. People reproduce. To continue the existence of their gene line, all mortal creatures must do so. These two Rs, resources and reproduction, are essential to the continued existence of the organic world. For sexually reproducing organisms like ourselves, sex is also essential (although not for every individual). Relatedness as such is, of course, essential too. Reproduction produces related organisms, by definition. Here, however, relatedness refers to inclusive fitness, the concept at the foundation of sociobiology. In classic evolutionary theory, an organism is fit if it survives to reproduce. In sociobiology, fitness moves from the individual organism to include its close relatives. In inclusive fitness theory, organisms help close relatives survive to reproduce. The classic case is parents helping dependent offspring grow to maturity. Biologists from Darwin on knew that organisms also help organisms other than offspring, but they did not know why. Sociobiology explained why. Organisms help close relatives because close relatives carry copies of the helper's genes. It turns out that the most accurate way to view evolution is from the point of view of the gene, and the evolutionary goal is to get as many copies of one's genes into the next generation as possible. An organism does this by reproduction, certainly, but also by helping most those relatives most likely to be carrying copies of its genes. Many organisms know who their close kin are - by smell, by sight, by sharing a common nest or mother, and by chemical cues. Of course, none knows about genes. None needs to. All an organism needs to be able to do is to recognize and help its relatives. Inclusive fitness theory permits helping behavior that is not ultimately egocentric or even a hidden form of egocentricity. The helping organism need not expect help in return for its aid because its reward is built into the situation. In return for helping relatives, the organism gets more copies of its genes into the next generation. Of course, it does not know this, so it cannot be behaving selfishly. Finally, for organisms that can recognize individuals and remember them and their deeds of help and harm, reciprocity becomes salient. Reciprocity entails equal exchange and may occur between organisms that are not kin. Reciprocity is egocentric: the helper expects help in return and in an amount equal to the help given.
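[Aside, not drawn from Williams's text: the inclusive-fitness reasoning sketched above is conventionally formalized as Hamilton's rule, which in LaTeX notation reads r b > c - helping directed at kin can be favored by selection whenever the genetic relatedness r between helper and recipient, multiplied by the reproductive benefit b to the recipient, exceeds the reproductive cost c to the helper.]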
Few animals have the memories requisite to engage in reciprocity, but we do. We are creatures who reciprocate. Much of our lives are devoted to the exchange of goods and favors, and much of our justice system exists to enforce reciprocal relationships like contracts and marriages. The sense that reciprocity is justice underlies the legalization of the death penalty for murder. Few people doubt that the continued existence of the organic world on Earth is good and, so, logically the things that make its continuance possible are good. This entails that the four R's are good, but suppose we re-label them. If we engage in them too vigorously, the pursuit of resources becomes greed; of reproduction, lust; of relatedness, nepotism; and of reciprocity, justice for me and my group, to the exclusion of justice for you and your group. As the pursuit of resources is the most basic need of any organism, so greed is the simplest excess. It entails hoarding more than a person needs, sometimes to the direct detriment of the person, as when we eat to the point of obesity, but also to the detriment of society, as when the economic system is such that a few become ludicrously rich while the many remain poor. Lust is more complex, for it involves two sexes, and evolutionary psychology demonstrates that, because of their biological roles, male and female differ in their sexual desires. By definition, males produce smaller sex cells. This means that, with a few interesting but irrelevant exceptions, male organisms invest less in their offspring than females. In mammals such as ourselves, females make an additional investment, for they carry and nourish their offspring internally for a period, and then feed them milk their bodies make. With his small investment, the man can walk away from a pregnancy he has caused without great loss, even if his child dies, but the woman loses greatly if her child dies, for she has invested greatly. Usually her best evolutionary strategy is to continue investing until her child is able to take care of itself. The result of these differences is that men's best evolutionary strategy is to impregnate many women, whereas a woman's best strategy is to be impregnated by a healthy, prosperous man who will devote his resources to their children. The result after millions of years of evolution is lustful males and sexually cautious females, on average. Marriage complicates the picture further. If a man is to spend his adult years investing his resources in his wife's children (there are marriage systems where this is not the case, but they are not relevant here), he needs to be certain that they are also his children. Therefore, he must guard against his wife's adultery. Millions of years of evolution have produced jealous males who will punish women vigorously for adultery - sometimes brutally, sometimes fatally. Thus, evolution has burdened women doubly. On average, women invest more than men in offspring and on average men punish women more than they punish each other for adultery. Put simply, men lust more; women suffer more. Nepotism is even more complex, but it is easier to explain. Dependent children need their parents' special love and support in order to survive to adulthood, so special parental love is necessary and good. However, it does not end when the child becomes an adult; indeed parents continue to love their children more than they love other people's children - and therefore more than they love other people - for the life of the parent.
However, special love for adult relatives easily becomes nepotism (unfair favoritism). If pursued systematically, it becomes tribalism and, reversed, may result in discrimination against or even murder of non-relatives or members of other tribes. It can turn into genocide. Most complex of all is reciprocity. Although Aristotle knew nothing of sociobiology, he built much of his ethical theory on giving others their due. Being familiar with sociobiology, Richard D. Alexander4 and Matt Ridley5 both explicitly developed ethical theories that place reciprocity at the center of ethics. Yet reciprocity is a double-edged sword. It may call for justice in the abstract and justice for others, but often it cries for justice for myself, for my kin, for my group. Reciprocity endorses an eye for an eye. It recommends vengeance. It may also give rise to paranoid vigilance that keeps asking whether the exchange has really been equal. Was I cheated? Again? Greed, lust, nepotism, and justice exclusively for oneself and one's group are the main vices that spring from the four R's. However, the four R's produce virtues too. The virtue that uses resources is generosity, the ability to give resources freely to others. From the desire for reproduction, love springs, the sort of love that sweeps ego aside and encourages the lover to enhance the beloved's welfare. The reproductive desire results in love for people who are not close kin. The virtue founded on relatedness is love also, a steady love for relatives that we can transfer from relatives to all others by symbolizing all people as related. Reciprocity can beget friendships and other relationships of equality, a personal caring that does not keep a ledger of gain and loss. It, too, might develop into generosity and love. Thus, evolution has given us enormous potential for both good and evil, and it has provided a wide range of choices, from egocentricity that seeks the destruction of others to generosity and love that seek to further their welfare. We are remarkably flexible and free. That is the primary reason we find it so difficult to answer Aristotle's question about how to flourish. If we have such a range of desires and can engage in such an enormous number of activities, then which are those that best promote our flourishing? Therefore, to answer Aristotle's question, we need to know about the four Rs, which are the central themes of human nature. We need to recognize their centrality in our psychological makeup and to know their potential to lead us into vice and virtue. Finally, we need to grasp how best to handle them so that all people may flourish. Without knowing anything about Aristotle - not to mention evolutionary psychology! - these things are precisely what the historical Jesus knows, discusses, and enacts. Jesus and the simplest R's The figure known as "the historical Jesus" is neither the Jesus of the Gospels, who is many contradictory persons, nor the "real" Jesus, whoever that would be. Whoever it was, we cannot recover him now. The historical Jesus is a scholarly reconstruction that most Jesus scholars base primarily on the synoptic Gospels: Matthew, Mark, and Luke. Some scholars also use the Gospel of Thomas, which was recovered with the discovery of ancient documents at Nag Hammadi in 1945. All Jesus scholars also use other historical materials that inform them about the situation in Palestine from about 200 BCE to 100 CE.
These materials include Greek and Roman archives, the works of Josephus and other ancient scholars, the Hebrew Scriptures, Jewish intertestamental literature, the Dead Sea Scrolls, and the findings of archaeology. Most scholars exclude the Gospel attributed to John because they think it contains very little historical material going back to Jesus. As philosopher of historical methodology Raymond Martin notes in his book on the works of outstanding Jesus scholars, the historical Jesus scholars are professional historians doing expert work that meets the standards of modern scholarship.6 John P. Meier explains their methodology at length,7 and Funk, Hoover, and the Jesus Seminar explain it more succinctly while also laying out clearly how and why historians view the Gospels as they do.8 The Jesus I refer to is the scholars' reconstruction. The main effect of using their reconstruction here is to restrict the passages of scripture I discuss to those the scholars think go back to the historical Jesus. The historical Jesus perhaps says more about the use of resources than about any other subject. He speaks about resources in short sayings like "Do not worry about your life, what you will eat, or about your body, what you will wear" and "Consider the ravens: they neither sow nor reap, they have neither storehouse nor barn, and yet God feeds them" and "Consider the lilies, how they grow: they neither toil nor spin; yet I tell you, even Solomon in all his glory was not clothed like one of these" (Luke 12:22, 24, 27 NRSV). Jesus says God takes care of them and will take care of us. He says we spend too much time worrying and working over resources. He tells stories like that of the rich man and Lazarus (Luke 16:19-26). The rich man, who dressed and dined lavishly, ignored poor, sick Lazarus at his gate. After both die, the rich man finds himself in hell, staring at Lazarus in heaven. Here Jesus emphasizes our common humanity and the skewed distribution of resources in the ancient world, where the rich got rich by exploiting the poor. Those who neglect their less fortunate neighbors, who consider their own wealth a sign of their favor in God's sight and poverty and sickness signs of disfavor, are wrong. We live together, and we cannot flourish separately. Perhaps the most poignant of the stories scholars attribute to Jesus concerns an unnamed farmer who is blessed with such abundant harvests he decides to tear down his already full barns and build bigger ones to hold his burgeoning produce. Jesus calls him a fool, for he will die that night, and he cannot take his carefully conserved resources with him (Luke 12:16-20). Perhaps he should have considered the lilies and ravens or the suffering poor and used his wealth rather than hoarding it. The Gospels tell us little about what Jesus thought about sex. Moreover, Jesus scholars think the few sayings attributed to Jesus in the Gospels that deal with sex are not certain to go back to the historical Jesus. The main exception is Jesus' prohibition of divorce (Matt. 5:31-32). In Judaism in Jesus' day, women could not divorce their husbands, but husbands could divorce their wives, often for trivial reasons. At the same time, women depended on men for protection and income. A woman without a husband was in trouble, and a divorced woman was tarnished goods. Thus, Jesus' prohibition of divorce protected women. In a different culture, he might have offered different protection - say, equal pay for equal work or heavy penalties against men for spousal abuse.
The point of the prohibition is not that divorce is wrong but that women need protection from the power men want to exercise over them, as evolutionary psychology suggests. From the point of view of legality, Jesus' prohibition of divorce is inconsistent with his usual laxness about laws; legal consistency would expect him to allow divorce. However, his prohibition of divorce is consistent with his tendency to protect the disadvantaged, whether the poor, the sick, children, or women. In prohibiting divorce, Jesus was protecting women. His concern for the equality of women appears again in a story about a woman caught in adultery, currently recounted in the Gospel according to John (7:53-8:11). Although found in John, the narrative is not thought to be originally part of John's Gospel: the style is not John's and the passage is not in some of the earliest copies of John that we have. It also floats around in John and even shows up in early versions of other Gospels. Yet it is attested by early church historians and is consistent with other deeds and sayings known to come from the historical Jesus. Scholars think it probably goes back to Jesus. The narrative tells of some men bringing before Jesus a woman who, they claim, they have caught in the act of adultery. They ask whether she should be put to death by stoning, as the law required. Jesus replies that the sinless man should throw the first stone, and the men slowly depart, leaving the woman to live. This story fits evolutionary psychology perfectly. Evolutionary psychology says that men are more lustful than women are but, at the same time, they want to stop their women from committing adultery and may be brutal in order to do so. The story says the men caught the woman in the act. If so, they necessarily caught the man in the act as well, but he is nowhere to be found. The men want to punish only the woman, despite the fact that the Torah calls for the deaths of both parties (Lev. 20:10; Deut. 22:22). Jesus, knowing men's hearts, says okay, stone her if you are sinless, and the men retire, their own lust exposed. Again, Jesus is protecting women and making the battle between the sexes more equal than the men had wished. Less certain to go back to the historical Jesus is the story about the Samaritan woman with whom Jesus converses at the well in Samaria. The story is only in John's Gospel (4:5-42). Yet it is consistent with what scholars know about Jesus, who behaved in ways his society disapproved. He talks to the woman in public, not acceptable behavior for a Jewish man, and she is a Samaritan, member of a group that Jews despised. It turns out that she has had several "husbands," that is, lovers, but although Jesus knows this, he does not withdraw from the conversation. He does not appear to condemn her illicit sexual behavior. The other almost certain item scholars know about Jesus regarding sex is that he was celibate. Among Jews, whom the Torah commanded to be fruitful and multiply (Gen. 1:28), Jesus' celibacy might seem unlikely. However, many of the prophets were celibate, and John the Baptist and those members of the Jewish sect of Essenes who lived in the wilderness probably were as well. Yet Jesus never praises celibacy, and his leading disciple, Peter, is married (Mark 1:30).
Nothing more is known about Jesus' attitude toward reproductive relations except that he seems to have liked and protected children, and many women were among his followers and were active among the first generation of Christians, so he must have welcomed them into the group around him. Considering how aroused people get about sexual/reproductive relations the world over, Jesus seems amazingly calm and unperturbed. He calls a married man to be his leading disciple, yet remains celibate. He cares for children, yet has none of his own. He does not get excited about illicit sexual relationships, yet protects women from men's brutality toward them in the crucial issues of adultery and divorce. Indeed, concerning the two least complicated Rs, resources and reproduction, Jesus advises us to be at ease. About resources, he suggests we behave more like other animals, not worrying so much about the future but enjoying the fruits we have today. The prayer attributed to him says, "Give us this day our daily bread" (Matt. 6:11 NRSV) rather than asking for a good harvest to store away. Yet Jesus is not an ascetic. On the contrary, he parties enough to be accused of drunkenness and gluttony (Matt. 11:19). Jesus seems to steer a middle course, and this suggests that he is insufficiently attracted to this R either to pursue or to reject it. He uses resources without being possessed by them. His attitude toward reproduction is similar, except that he seems to have studied this chapter of his evolutionary psychology textbook even more carefully. Knowing of men's lust and their desire to control women's reproduction, brutally if necessary, he tries to protect and help women, making the reproductive relationships equal. Other than that, his attitude seems to be "take it or leave it." Again, he is insufficiently attracted to this R either to pursue or to reject it. Jesus and the other R's To understand Jesus on the other two R's - relatedness and reciprocity - requires some knowledge of Judaism in Jesus' day. The Jews had two ancient beliefs. They believed God had chosen them out of all the nations on Earth to be God's special people, and they believed God had promised them a particular piece of land, that it was God's holy land, and that they were to live on it and to cultivate it as their own. Yet in the first century, Jews were scattered across the Roman Empire and beyond, and Rome was sovereign over the holy land where Jews thought only God should reign. Most Jews who cried for justice wanted to drive Rome out of God's land, their land. A newer belief about chosenness invaded Judaism about the time of the Exile. Some Jews thought God had chosen only a remnant of the Jewish people and had doomed all other Jews. This remnant theology often included apocalyptic eschatology, the idea that the end of the age was near and that it would culminate in holy and devastating war led by God's messiah and fought by his angels and the holy remnant against the Romans and the condemned Jews. In the end, God would establish justice, that is, God would vindicate the remnant and destroy the other Jews and the gentiles who did not convert to worship of the Jewish God. Moreover, all twelve Jewish tribes, including the ten that had disappeared centuries ago, would assemble in the holy land along with the (good) Jews from the Diaspora. These exclusivist and violent beliefs caused three centuries of sporadic civil war among the Jews, when Jews murdered other Jews and called it God's justice.
The civil wars culminated in the Roman destruction of the Temple in 70 CE. Jewish themes, then, were land (resources), kinship (relatedness), and justice (reciprocity seen as self-vindication), all under the aegis of the one and only God. (Other Jews had apparently been reading evolutionary psychology, too.) Given Jewish circumstances, these themes provided a recipe for self-destruction. Self-destruction arrived via civil war and Roman exasperation. Jesus stepped into this stew as an itinerant preacher. His career began with John the Baptist (Mark 1:1-11) who was preaching by the Jordan River, announcing the forgiveness of sins through baptism. In this, John was not following Torah, which commanded sacrifices in the Temple for the forgiveness of sins. Jesus' indifference toward the Temple, symbol of Jewish chosenness, relatedness, and covenantal reciprocity with God, implies that he was not attracted to these Jewish themes. Relatedness, in particular, was not high on Jesus' list of sacred subjects. In an extremely well attested incident (Mark 3:31-35), Jesus was talking with his close disciples and friends when his mother and brothers approached and asked to see him. When Jesus' disciples told him his family was outside, Jesus not only refused to see them but also disowned them. He stated, instead, that his friends were his family. In so far as Jesus was unmarried, he also rejected the relatedness that comes with children and in-laws. As a good evolutionary psychologist, he knew that families are naturally hierarchical and promote nepotism. Jesus wanted to emphasize equality and the common kinship of all people. His emphasis on our common kinship stands in stark contrast to the Jewish claim that all Jews were related and special in God's sight because all were the offspring of one man, Abraham. Abraham, they claimed, was their father. Jesus referred to God as father, not Abraham. God, of course, in Jewish theology, is creator of all, the father of all people, not merely the Jews. Jesus tells stories about fathers in which the father represents God. In the story of the prodigal son (Luke 15:11-32), a younger son asks for his inheritance before his father dies, then goes off and violates Jewish law, finally tending pigs, animals the Torah calls unclean. He even envies the pigs. When, broke and hungry, he returns home hoping to become one of his father's servants, his father embraces him, forgives him, and throws a feast for him, much to the chagrin of the prodigal's elder brother who has faithfully remained home and served their father well. If this father represents God, Jesus is implying that God loves and saves the unfaithful as well as the faithful. This is not remnant theology. It is not reciprocity, either. Indeed, the father seems generous to a fault. Jesus seems well aware of the human desire for reciprocity and its offshoot, justice, and he constantly discourages seeking them. Well known are his short sayings denigrating the Torah reciprocity of an eye for an eye. The sayings suggest, instead, that if people batter one cheek, turn the other; if they sue for a coat, give them a cloak also, and if they force a person to go a mile, go two (Matt. 5:38-41). These statements all reject reciprocity. Jesus also tells stories that portray reciprocity and justice negatively. The prodigal son is one such story. It depicts the elder brother as wanting justice.
He is angry about his father's embrace of his brother, even after the father assures him that all the father has is his (Luke 15:31). He repudiates and perhaps envies the father's generosity even after the father tells him that he will lose nothing by it. An even more relevant story is that of the day laborers (Matt. 20:1-15). Jesus tells of a landowner who hires some laborers early in the morning and promises them a day's wage - a fair wage, probably, since they accept it. He hires others later, some as late as evening. When time comes to pay the laborers, he pays the late-hired a whole day's wage, and those hired earlier complain. The landowner wants to know what their complaint is. They received the agreed wage. The landowner did not cheat them. Nevertheless, they feel resentful. They expected reciprocity to be the rule the landowner would use to pay his workers. Instead, the landowner displayed generosity, and his generosity angered them. First century history tells who the angry figures represent. They typify the remnant theologians and their followers who expected God to repay their faithfulness with victory and vindication and condemn all the unfaithful, which is to say, all the Jews who disagreed with them. Repeatedly, Jesus rejected reciprocity in favor of generosity and forgiveness. The rabbis had suggested that a person should be forgiven three times. The Gospels report that Jesus recommended seven (Luke 17:4), a symbolic number standing for wholeness or completion. The most extreme report has Jesus saying to forgive 77 times (Matt. 18:22). A figurative doubling of completion or infinity seems to be implied. This is probably Matthew's emendation, but the idea of infinite forgiveness apparently goes back to Jesus. Jesus was wiser than those who want to make ethics center on reciprocity. He knew that placing reciprocity at the center of ethics generates ruinous results. Reciprocity justifies vengeance. It stifles generosity. It encourages self-centeredness, self-righteousness, and paranoia. Borrowing from the Torah, Jesus recommends a better way: love your neighbor; love, he says, is the heart of the Torah and the prophets (Matt. 22:39-40). Love is generous; love forgives; love helps others and casts out fear. In contrast, reciprocity is egocentric. Placing it at the center of ethics encourages people to guard their own interests and mistrust other people. In doing so, it leaves them lonely and fearful, and therefore they seek groups that emphasize conformity, enforce strict rules, and proclaim their own self-described goodness while denouncing outsiders' evil. Jesus knew such people and such groups - the remnant theologians and their followers. He looked around him and saw that a strong emphasis on reciprocity does not lead to a flourishing life. Yet Jesus embraced equality for the poor and powerless. The concept of egalitarianism might spring from reciprocity, but they are not the same. Jesus seems to think that the rich might give to the poor without asking return, and husbands might treat their wives with the same equality they offer to their fellow men. Jesus and the Divine To say that Jesus was an excellent evolutionary psychologist is not to claim that he knew anything about evolution. He was probably a typical Palestinian Jew of his time in his knowledge of the world. He would have known the Torah said God created the world in six days and created Adam and Eve as the first human beings.
Jesus probably would not have known much history except as the Hebrew Scriptures represent it, and he would have known no science. Nonetheless, he had remarkable insights into human nature as evolutionary psychology discloses it and profound solutions on how to cope with it, based on compassion, especially for the powerless. His slogan might have been "equality, not reciprocity," which amounts to generosity by those who have power and wealth to those who have neither. Jesus represents God's generosity this way: God gives without requiring return. The Gospels tell us that the divine touched Jesus at his baptism and, after that, he exorcised the possessed, healed the sick, and forgave sinners. Josephus, too, says Jesus was "a doer of wonderful works." 9 Wonderworkers were said to work by divine agency, and there seems little doubt that Jesus was close to God, filled with the divine, a "spirit person," to use historian Marcus Borg's term. Jesus himself felt he was close to the divine. He prayed frequently, sometimes all night, and he called God "father." His insights into human nature and his solutions to the problems it poses for human flourishing probably came from the divine source. If so, Jesus may be for us a window onto the divine. Jesus spoke of love, generosity, and forgiveness. In doing so, he spoke of the nature of God. Christian atonement theology has claimed that Jesus died on the cross as a sacrifice for sins. Jesus, it claims, died to satisfy God's need for justice - a God, it also claims, who has no needs. An innocent man had to die to pay for the sins of the guilty because God required that justice be done. Such is atonement theology. It does not take much insight into the nature of justice to grasp the injustice of killing the innocent to forgive the guilty. The God who allegedly commanded such a deed ruled by reciprocity and had a stingy soul. This is not Jesus' God. Jesus says that God is generous, so generous it angers those whose ethics rest on reciprocity. God is not a God of reciprocity, of contracts and covenants. Nor, according to Jesus, does God demand sacrifice for the forgiveness of sins. The Gospels never show Jesus sacrificing at the Temple. They introduce him as a disciple of John the Baptist, who does not sacrifice at the Temple either. Instead, John baptizes for the forgiveness of sins. Jesus, too, forgives sins without requiring sacrifice - or even baptism. Jesus did not think God requires sacrifices in order to forgive sins. Indeed, Jesus says God gives us what we need when we ask for it. In one of his stories, he tells of an evil judge whom a widow importunes so strenuously he decides her case in her favor (Luke 18:1-5). The story is about an evil judge, not a good one, and yet when asked, he gives what is wanted. How much more, then, would Jesus' God, a generous, fatherly God, give what we ask, including forgiveness? In summary, the historical Jesus was an evolutionary psychologist who told us how to flourish in a world where human beings evolved, yet where divinity pervades human life. We flourish, he says, not by egocentricity, with its greed, lust, nepotism, and self-seeking justice, but by love, with its generosity and forgiveness. Since greed and generosity, egocentricity and love arise from the four R's, we have the capacity to choose greed or generosity, egocentricity or love. Jesus asks us to choose love, to act like God rather than like evolved creatures caught in evolutionary overdrive.
Jesus says not to be so self-concerned, so harried, and so vigilant. The fifth R, he says, is "Relax."

Notes

1. Patricia Williams, Doing without Adam and Eve: Sociobiology and Original Sin (Minneapolis: Fortress Press, 2001). 2. Patricia Williams, Where Christianity Went Wrong, When, and What You Can Do About It (Philadelphia: Xlibris, 2001). 3. David M. Buss, Evolutionary Psychology: The New Science of the Mind (Boston: Allyn and Bacon, 1999). 4. Richard D. Alexander, Darwinism and Human Affairs (Seattle: University of Washington Press, 1979). 5. Matt Ridley, The Origins of Virtue: Human Instincts and the Evolution of Cooperation (New York: Penguin Books, 1996). 6. Raymond Martin, The Elusive Messiah: A Philosophical Overview of the Quest for the Historical Jesus (Boulder, Colorado: Westview Press, 1999). 7. John P. Meier, A Marginal Jew: Rethinking the Historical Jesus. Vol. 1, Roots of the Problem and the Person (New York: Doubleday, 1991). 8. Robert W. Funk, Roy W. Hoover, and the Jesus Seminar, The Five Gospels: The Search for the Authentic Words of Jesus (San Francisco: Harper San Francisco, 1993). 9. Flavius Josephus, Antiquities of the Jews, XVIII:3, in The Complete Works of Josephus (Grand Rapids, Mich.: Kregel Publications, 1981).

---------------------------------

A Response to Patricia A. Williams' "The Fifth R: Jesus as Evolutionary Psychologist" Richard F. Carlson and Jason N. Hine

We wish to thank The Rev. Bill Maury-Holmes for his insightful suggestions in the preparation of this manuscript. Richard F. Carlson is Research Professor of Physics at the University of Redlands. He is editor of the book Science and Christianity: Four Views (2000). Jason N. Hine has worked in the area of science and Christian faith for a number of years. Recently he co-led the seminar "What Can We Teach Our Children About Dinosaurs?" Patricia A. Williams' essay centers on her assertion that the "historical Jesus" (as defined by the work of the Jesus Seminar) exhibited personal characteristics consistent with an understanding of human nature as described by evolutionary psychology. This relatively new enterprise describes human characteristics in terms of David Buss' four "R's": Resources, Reproduction, Relatedness (kinship), and Reciprocity.1 After showing that each R generally contains a spectrum of characteristics, Williams attempts to identify Jesus' position along each spectrum by citing incidents and sayings from the Gospels. We have a few quibbles that we will mention here but not pursue. Williams states that she uses the results of the Jesus Seminar in her characterization of Jesus.2 Yet over half of her Gospel references have been given gray or black classifications by the Seminar (gray or black implying that the sayings in question are most likely not Jesus' words). Two other quibbles relate to Williams' statement that Jesus did not perceive his own death as a sacrifice for sin and her comments on Christian atonement theories. Each of these is worthy of a response, but we have chosen to concentrate on Williams' evaluation of Jesus' character in terms of Buss' four R's. We see Williams' essay as a useful, interesting, and fanciful way to view Jesus. However, we wish that she had followed her own ideas just a bit further. By successfully demonstrating how Jesus' character is consistent with evolutionary psychology, Williams places him in a box of dimensions specified by the four R's. We feel that Jesus' character surpasses the four R's in a number of remarkable ways.
While Williams briefly explores intimations regarding the divinity of Jesus in the final section of her article, we find her presentation to be inadequate. Our goal is to highlight areas where we would like to have seen Williams take her ideas further. We will refer to much of the same evidence as used by Williams from the Gospels. In some cases, we provide additional evidence from the Gospels, which for the most part falls under Jesus Seminar categories of red or pink (most likely the sayings of Jesus) or occasionally gray (probably not said by Jesus but close to his ideas).3 As does Williams, we will use black references in a very limited way (black, in the opinion of the Jesus Seminar, implies that Jesus did not say it, as it represents the perspective or content of a later or different tradition).4 In doing so we hope as much as possible to compare oranges to oranges (maybe we should say red grapefruit to red grapefruit). Our understanding is that, when presented with earthly problems, Jesus succeeded in incorporating God's will in his response. Another way of putting this is that Jesus' response was both horizontal (human to human) and vertical (human to God). As indicated by Williams, Jesus' response to every situation was based on "the unmatched quality of God's love, generosity, and forgiveness." 5 The problem is we feel that Williams could have done more to demonstrate this when considering his responses to people or situations. In her discussion of Jesus and his attitudes to the issue of Reproduction (one of the four R's), Williams cites the account of the adulterous woman.6 The religious leaders brought the adulterous woman to Jesus thinking that there were only two possible ways he might respond ? either uphold the Law and condemn the woman to death, or allow her to live and thereby break the Law. However, Jesus' response did not come from among this set; rather his action was profoundly perceptive, wise, and loving. Williams claims that Jesus' intention was to provide protection for women by exposing the lust in the woman's accusers. We agree that this is the main thrust of the narrative. Clearly, Jesus cared for and forgave the adulterous woman, and one may infer from this that Jesus cares for all women. However, more than this, Jesus' response also demonstrated care for the woman's accusers?he did not seek to humiliate them but rather his response served as an invitation to engage in serious self-reflection, and thus the door was left open for any of the accusers to come to Jesus later. Further, Jesus' action here would have likely had a similar effect on each woman and man in the crowd. Even today, his response invites personal reflection, illuminates our shared struggle with sin, and demonstrates the love of God through what is termed "grace"?the free and divine gift of mercy, acceptance, and favor. Hence, we feel that Jesus' approach stretches the scope of what evolutionary psychology considers possible. The next R we examine is Relatedness or kinship. Williams, in asserting that "Relatedness, in particular, was not high on Jesus' list of sacred subjects," 7 cites an "extremely well-attested incident (Mark 3:31 ? 35)",8 a passage rated as gray by the Jesus Seminar scholars. Here Williams sees Jesus as rejecting his family. Referring to his family, she states, "Jesus not only refused to see them but also disowned them." 9 Yes, it is possible to infer from this that Jesus is rejecting his family here. 
However, our understanding, supported by Williams herself several sentences later, is that Jesus was expanding on what he considers his true family to be: "Whoever does the will of God is my brother and sister and mother" (Mark 3:35, NRSV). Elsewhere, in Mark 7:9-13 (black by the Jesus Seminar), Jesus affirms the command to "honor your father and mother" (Matt. 19:19, gray) by condemning the Pharisees' and scribes' use of the Corban offering in order to relieve themselves of the obligation to support their parents. Like Williams, in the Gospels we too see a consistent theme of Jesus' concern for and acceptance of society's rejects, e.g. the blind beggar, the Samaritan woman, the prostitute, tax collectors, in short, the "lepers" of that society. We conclude that an expanded view of relatedness was very high on Jesus' list of sacred subjects, again in line with but stretching the conceptual boundaries of evolutionary psychology in a way that provides us a glimpse of God's all-inclusive love. We next turn to Williams' treatment of the story of the prodigal son (Luke 15:11-32, pink by the Jesus Seminar) and to other Gospel examples she cites in her discussion of another R, Reciprocity.10 Here we affirm Williams' conclusion that, in terms of relationships with others, Jesus rejected reciprocity and instead constantly exhibited extreme generosity, forgiveness, friendship, and love in his teaching and his relationships with a wide array of people. In terms of the fourth R, Resources, we disagree with Williams' characterization of Jesus as being "at ease" and "not worrying"11 about resources. On the contrary, we see Jesus as one who was concerned about the wise and generous use of resources (e.g. see Matt. 25:14-28, pink, and Mark 10:17-22, gray). We feel that Jesus' command to "not worry" (Luke 12:29, gray) about resources is to be understood as an important step in seeking God's kingdom (Luke 12:31, black), a proper prioritization of Relatedness vs. Resources, not, as Williams puts it, a general indifference toward resources on the part of Jesus. In closing, we feel that Patricia Williams is addressing a topic of crucial importance: understanding the person of Jesus. This is crucial, because we feel that our clearest understanding of God is through the person of Jesus. In addition, we feel Williams is moving in a helpful direction as she relates the insights of evolutionary psychology to the historical Jesus in a way we see as light-hearted, yet full of opportunities for greater insight into the divine. Jesus not only goes beyond the horizontal (human to human) categories of the four R's, but he also exhibits a vertical (human to God) aspect of his character that stretches the boundaries of evolutionary psychology toward the positive extremes exhibited by God through Jesus. We hope that Williams and others will continue to explore these new ideas further.

Notes

1. David M. Buss, Evolutionary Psychology: The New Science of the Mind (Boston: Allyn and Bacon, 1999). 2. Williams' essay above, 136. 3. Robert W. Funk, Roy W. Hoover, and the Jesus Seminar, The Five Gospels: The Search for the Authentic Words of Jesus (New York: Macmillan, 1993), 36. 4. Ibid. 5. Williams' essay, 142. 6. Ibid., 138. 7. Ibid., 140. 8. Ibid. 9. Ibid. 10. Ibid. 11. Ibid., 139.

---------------

Was Jesus an Evolutionary Psychologist? Joshua M. Moritz

Joshua M. Moritz is a Ph.D. student in Theology and Science at the Graduate Theological Union, Berkeley, and Managing Editor of Dialog: A Journal of Theology.
His undergraduate and professional background is in evolutionary biology and paleoanthropology. In her article "The Fifth R: Jesus as Evolutionary Psychologist," Patricia Williams casts Jesus in the role of a bio-psychological counselor and seer whose understanding of human nature turns out to be precisely that of the modern field of evolutionary psychology. There is no latent anachronism here, but rather, Williams is pointing out that the Jesus of history understood what makes human beings get up in the morning, what drives us, and what makes us tick. According to Williams, evolutionary psychology posits four primary factors that motivate and orient the vast majority--if not all--of human behaviors: resources, reproduction, relatedness, and reciprocity. The historical Jesus, as she understands him, addressed each of these areas of human life, and in so doing revealed a remarkable intuition, which parallels the findings of sociobiology and evolutionary psychology. Such intuition, concludes Williams, was indeed a product of Jesus' connection with the Divine, and through this connection, he revealed to his followers the egalitarian nature of God. His teachings about this God may empower human beings in the present to establish egalitarian communities and enable them to flourish. In this article, I wish to briefly respond to Williams' essay and her use of evolutionary psychology and sociobiology as they relate to theological anthropology. To begin with, I want to express my appreciation for Williams' work in this area. She has consistently pointed out the difficulties which modern evolutionary biology poses for many classical Western Christian doctrines--such as atonement theology's reliance on "the Fall without the Fall," 1 the doctrine of original sin based on the combination of Lamarckian inheritance and a historical fall, and the problem of evil.2 These problem areas, which Williams develops should be preeminent as constructive theology continues to strive to make itself intelligible in a world dominated by scientific self-understanding. I also am grateful for Williams' attempts to integrate constructively the work of sociobiology and evolutionary psychology into theological anthropology, and her subsequent reformulation of various ancient Christian doctrines in light of these disciplines, which pronounce much on human nature. Her theological engagement with these bio-psychological fields is refreshing because there has been a tendency in the humanities to make light of the findings of evolutionary psychology and sociobiology, and to construct their ideas into caricatures and straw men that are then easily vanquished.3 This happens even though many scholars in the Philosophy of Biology maintain that sociobiology and evolutionary psychology are legitimate extensions of the Neo-Darwinian theoretical framework.4 That being said, I would like to raise several questions and concerns with Williams' essay and her related work. While I agree with Williams in her acceptance of the basic guiding principles of evolutionary psychology--that it is very likely that certain heritable and adaptive human behaviors have been honed by natural selection, and that there are specific cognitive mechanisms resulting from evolution by natural selection which underlie such human behaviors--I must question Williams' uncritical acceptance of the opinions that are championed by these disciplines. 
Williams treats evolutionary psychology and sociobiology as though they 'have arrived' despite the large number of sympathetic yet valid critiques of these fields.5 Among other things, sociobiology and its descendant evolutionary psychology have been criticized on account of their genic selectionism, genetic reductionism, determinism, and atomism,6 their assumption of massive modularity in the brain, their hyper-adaptationism,7 and their confusion regarding moral categories.8 I have not found any citation of such criticisms in Williams' work on this subject. She only goes so far as to mention that there is controversy surrounding sociobiology "because it applies to us," and "because some sociobiologists have been inept with metaphors, sowing considerable confusion."9 There is no discussion of the more fundamental criticisms of the methodological and biological assumptions of evolutionary psychology and sociobiology's practitioners. Evolutionary psychology and its predecessor sociobiology claim that humans have a generic nature and that this nature is rooted in our biology--particularly in our genes. Since our genes, as they have evolved to adapt to a specific environment, are the foundation and unconscious directors of our behavior, such behaviors should be seen in light of the ultimate evolutionary purpose and goal of our genes--namely "to get as many copies of one's genes into the next generation as possible." 10 Contained in this ambiguous behavioral inheritance bequeathed to us by our genes are predispositions in the vast majority of humans towards murder, infanticide, child abuse,11 divorce, infidelity,12 pornography,13 xenophobia,14 treating women as commodities,15 rape,16 and even genocide.17 To ensure that each sex gets its maximal fitness reward, calculated in genes that make it to the succeeding generation, men are by nature sexually promiscuous and competitive, and women are by nature "coy" and parentally nurturing.18 When our "selfish genes" are in the driver's seat, such is to be expected, and while exceptions may exist, they are just that--exceptions. Cue Jesus. Into such a world of ethically sordid genetic predispositions embodied in immoral animals enters the historical Jesus who, in effect, tells humans to live contrary to their genetically inherited nature. Jesus calls us to "deny ourselves" and in so doing deny to our genes the fitness rewards which they so fervently long for. For the sake of the Kingdom of God, we must be willing to minimize our inclusive fitness and forsake those who share the greatest percentage of our own genes. In fact, our genes are not to be seen as more important than the genes of a total stranger--even those of an unrelated Samaritan or Gentile. We are to spend our precious resources on those who offer us no fitness benefits whatsoever: widows past reproductive age, orphans who are not our kin, the poor who cannot benefit us materially, the sick--who may even harm our own health and fitness potential, and prisoners--who cannot be trusted to reciprocate. Men are called to resist the urge to "diversify their genetic portfolio" and women are called to trust in God for their material resources rather than in their husbands or mates.19 Humans are, in fact, asked to adopt an extremely unstable evolutionary strategy by throwing out reciprocity altogether: "give to everyone who asks of you, and whoever takes away what is yours, do not demand it back and lend, expecting nothing in return."
20 The road of the cross, which the life of Jesus paves for those who would follow, is a sure evolutionary dead-end--the ultimate self-extinction event. Williams says that such behavior and the wisdom of Jesus "fits evolutionary psychology perfectly,"21 but what does she mean by this? If she means that Jesus understands human nature as it is perceived at the tail end of our evolution and that he calls us to resist the very same dark tendencies bequeathed to us by evolution, then she is right. Christian morality demands a "revolution or a reversal of those priorities" which are given to us by nature.22 Where does such moral courage come from if it is not within human nature? Is it pure grace from the realm of the Divine that actually alters our evolved nature? Or is it an effort of the will which is transformed once one is encountered by the life and example of Jesus? Either answer poses a dilemma for evolutionary psychology because both behavioral scenarios are outside of its explanatory purview. If we are altered by super-nature, then the categories of nature are no longer adequate. Alternatively, if human nature has enough behavioral wiggle room so that humans may act in ways which are not genetically predisposed, and even in ways directly contrary to our genetic predispositions, then such evolutionary psychological talk of genetic predispositions loses its scientific scope and robustness. Evolutionary psychology seeks to explain altruistic behavior in terms of inclusive fitness in the context of Evolutionarily Stable Strategies, but such explanations lose their relevance when the object of investigation thrusts aside the "things of this world" to pursue an eschatologically stable strategy instead. Deeper than this dilemma, though, is evolutionary psychology's foundational assumption of the "selfish genes" view of evolved biological reality. This "gene's-eye view" of evolution, which Williams presupposes,23 is far from being a safe assumption. In fact, this is precisely the area where many biologists from various sub-disciplines find the most intractable problems relating to the future direction and success of evolutionary research.24 There is a growing consensus that there is a variety of levels of selection in evolution.25 The notion that "naked genes" are the target or primary level of selection, while at first broadly accepted, has since been "severely criticized, and even its original supporters have now moderated their claims." 26 Such genic selectionism, which is fundamental to the explanatory framework that undergirds evolutionary psychology and its theory of inclusive fitness, is also called into question by genetic pleiotropy27 and "the interaction of genes controlling polygenic components of the phenotype."28 Furthermore, investigations into the roles played by symbiosis,29 self-organization,30 neutral evolution,31 historical and developmental constraints,32 epigenetics,33 and generic principles in evolution34 have demonstrated that other forces are at work both in the generation of evolutionary novelty and in the way in which biological information is inherited. Natural selection and the genocentrism it entails are no longer the sole fiddler bowing the tune of evolutionary change, but now appear to be joined by a symphony of other evolutionary mechanisms, each playing at different tempos and in different keys.

Conclusion

These developments, when taken together, pose a serious obstacle to the future advance of any general theory of evolutionary psychology.
While an evolutionary psychology is certainly still possible, it will have to be a much-mediated evolutionary psychology, one that can no longer speak of a generic human nature as such but rather must aim to describe only those elements of human nature that have a definite genetic correlate. Occasions of altruism in nature will no longer create a research problem for this epistemically humbled and less imperialistic evolutionary psychology, and the moral "performance gap"35 between what we are and what we ought to be will lose much of its mysterious quality when considered within a thoroughly supplemented and expanded Neo-Darwinism. Was the historical Jesus an evolutionary psychologist? He certainly knew enough about human nature to know that selfish motives--if not always selfish genes--orient much of our behavior. Jesus was also familiar, however, with the nature of the Divine, and he knew enough about God's nature to recognize that the One in whose image humans have been made is not far from us when we walk by faith.

Notes

1. A phrase coined by Robert John Russell. For Russell's discussion of the problem of "Fall without the Fall" see Robert J. Russell, "Theology and Science: Current Issues and Future Directions," 2000, Part II, Section E, Redemption, Evolution and Cosmology, http://www.counterbalance.net/rjr/erede-body.html. See also Robert J. Russell, "Is Evil Evolving?" Dialog: A Journal of Theology 42:3 (Fall 2003): 311. For Williams' discussion see Patricia Williams, Doing without Adam and Eve: Sociobiology and Original Sin (Minneapolis: Fortress Press, 2001); and Patricia Williams, "Sociobiology and Original Sin," Zygon 35:4 (Dec 2000). 2. Patricia Williams, "Evolution, Sociobiology, and the Atonement," Zygon 33:4 (1998); Patricia Williams, "The Problem of Evil: A Solution from Science," Zygon 36:3 (2001). 3. Such critiques of sociobiology and evolutionary psychology where the actual views of these disciplines are exaggerated or misrepresented are Richard C. Lewontin, Steven P. R. Rose, and Leon J. Kamin, Not in Our Genes: Biology, Ideology, and Human Nature (New York: Pantheon Books, 1984); and Hilary Rose and Steven P. R. Rose, Alas Poor Darwin: Arguments Against Evolutionary Psychology (New York: Harmony Books, 2000). For a critical review of the latter, which points out several misreadings of evolutionary psychology, see Daniel Jones, "Alas Poor Higgs," British Medical Journal, 322 (24 March, 2001), 740ff. http://bmj.bmjjournals.com/cgi/eletters/322/7288/740#13672. 4. See, for example, Michael Ruse, "I see sociobiology, the study of animal social behavior from an evolutionary perspective, as a natural and an unforced growth and development from orthodox and established neo-Darwinian evolutionary biology. This being so I suggest that because neo-Darwinian biology is a genuine and fruitful branch of science, the respect that it deserves should automatically be transferred to sociobiology." Quoted in Peter Saunders, "Sociobiology: A House Built on Sand" in Evolutionary Processes and Metaphors, Mae-Wan Ho and Sidney W. Fox, eds. (New York: Wiley, 1988), 290. 5. See Kim Sterelny and Paul E. Griffiths, Chapter 13 in Sex and Death: An Introduction to Philosophy of Biology (Chicago: University of Chicago Press, 1999); Philip Kitcher, Vaulting Ambition: Sociobiology and the Quest for Human Nature (Cambridge, Mass.: MIT Press, 1985); The Evolution of Minds: Psychological and Philosophical Perspective, Paul Davies & Harmon Holcomb, III, eds.
(Norwell, MA: Kluwer Academic Publishers, 2001); Jaak Panksepp and Jules B. Panksepp, "The Seven Sins of Evolutionary Psychology," Evolution and Cognition, 6:2, 108 ; Elisabeth A. Lloyd, "Evolutionary Psychology: The Burdens of Proof", Biology and Philosophy 14 (1999): 211 ? 233; Paul E. Griffiths, 'Evolutionary Psychology' in The Philosophy of Science: An Encyclopedia, Sahotra Sarkar and Jessica Pfeifer eds. (New York: Routledge, 2005). For a criticism that aims at some of evolutionary psychology and sociobiology's more foundational assumptions see Peter Saunders, "Sociobiology: A House Built on Sand." 6. See David Depew and Bruce Weber, Darwinism Evolving: Systems Dynamics and the Genealogy of Natural Selection (Cambridge, MA: MIT Press, 1995) 374 ? 378. 7. Stephen J. Gould, "More Things in Heaven and Earth" in Alas Poor Darwin. 8. David Sloan Wilson, Eric Dietrich, and Anne B. Clark, "On the Inappropriate Use of the Naturalistic Fallacy in Evolutionary Psychology," Biology and Philosophy 18 (2003): 669 ? 682. 9. Williams, Doing Without Adam and Eve, 124. 10. Williams, this issue, 134. 11. Martin Daly and Margo Wilson, Homicide (New York: Aldine, 1988). 12. Helen Fisher, The Anatomy of Love: The Natural History of Monogamy, Adultery, and Divorce (New York: Norton, 1992). 13. "Evolution has built into every red-blooded male a desire to find 'Pornotopia'--the fantasy land where 'sex is sheer lust and physical gratification, devoid of more tender feelings and encumbering relationships, in which women are always aroused, or at least easily arousable, and ultimately are always willing' (Symons, p. 171). The entire cosmetics, fashion, and pornography industries are attempts to create Pornotopia here on Earth". Frank Miele, "The (Im)moral Animal: A Quick & Dirty Guide to Evolutionary Psychology & the Nature of Human Nature," Skeptic 4:1 (1996): 42 ? 49. See also David Buss, The Evolution of Desire (New York: Basic Books, 1994), 49 ? 60. and Donald Symons, The Evolution of Human Sexuality (Oxford: Oxford University Press, 1979), 187 ? 200. 14. Edward O. Wilson, Consilience: The Unity of Knowledge (New York: Knopf, 1998), 253 ? 54. 15. Daly and Wilson, Homicide 188 ? 189; Edward O. Wilson, On Human Nature (Cambridge: Harvard University Press, 1978), 126. 16. Randy Thornhill and Craig Palmer, The Natural History of Rape: Biological Bases of Sexual Coercion (Cambridge, MA: MIT Press, 2000). 17. John Alcock, The Triumph of Sociobiology (New York: Oxford University Press, 2001), 144 ? 146. 18. Martin Daly and Margo Wilson, Sex, Evolution and Behavior (Boston: Willard Grant, 1983), 78 ? 79.; Robert L. Trivers, Social Evolution (Menlo Park, CA: Benjamin/Cummings, 1985), 207; Carl-Adam Wachtmeister and Magnus Enquist, "The Evolution of the Coy Female ? Trading Time for Information," Ethology 105:11 (November 1999): 983 ? 992. 19. Frank Miele, "The (Im)moral Animal," 43; See Jesus' response to "Is it lawful to divorce for any reason?" Matt 19:3 ? 12, and see Mark 10:2 ? 12 and John 4. 20. Luke 6:30 ? 35. 21. See Williams, this issue, 138. 22. John Hare, "Is There an Evolutionary Foundation for Human Morality?" in Evolution and Ethics: Human Morality in Biological and Religious Perspective (Grand Rapids, MI: Eerdmans, 2004), 190. 23. Williams maintains "that the most accurate way to view evolution is from the point of view of the gene" (this issue, 134). She thus appears to adhere to the genic selectionism of G. C. Williams, W. D. Hamilton, and Richard Dawkins. 24. 
See Gertrudis Van de Vijver, Linda Van Speybroeck, and Dani De Waele, "Epigenetics: A Challenge for Genetics, Evolution, and Development?" Annals of the New York Academy of Sciences 981 (2002): 1-6. 25. Stephen Jay Gould and Elisabeth A. Lloyd, "Individuality and Adaptation Across Levels of Selection: How Shall We Name and Generalize the Unit of Darwinism?" Proceedings of the National Academy of Sciences USA 96:21 (October 1999): 11904-11909. 26. Ernst Mayr, "The Objects of Selection," Proceedings of the National Academy of Sciences USA 94:6 (March 1997): 2091-2094. 27. This is where multiple, often seemingly unrelated, phenotypic effects are caused by a single altered gene or pair of altered genes. See Jonathan Hodgkin, "Seven Types of Pleiotropy," International Journal of Developmental Biology 42 (1998): 501-505. 28. Ernst Mayr, "The Objects of Selection," 2092. 29. Lynn Margulis, "Symbiogenesis and Symbioticism," in Symbiosis as a Source of Evolutionary Innovation: Speciation and Morphogenesis, Lynn Margulis and René Fester, eds. (Cambridge, MA: The MIT Press, 1991). 30. Stuart A. Kauffman, "Self-Organization, Selective Adaptation and its Limits: A New Pattern of Inference in Evolution and Development," in Evolution at the Crossroads: The New Biology and the New Philosophy of Science, David J. Depew and Bruce H. Weber, eds. (Cambridge, MA: MIT Press, 1985), 184-185; and David Depew and Bruce Weber, Darwinism Evolving: Systems Dynamics and the Genealogy of Natural Selection (Cambridge, MA: MIT Press, 1995), 446. 31. Motoo Kimura, "Recent Development of the Neutral Theory Viewed from the Wrightian Tradition of Theoretical Population Genetics," Proceedings of the National Academy of Sciences USA 88:14 (July 1991): 5969-5973; Motoo Kimura, "Evolutionary Rate at the Molecular Level," Nature 217 (Feb 1968): 624-626; Motoo Kimura, "The Rate of Molecular Evolution Considered From the Standpoint of Population Genetics," Proceedings of the National Academy of Sciences USA 63:4 (August 1969): 1181-1188. 32. For the historical constraints see Stephen J. Gould and Richard C. Lewontin, "The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme," Proceedings of the Royal Society of London, Series B, 205:1161 (1979): 581-598. For a discussion of Developmental Systems Theory see Susan Oyama, Paul E. Griffiths, and Russell D. Gray, Cycles of Contingency: Developmental Systems and Evolution (Cambridge, MA: MIT Press, 2001). 33. See Van de Vijver, Van Speybroeck, and De Waele, "Epigenetics: A Challenge for Genetics, Evolution, and Development?" For a critique of the selfish genes understanding of evolution from an epigenetic standpoint see especially Richard von Sternberg, "On the Roles of Repetitive DNA Elements in the Context of a Unified Genomic Epigenetic System," Annals of the New York Academy of Sciences 981 (2002): 154-188. See Eva Jablonka and Marion J. Lamb, Epigenetic Inheritance and Evolution: The Lamarckian Dimension (Oxford: Oxford University Press, 1995). 34. See Ricard Solé and Brian Goodwin, Signs of Life: How Complexity Pervades Biology (New York: Basic Books, 2000); and Simon Conway Morris, Life's Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003). 35. For a discussion of the gap between what we actually do and what morality demands of us see John Hare (cited above).

--------------------

Jesus and Evolutionary Psychology, Two Agendas Howard J. van Till

Howard J.
Van Till is Emeritus Professor of Physics and Astronomy, Calvin College, Michigan, USA. His works include Portraits of Creation: Biblical and Scientific Perspectives on the World's Formation (1990) and The Fourth Day: What the Bible and the Heavens are Telling us about Creation (1986). Patricia A. Williams posits the provocative thesis that the historical Jesus' knowledge of human nature--as he experienced it and engaged it 2000 years ago--closely matches the understanding of human nature now offered by evolutionary psychology. This thesis does not entail any frivolous conjectures that Jesus was supernaturally informed about biological evolution or about a scientific psychology based on evolutionary considerations. Rather, the thesis posits that the historical Jesus, a typical Palestinian Jew in his knowledge of the world and unaware of anything resembling modern science, nonetheless "had remarkable insights into human nature as evolutionary psychology discloses it." As I see it, this is a reasonable and modest thesis that can be tested by a comparison of what we know (or at least have good reason to believe) about Jesus' perceptions of human nature and what modern evolutionary psychology offers us regarding its scientific understanding of human nature. Williams summarizes the four central concepts of evolutionary psychology, as derived from sociobiology, in her list of "the four R's of human nature and much of the rest of nature as well: resources, reproduction, relatedness, and reciprocity." Before offering any thoughts regarding a comparison of Jesus' knowledge of human nature and "the four R's of human nature" that Williams draws from evolutionary psychology, I must express a bit of puzzlement concerning the grounds for the similarity thesis that Williams posits. Suppose that Williams is correct (and I am content to let evolutionary psychologists judge whether or not that is the case) to say that evolutionary psychology's assessment of the four major foci of human behavior can be captured in this list of four R's. Suppose that Williams is also correct (and I am content to let scholars of the historical Jesus judge whether or not that is the case) to characterize Jesus' knowledge of human nature as focused on those same four behavioral concerns. Would that provide a sufficient basis for concluding that Williams is warranted in positing that Jesus' knowledge of human nature closely matches the understanding of human nature offered by evolutionary psychology? That is, would Williams be warranted in concluding that Jesus is relevant today partly "because he is an astonishingly perceptive evolutionary psychologist?" I do not see how the case can be settled on the similarities so far granted. Williams may well be correct in drawing parallels in what the historical Jesus saw 2000 years ago and what modern evolutionary psychology now sees as the principal concerns of human nature. However, as I understand it, the primary concern of evolutionary psychology is not merely to list those basic concerns, but rather to posit explanations for those behavioral foci as products of the entire evolutionary process. However, if the positing of evolution-based explanations constitutes the core of the modern science of evolutionary psychology, then the appropriateness of drawing close parallels between Jesus and evolutionary psychology must, it seems to me, be called into question. The historical Jesus offered no explanations of the sort that would interest evolutionary psychology. 
Jesus, on the contrary, expressed numerous moral and ethical judgments on the manner in which humans ought to act in response to those basic drives for resources, reproduction, relatedness, and reciprocity. To summarize what we have observed so far: even if it is the case that Jesus and evolutionary psychology agree on their identification of the primary concerns that characterize human nature, there is a vast difference in what they offer in response. Evolutionary psychology offers scientific explanations for the origin and presence of the four R's as products of our evolutionary history. Given what cognitive psychology perceives to be core human concerns, evolutionary considerations suggest ways to understand how humans came to be this way. Jesus, on the other hand, offered moral or ethical principles that would encourage humans to choose behavior (whether consistent with evolutionary influences or not) that is "good" by the standards of divine intention for our living as God-conscious creatures. In other words, while it may well be argued, as Williams does, that Jesus and evolutionary psychology proceed from similar views of human nature, they have radically differing agendas driving their interests in reflecting on the primary foci of human behavioral concerns. Evolutionary psychology's concern for explaining the historical roots of the four R's cannot easily be equated with Jesus' concern to provide moral or ethical guidance in choosing ways to act on those four drives. Evolutionary psychology offers a theory about human behavior and its roots in the practical need for species survival. Jesus posited no such theory, but instead exemplified sound moral and ethical value judgments on behavioral choices, judgments rooted in his extraordinary awareness of the divine intention for human life. Perhaps I am being too critical. Perhaps Williams never intended to make the strong equation that I have just criticized. Perhaps I need to take more seriously Williams' expressed concern to demonstrate that, despite her contention that Jesus "did not perceive his own death as a sacrifice for sin," and despite the fact that this would seem to "undermine doctrines previously considered central to Christianity" and thereby "appear to make Jesus irrelevant," Jesus is nonetheless just as relevant today as ever. Why? Because his understanding of human nature equipped him to offer relevant answers to Aristotle's ethical question, "How can human beings flourish?" True, evolutionary psychology focuses on technical aspects of how human behavior affects human survival and reproduction, while Jesus focused on matters of acting in accord with the divine will for human moral and ethical behavior, but both express a concern for identifying the sort of human behavior that improves the probability for the flourishing of the species. Perhaps I should be content with Williams' case for the continuing relevance of what Jesus said and did. In fact, I think Williams' case for the high degree of relevance that the words and deeds of Jesus still have was eloquently made. Am I ready, then, to set my misgivings aside and accept Williams' references to Jesus as "an astonishingly perceptive evolutionary psychologist?" I must admit that I continue to have reservations about statements worded in this way.
One way to express my hesitancy is to note that although Williams appears to justify this language by noting that Jesus and evolutionary psychology share a common agenda in dealing with the question, "How can humans flourish?" I think we need to explore whether or not the term "flourish" is being used in the same way for both. From the standpoint of evolutionary biology, what does it mean to flourish? Stated as bluntly and succinctly as possible, for a species (or some higher order of categorization) to flourish means to be reproductively successful over an extended time as a member of an ecosystem in some reasonably stable environment. It is about numbers, about numerical success, about survival. Maintain a stable or growing population, or your category of organisms goes extinct. Flourish, or vanish. Life is tough. Adapt or die; a purely pragmatic reality. From the standpoint of what Jesus said and did, however, what does it mean to flourish? I would suggest that the species-survival criteria supplied by evolutionary psychology might be seen as necessary, but by no means sufficient, from Jesus' standpoint. To flourish as a God-conscious creature would, I believe, sometimes require choosing behavior that conforms to the divine will in spite of the fact that it would fail to contribute to reproductive success. By the moral and ethical standards exemplified by the life and death of Jesus (whether or not these accomplished anything toward atonement for sin), flourishing as a human species is not simply a matter of numbers. On the contrary, Jesus sometimes exemplified behavioral choices that were radically contrarian in nature. In the extreme, Jesus paid the ultimate price of life itself by choosing right behavior over the biological goal of flourishing. I would not go so far as to say that Jesus advocated a generalized disregard for flourishing as a reproductively successful species, but it seems evident that Jesus did advocate the recognition of situations in which reproductive success was to be given secondary, not primary, status. Williams rightly recognizes this in noting that each of human nature's four R's can be pursued with such excessive vigor as to become a vice. Excessive pursuit of resources becomes greed or gluttony. Obsession with reproductive activity becomes lust or abuse of power. Unqualified valuation of relatedness becomes destructive exclusivism. Compassionless application of reciprocity becomes an excuse for vengeance. Jesus spoke and acted in a way that demonstrated such excesses to fall outside the divine will for human behavior. Hence, to engage in a bit of Williams-style commentary, Jesus "knew when to set evolutionary psychology aside and to make behavioral choices on the basis of divine calling rather than on the probabilities for reproductive success." I was especially struck (positively) by Williams' comments on the dangers of compassionless reciprocity, in which she called attention to the remarkable and ironic contrast between the example set by Jesus and the distorted portrait of God that has become the display piece of substitutionary atonement theology. Williams says it with great eloquence. "Jesus spoke of love, generosity, and forgiveness. In doing so, he spoke of the nature of God. Christian atonement theology," alternatively, "has claimed that an innocent man had to die to pay for the sins of the guilty because God required that justice be done.
It does not take much insight into the nature of justice to grasp the injustice of killing the innocent to forgive the guilty. The God who allegedly commanded such a deed ruled by reciprocity and had a stingy soul. This is not Jesus' God." Would that more contemporary Christians could see what Williams here points out. Seeing this demands no knowledge of evolutionary psychology, however. A sense of justice that transcends the scientific agenda will do. What about the fifth R? Recall Jesus' advice for life, "Be not anxious ." Live by love. Do not be driven by the egocentrism inherited from our evolutionary past. Do not allow yourself to distort any one of the four R's by becoming obsessed with its unqualified satisfaction. In a word, Relax. Great idea. That is the next item on my "to do" list. -------------------------- Counter-response on "The Fifth R: Jesus as Evolutionary Psychologist" Patricia A. Williams Patricia A. Williams is a philosopher of biology and philosophical theologian who writes full-time on Christianity and science. Her recent books include, Doing Without Adam and Eve: Sociobiology and Original Sin (2001) and Where Christianity Went Wrong, When, and What you Can do about it (2001). Her mailing address is PO Box 69, Covesville, VA 22931. Her e-mail address is theologyauthor at aol.com; website www.theologyauthor.com. I am grateful to the editors for the privilege of receiving responses to my article and the opportunity to reply. I also appreciate the sincerity and thoughtfulness characterizing the responses. I might add that evolutionary biology is conceptually difficult; it is a field in which experts make mistakes, and much sociobiology is conceptually confused, partly because it seems a favorite playground for atheists who are ideologically driven. Finally, historical Jesus scholarship is broad, deep, and varied, so one needs to dine, not snack. Keeping it all straight is difficult. Even I make mistakes. Therefore, it may be best to begin by explaining the project I pursue in my books and essays. I want to integrate science, theology, and spirituality. As I come from a Christian background, that generally means I engage some aspect of Christianity. My first step is to take the best, most central, most accepted scientific findings to establish a firm foundation in the sciences. My second is to pursue the best biblical scholarship, especially scholarship on the historical Jesus, Christianity's central figure and a prophet in two other world religions. Thus, two critical, rational enterprises stand at the center of my work. Third, I seek the best in Christian spirituality, which I presently think Quakerism represents. (Quaker theology also smoothes some theological and scriptural issues.) Then I try to integrate them. Some examples from my treatment of science may help. When I discuss cosmology, I avoid string theory or many-worlds theory. Although they may be cutting-edge research subjects, they currently lack mathematical proof and empirical evidence. In biology, I center on the theory of evolution by natural selection since it is the foundational theory of biology. This is not to deny that other mechanisms for evolution exist. Indeed, I consider genetic drift significant in speciation.1 In sociobiology, I concentrate on kin selection (inclusive fitness), because it lies at the heart of sociobiology and is well established theoretically and empirically. 
For evolutionary psychology, I focus on dispositions applicable to as many organisms as possible, (the exception in the 4Rs being reciprocity, which although not uniquely human, is central to human relationships as it is not to those of other animals). I might add, against Carlson and Hine's assumption that I borrowed the terminology from David Buss, the expression "the four R's" and the arguments for the four R's being fundamental originate with me. To understand where the responders have erred, it will help to return to the basics. Van Till discusses evolution in terms of species survival. Evolution does not promote species survival. On the contrary, natural selection is a negative mechanism, promoting no one's survival, only eliminating the unfit. Evolution depends on three things: that more organisms come to be than survive to reproduce, some characteristics vary, and some of these are inherited. Populations change over time (evolve) because organisms that die before they reproduce fail to pass their characteristics on to future generations, and these characteristics vanish from the population. Meanwhile, mutations may add novel characteristics. Species can be selected (go extinct--contrary to Moritz's assumption, I am not a single-level selectionist and certainly not a genetic-level one), but natural selection cannot promote their survival. Indeed, most have gone extinct, so it fails to promote their survival. On the whole, however, the theory of evolution applies to individuals and their kin and is always local, that is, characteristics fit in one environment will not be so in others. This means evolution cannot promote the flourishing of species. I doubt Jesus promotes it, either. I doubt he thinks that broadly. Rather, his widest interest seems individual and community flourishing in a non-egalitarian but God-suffused world. Van Till assumes sociobiology has a single focus, the explanation of certain behaviors by means of the theory of evolution. In fact, it has three foci. The first, begun by W. D. Hamilton in 1964,2 was to explain biologically altruistic behavior by means of inclusive fitness theory. The second, prominently promoted by Robert Trivers from 1971 and 1972,3 was to predict animal social behavior (including human social behavior) from inclusive fitness theory. The third has been to gather empirical evidence to support or refute the predictions. The third and last has occurred almost since Hamilton published, was famously summarized by E. O. Wilson in 1975,4 and has become a project of evolutionary psychology in recent years. In calling Jesus an evolutionary psychologist, I credit him with understanding by (divine?) intuition and astute observation that human nature is disposed (not determined!) to follow the 4Rs that lie at the foundations of sociobiology and evolutionary psychology. Given Jesus' lack of scientific knowledge, he could not have been doing anything more. Since Moritz fails to find citations in my theological writing to critics of sociobiology, and no explicit criticisms of it, he concludes I accept it uncritically. My theological works do ignore its critics, for I think engaging in intra-scientific squabbles inappropriate in a theological context. However, in a lead review in the Quarterly Review of Biology,5 I criticize selfish gene theory and the idea that sociobiological explanations of behavior provide total explanations. However, in my theological works, I do something different. 
I interpret sociobiology in a non-reductionist, non-determinist, non-egocentric way, usually without explicitly condemning its reductionist, determinist, and selfishness-promoting proponents who, I think, misconstrue the evidence. I am especially disturbed that Moritz cloaks me in the determinist mantle when I say, in summary of human nature's four R's in my article: "Thus, evolution has given us enormous potential for both good and evil, and it also has provided a wide range of choices, from egocentricity that seeks the destruction of others to generosity and love that seek to further their welfare. We are remarkably flexible and free. That is the primary reason we find it so difficult to answer Aristotle's question about how to flourish. If we have such a range of desires and can engage in such an enormous number of activities, then which are those that best promote our flourishing?" I emphasize choice and freedom. There is no taint of determinism here. Indeed, I find more tendencies toward the assumption of genetic determinism in the responses to my article than I do in my article. Moreover, without citing sociobiology's critics, I explicitly argue against determinism for an entire section in my Doing without Adam and Eve.6 As for "natural selection and the genocentrism it entails [being] no longer the sole fiddler" (Moritz), it never was. Charles Darwin, lord of the theory of evolution, invokes the inheritance of acquired characteristics to aid it, then sexual selection.7 Ernst Mayr, king of the new synthesis, recognizes sexual selection, the Baldwin effect, symbiosis, and genetic drift.8 E. O. Wilson, prince of sociobiology, includes morphological and physiological differences and environmental contingencies.9 A review of the most thoroughly studied genus in the world, Drosophila, adds premating isolation.10 Moreover, we now possess empirical proof that environments restructure organisms' brains, including adult human brains.11 Many things shape organisms and their behaviors. Many people shape historical Jesus scholarship. It is not limited to the Jesus Seminar. Although I respect the Jesus Seminar and find its two volumes12 handy for checking out black, gray, pink, and red sayings and deeds, I nowhere rely on it to tell me which sayings go back to Jesus, as Carlson and Hine assert. In contrast, I say I will "restrict the passages of scripture I discuss to those the scholars think go back to the historical Jesus." I have written a book, mentioned in the article, on the historical Jesus13 with a bibliography listing 42 references to works of 35 Jesus scholars and historians of the first two centuries. I compiled that list five years ago, and I have continued reading. In an essay such as "The Fifth R," to summarize such extensive scholarship is impossible. However, to offer one example here, most other scholars think the passage Carlson and Hine mention that the Jesus Seminar colors gray, Mark 3:31-35, goes back to Jesus. If Carlson and Hine researched further in the Seminar's The Five Gospels, they would find even the Seminar colors the parallels in Matthew 12:46-50 and Thomas 99 pink. The event occurs in two sources, Mark and Thomas, so it meets the scholarly criterion of multiple attestation. Matthew and Luke (8:19-21) retain it from Mark, their source for it, so it must have been well known. Moreover, it also fits the strong scholarly criterion that events and sayings embarrassing to the Jesus movement are likely to go back to Jesus.
For a son not to honor his mother breaks one of the Ten Commandments, and in the Jesus movement after Jesus' death, some of his family members became his followers. Their change of heart must have aroused criticism of their earlier unbelief. Why include such an embarrassing incident in your narrative unless it is so widely known that excluding it appears fraudulent? Carlson and Hine also comment that I am dealing with the person of Jesus and putting "him in a box of dimensions specified by the four R's." This is false. I am interested in his insights into human nature, God, and ethics. I think he was a person of integrity and, so, his insights probably reflect his character, but his character is not the subject of "The Fifth R" and certainly not limited to the four R's--no one's is. The four R's at most represent some basic human dispositions. Carlson and Hine also misquote me. I never use the expression "the unmatched quality of God's love, generosity, and forgiveness." Thus, I am unlikely to do "more to demonstrate this." Moritz seems to think the fifth R is "Rebel,"14 a jettisoning of the four R's. On the contrary, it is "Relax." In a wonderfully coined phrase, he calls the rebellious approach "an eschatologically stable strategy" to distinguish it from evolutionarily stable strategies. In contrast, I think "Relax" is probably stabilizing for the species. Other species follow evolutionary strategies and go extinct, so evolutionary strategies remain stable only temporarily. Based on the history of other species, if we follow evolutionary strategies, we will go extinct, too. Perhaps there is a better way. Jesus may offer it. Nonetheless, "Relax" does not entail rejecting the four R's. As I note in the article, Jesus is not an ascetic, but is accused of drunkenness and gluttony, enjoys the company of women and children, and calls a leading disciple who is married. Pursuing the four R's inordinately through greed, lust, nepotism, and justice for oneself to the exclusion of others destabilizes community and, so, diminishes human wellbeing. Such pursuits lead to wars that, in the contemporary world, may not only result in the extinction of our species but also in the annihilation of life on Earth. Inordinate rebellion against the four R's also promises extinction. Best follow Van Till and make "Relax" the next item on the "to do" list. Finally, Van Till comments that knowledge of evolutionary psychology is not required to understand that God's killing the innocent in order to forgive the guilty is unjust. I agree. I think evolutionary psychology sheds light here not by explaining justice, but by explaining the attractiveness to many Christians of a God who insists divine justice be satisfied. Theirs is an anthropomorphic God, built from our basic, evolved dispositions. Relaxed as he was about the four R's, Jesus could reflect, instead, a God of generosity and mercy.

Notes

1. Patricia A. Williams, Doing without Adam and Eve: Sociobiology and Original Sin (Minneapolis: Fortress, 2001), 108-115. 2. W. D. Hamilton, "The Genetical Evolution of Social Behaviour I and II," Journal of Theoretical Biology 7 (1964): 1-51. 3. Robert L. Trivers, "The Evolution of Reciprocal Altruism," The Quarterly Review of Biology 46 (1971): 35-57 and "Parent-Offspring Conflict," American Zoologist 14 (1972): 249-264. 4. E. O. Wilson, Sociobiology: The New Synthesis (Cambridge, Mass.: Harvard University Press, 1975). 5. Patricia A.
Williams, "Of Replicators and Selectors," The Quarterly Review of Biology 77 (2002): 302 ? 306. 6. Williams, Doing without Adam and Eve, 143 ? 148. 7. Charles Darwin, On the Origin of Species (Cambridge, Mass.: Harvard University Press, [1859] 1964) and The Descent of Man, and Selection in Relation to Sex (Princeton: Princeton University Press, [1871] 1981). 8. Ernst Mayr, What Evolution Is (New York: Basic Books, 2001). 9. Wilson, Sociobiology. 10. Jeffrey R. Powell, Progress and Prospects in Evolutionary Biology: The Drosophila Model (New York: Oxford University Press, 1997). 11. Jeffrey M. Schwartz and Sharon Begley, The Mind and the Brain: Neuroplasticity and the Power of Mental Force (New York: Regan Books, 2002). 12. Robert W. Funk, Roy W. Hoover, and the Jesus Seminar, The Five Gospels: The Search for the Authentic Words of Jesus (San Francisco: Harper San Francisco, 1993) and Robert W. Funk and the Jesus Seminar, The Acts of Jesus: The Search for the Authentic Deeds of Jesus (San Francisco: Polebridge Press, 1998). 13. Patricia A. Williams, Where Christianity Went Wrong, When, and What You Can Do About It (Philadelphia: Xlibris, 2001). 14. As, famously, in Richard Dawkins, The Selfish Gene (New York: Oxford University Press, 1976), 215, "We, alone on earth, can rebel against the tyranny of the selfish replicators [genes]". From checker at panix.com Thu Sep 1 00:25:48 2005 From: checker at panix.com (Premise Checker) Date: Wed, 31 Aug 2005 20:25:48 -0400 (EDT) Subject: [Paleopsych] BBC: Women cleverer than men, says MP Message-ID: Women cleverer than men, says MP http://news.bbc.co.uk/1/hi/education/4079653.stm [Note the date. Mr. Mencken certainly thought so, at least that women were far more intelligent than men in what mattered. No woman, he said, would be so dumb to want to be a lawyer or a stock broker.] Last Updated: Wednesday, 8 December, 2004, 16:13 GMT GCSE students after receiving their results Girls are getting more top grades than boys at GCSE and A-level Women are brighter than men, according to the Labour chairman of the Commons education committee. Huddersfield MP Barry Sheerman said there was a "danger" of being obsessed about how boys were doing at school. His comments followed a committee discussion about whether girls or boys found it easier to learn to read. "My own personal view is that women are brighter than men," the MP said, adding that women now earned on average more than men as middle managers. First class? He said: "We should celebrate this, shouldn't we? The brightest kids are coming through and they happen to be women." In recent years girls have consistently outperformed boys at all levels of the education system. The "gender gap" at GCSE level in England, Wales and Northern Ireland this year was 5.3 percentage points at grades A* and A and by 8.4 points at grades C and above in girls' favour. Boys' performance had improved more than girls', however. This was even more noticeable at A-level. Even so, 23.7% of girls' entries achieved A grades, compared to 21% of boys'. Ninety-five per cent of boys' entries were passes, against 96.8% of girls'. More young women than men go to university. Schools define many more boys than girls as having special educational problems - which some researchers argue means the schools are failing to meet boys' needs. 'Wrong schooling' In the latest major international study of the performance of 15-year-olds in maths, reading and science tests, boys out-performed girls in almost all of the 40 countries involved in maths. 
In reading, girls had "significantly higher average performance" in all countries except Liechtenstein. The biggest gap was in Iceland. Science showed the smallest average gender gap, with boys doing a little better. American educational researchers William Draves and Julie Coates have argued that it is not boys who are the problem but schools. While boys are developing the skills they will need in the "knowledge jobs" of the future, schools are still preparing students for a past industrial age, they have said. From checker at panix.com Thu Sep 1 00:25:43 2005 From: checker at panix.com (Premise Checker) Date: Wed, 31 Aug 2005 20:25:43 -0400 (EDT) Subject: [Paleopsych] BBC: 'Men cleverer than women' claim Message-ID: 'Men cleverer than women' claim http://news.bbc.co.uk/go/pr/fr/-/1/hi/education/4183166.stm Published: 2005/08/25 09:57:24 GMT [e-mails to the BBC included.] Academics in the UK claim their research shows that men are more intelligent than women. A study to be published later this year in the British Journal of Psychology says that men are on average five points ahead on IQ tests. Paul Irwing and Professor Richard Lynn claim the difference grows when the highest IQ levels are considered. Their research was based on IQ tests given to 80,000 people and a further study of 20,000 students. 'Widening gap' Dr Irwing, a senior lecturer in organisational psychology at Manchester University, told the Today programme on BBC Radio Four the study showed that, up to the age of 14, there was no difference between the IQs of boys and girls. "But beyond that age and into adulthood there is a difference of five points, which is small but it can have important implications," he said. "This is against a background of women dramatically overtaking men in educational attainment and making very rapid advances in terms of occupational achievement." The academics used a test which is said to measure "general cognitive ability" - spatial and verbal ability. As intelligence scores among the study group rose, the academics say they found a widening gap between the sexes. There were twice as many men with IQ scores of 125, for example, a level said to correspond with people getting first-class degrees. At scores of 155, associated with genius, there were 5.5 men for every woman.
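[A rough illustration of the tail arithmetic behind those two figures, under the simplifying assumptions of normally distributed scores, a common standard deviation of 15, and a 5-point gap in means centred on 100. These are assumptions of this sketch, not parameters from the study itself; the fact that the reported ratio at 155 is larger than this calculation gives is consistent with the study's claim of a widening gap, i.e., with sex differences in variance as well as in means.]

from statistics import NormalDist

# Illustrative only: means of 102.5 vs 97.5 and a shared SD of 15 are assumed,
# not taken from the study.
men, women = NormalDist(102.5, 15), NormalDist(97.5, 15)

for threshold in (125, 155):
    # Ratio of the male to female upper-tail probabilities above the threshold.
    ratio = (1 - men.cdf(threshold)) / (1 - women.cdf(threshold))
    print(f"IQ > {threshold}: about {ratio:.1f} men per woman")

# Prints roughly 2.0 at 125 and 3.7 at 155 under equal variances.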
Nobel prize-winners Dr Irwing told The Times the differences "may go some way to explaining the greater numbers of men achieving distinctions of various kinds, such as chess grandmasters, Fields medallists for mathematics, Nobel prize-winners and the like". The paper will argue that there is evidence that at the same level of IQ, women are able to achieve more than men "possibly because they are more conscientious and better adapted to sustained periods of hard work". Earlier this year, the president of Harvard University, Lawrence Summers, sparked controversy when he suggested at a seminar that one reason men outperformed women in maths and science was genetics. Several guests walked out of the conference after hearing the comments. Dr Summers, who has apologised repeatedly for his remarks, said later that the shortage of senior female academics was partly caused by child-minding duties, which restricted working hours. What is your reaction to this research? Are men more intelligent than women? Send us your comments using the form below. My reaction, coming from a family with a tradition of women who achieve very highly in maths and sciences, is weary disgust. Yet again, what is intelligence? Who is defining it? Have these researchers looked at IQ levels below the average, at gender differentials among prison inmates? Let's have these included for balance, please. Julia Blincoe, Southampton, England All this discussion is fairly irrelevant. Men and women have different and also some similar skills but we are all genetically programmed for survival, together. Basically we need teamwork and to be able to work to each other's strengths and minimise our collective weaknesses in order to make any progress in future. Divisive talk about who is better than who is pointless and smacks of political correctness. Richard, Worksop I think that this study is probably true in a lot of cases, but this is because young girls change their ideals from learning. They start to have maternal thoughts of children and emotional attachment to partners. Therefore they neglect high learning for their natural development of nurturing. In general though I think women are equal to men, but in different roles. Darrell Beck, Jacksonville, Florida Modern IQ tests are no longer biased at all. They have been re-designed to be taken by anyone in the world, with any kind of education (or no education). Before the tests are rubbished, maybe we can establish if they are of the modern variety? I for one am getting tired of the media continually men-bashing and portraying men as incapable. It's nice to have some evidence to the contrary once in a while. Nigel, UK The only thing IQ tests prove is how good you are at doing IQ tests. Matthew, Cheshire, UK Let's not ignore the fact that researchers believe about 20-25 IQ points are influenced by environmental factors. And the fact that test scores are adjusted for gender anyway as males tend to score higher on some factors and females on others. This is not a pure measure of intelligence, but a human-devised Western (and usually male and white) instrument. Flo, Malvern, England I do not believe, on average, that men are more intelligent than women. I'm convinced we often find more men at the extremes like in academia or indeed in the work place simply because we still live in a male-driven society. Women think differently from men, that I do agree with, but more intelligent? From my 'empirical analysis' I find this unlikely. 
Jason Robinson, Cambridge To throw in another possible factor, remember also the competitive aspect of IQ tests: the average man is possibly more likely to treat a measurement of his mental capacity as a chance to prove himself; the average woman may not push herself as hard as she does not consider the result quite so important. Anne, London, UK I scored relatively high in an IQ test when I was a child. Since then I have done many many many very very very stupid things in my life. I still wonder what that test has to do with intelligence or understanding at all. Alex, Wien, Austria I'm surprised that an academic journal is even considering this publication. A 'scientific' study that only takes into account one measure of intelligence that is well known to be biased towards white European males really shouldn't be taken seriously. I suspect the editor of the journal is male. Maria, Sheffield It really does amuse me that some men need to keep creating these tests to prove to themselves that they are more capable than women. I don't read about a rush of women psychologists doing the same thing. Maybe the women know the truth anyway or maybe they just don't care. Hazel, Sheffield I hope this is taken for exactly what it is. A scientific study. Most of these things have little or no bearing on everyday life for most of us. However, as a man, it is nice to hear something positive about us for once. Nick Spiers, London I can easily see this as being true. However, it would be interesting to also look at the bottom IQ levels and see which sex has more at that level before making any judgements. Given that sections of the media are so keen on denigrating men, and the advertising industry is so addicted to portraying men as buffoons and women as intelligent, perhaps this might re-adjust the balance a little. I find that although many of the women I've known are more socially intelligent, their general knowledge has always been abysmal, hence this being no surprise. Huw Morgan, Cardiff, UK I suspect the tests were formulated to play to men's strengths. Perhaps the tests were even set by men. IQ tests have long been recognised as skewed towards white men of European origin, why do we continue to pay attention to them? IQ tests still don't measure the different ways that intelligence can manifest itself, and until they do, they will continue to provide fodder to those who seek to re-establish man's 'superiority' over women. Roanne, Derby, UK I don't think men are more intelligent than women on average. However, from personal experience I would say that the distribution of intelligence in men is more extreme, that is to say, there are more exceptionally clever men than women, but there are also more exceptionally stupid men than women. Robin, Oxford, UK It has long been accepted that IQ tests are gender-biased: they are designed by men to test 'male intelligence', such as spatial awareness. They simply do not cover all aspects of intelligence. Therefore it is no surprise that a test designed by men, and a study carried out by men, has found that men are 'more intelligent' than women. Jenny, London If your report is accurate, what this study actually shows is that men are better at IQ tests than women. This is not (necessarily) the same as saying men are cleverer than women. That would require rather more criteria than just an IQ test. Phil Evans, Keele, UK I have the impression that society allows men to develop skills in a focussed way, with less time reserved for repetitive care tasks.
IQ can be improved in this way. It is not set and fixed at birth. If men hone skills at the expense of good housekeeping or social responsibilities, perhaps they are granted the time to develop the extra five points where women spend more time looking after house/kids/husband/parents/friends. Marjoline, The Hague, Holland From checker at panix.com Thu Sep 1 00:27:18 2005 From: checker at panix.com (Premise Checker) Date: Wed, 31 Aug 2005 20:27:18 -0400 (EDT) Subject: [Paleopsych] PLoS Computational Biology: Evolution of Genetic Potential Message-ID: Evolution of Genetic Potential http://compbiol.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pcbi.0010032 [Links omitted.] Volume 1 | Issue 3 | AUGUST 2005 Research Article Evolution of Genetic Potential Lauren Ancel Meyers^1,^2^*, Fredric D. Ancel^3, Michael Lachmann^4 1 Section of Integrative Biology, Institute for Cellular and Molecular Biology, University of Texas, Austin, Texas, United States of America, 2 Santa Fe Institute, Santa Fe, New Mexico, United States of America, 3 Department of Mathematical Sciences, University of Wisconsin, Milwaukee, Wisconsin, United States of America, 4 Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany Organisms employ a multitude of strategies to cope with the dynamical environments in which they live. Homeostasis and physiological plasticity buffer changes within the lifetime of an organism, while stochastic developmental programs and hypermutability track changes on longer timescales. An alternative long-term mechanism is "genetic potential"--a heightened sensitivity to the effects of mutation that facilitates rapid evolution to novel states. Using a transparent mathematical model, we illustrate the concept of genetic potential and show that as environmental variability decreases, the evolving population reaches three distinct steady state conditions: (1) organismal flexibility, (2) genetic potential, and (3) genetic robustness. As a specific example of this concept we examine fluctuating selection for hydrophobicity in a single amino acid. We see the same three stages, suggesting that environmental fluctuations can produce allele distributions that are distinct not only from those found under constant conditions, but also from the transient allele distributions that arise under isolated selective sweeps. Editor: Eddie Holmes, Pennsylvania State University, United States of America Received: April 15, 2005; Accepted: July 22, 2005; Published: August 26, 2005 DOI: 10.1371/journal.pcbi.0010032 Copyright: © 2005 Meyers et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Abbreviation: MHC, major histocompatibility complex *To whom correspondence should be addressed. E-mail: laurenmeyers at mail.utexas.edu Citation: Meyers LA, Ancel FD, Lachmann M (2005) Evolution of Genetic Potential. PLoS Comput Biol 1(3): e32 Synopsis Variation is the fuel of natural selection. Understanding the mutational processes that underlie evolution has long been a central objective of population genetics. Today, amidst a computational revolution in biology, such understanding is pivotal to progress in many biological disciplines.
For example, neutral mutations make the molecular clock tick, and this clock is fundamental to reconstructing phylogenies, measuring recombination rates, and detecting genetic functionality. In this manuscript, the researchers provide an original perspective on a long-standing question in evolutionary biology: to what extent do mutation rates evolve? They argue that to cope with environmental fluctuation, populations can evolve their phenotypic mutation rate without changing their genetic mutation rate. That is, populations can evolve "genetic potential"--a heightened sensitivity to the effects of mutation. The researchers use a simple mathematical model of amino acid evolution to illustrate the evolution of genetic potential, and show that as environmental variability decreases, evolving populations reach three distinct states. In a rapidly fluctuating environment, organisms evolve the flexibility to cope with variation within an individual lifetime; in moderately variable environments, populations evolve the ability to evolve rapidly; and in fairly constant environments, populations evolve robustness against the adverse effects of mutation. Introduction Recent work in evolutionary biology has highlighted the degeneracy of the relationship between genes and traits [1]. For any particular trait value, there will exist a large set of genotypes that give rise to that value. A mutation from one such genotype to another will be neutral, having no noticeable impact on the physiology, behavior, or fitness of organisms. Metaphorically, one can imagine a population moving via mutation through a region of genotype space that maps to a neutral plateau in phenotype space. Near the periphery, mutations are likely to produce different (usually worse and occasionally better) phenotypes, whereas near the center of the neutral plateau, mutations have little impact on the phenotype. Evolutionary theory suggests that populations can harness this variation to achieve phenotypic stability under steady conditions through either mutational insensitivity [2,3] or mutational hypersensitivity [4], or to facilitate phenotypic exploration during adaptation [5,6]. A separate body of evolutionary theory addresses adaptation under fluctuating conditions [7,8]. The rate of the fluctuations will influence the resulting response. If the environment changes rapidly relative to the average generation time, populations may evolve mechanisms such as physiological plasticity and learning by which individual organisms can respond to their conditions [9,10]. As environmental change slows down, viable strategies include stochastic or directed heterogeneity in developmental pathways that give rise to phenotypic variation on the order of once per generation [11]. For even slower rates of change, mutations may produce novel phenotypes at a sufficiently high rate. Hypermutable lineages can produce novelty every few generations, as has been observed in viruses and mutator strains of bacteria [12,13]. When environmental fluctuations are rare, populations may experience extended epochs of directional selection and thus have sufficient time to achieve genetic robustness for any given state. Immediately following an environmental shift, however, such populations may pass through transitional periods of within-individual or between-generation plasticity before completely losing the previously favored phenotype in favor of a currently favored phenotype. 
This evolutionary transformation--from a trait that is acquired through phenotypic plasticity to a genetically determined version of the same trait--is known as the Baldwin Effect [9,14]. In this paper we show that genetic degeneracy may give rise to an alternative outcome under fluctuating conditions: the evolution of genotypes with heightened sensitivity to mutation. We introduce the term "genetic potential" to describe this state. Metaphorically, populations with genetic potential lie near the edge of neutral plateaus. Although the rate of mutation is unchanged, the likelihood that mutations produce beneficial variation increases. Heightened sensitivity to mutations has been recognized as a critical and transient phase of adaptive evolution [5,15,16]. Here we argue that genetic potential can be a stable condition for a population evolving under changing selection pressures. Using a simple mathematical model, we show that as environmental variability increases, natural selection at first moves populations between genetically robust states, then increasingly favors genetic potential, and ultimately produces mechanisms for environmental robustness within individual organisms. We then present a more biological example of this phenomenon using a model of amino acid evolution. There is evidence that, within viral pathogens, the physicochemical properties of amino acids found within epitopes--regions of proteins that directly interact with the host immune system--can rapidly evolve [17,18]. Likewise, highly evolvable codons have been identified in bacteriophage experiencing shifting hosts [19] and in enzymes experiencing shifting substrates [20]. Motivated by these observations, we model codon evolution at a single amino acid site under fluctuating selection for hydrophobicity. As in the first model, natural selection produces three distinct outcomes with increasing environmental variability. Each outcome corresponds to distinct expectations about the distribution of amino acids and their codons at selected sites. Under infrequent environmental change, populations evolve from one mutationally robust phenotype to another, briefly passing through genotypes that can easily mutate to either state. One might therefore be tempted to equate genetic potential with confinement to the intermediate steps on a path from robustness for one phenotype to robustness for another (Figure 1). While this is true in our simple model, the codon model illustrates that fluctuating environments may drive populations towards significantly greater genetic potential than found during these transient stages of isolated selective sweeps. Figure 1. Evolution of Genetic Potential The gray regions represent neutral networks--sets of genotypes that give rise to each phenotype. The degree of shading indicates the likelihood that mutations will impact phenotype, where darker regions are robust to mutations. Under constant conditions, populations evolve toward the most robust regions of neutral networks. Under variable conditions, populations may evolve toward genotypes that easily mutate from one phenotype to the other. These regions of genetic potential do not always lie on the evolutionary path between the equilibrium states for constant environments (arrow). Results Description of Models The simple model. We consider the evolution of a trait in an environment that alternates between two states (E[A] and E[B]), spending exactly l generations per state between shifts.
The simple model includes three phenotypes--one optimal phenotype for each of the two environments (A and B) and a third that has intermediate quality in both environments (V)--and a minimal amount of degeneracy in the relationship between the genotype and the phenotype. In particular, there is a single genetic locus, and five allelic possibilities at that locus (Figure 2A). Three of the alleles, g[0], g[1], and g[2], give rise to phenotype A, the fourth, g[3], gives rise to phenotype V, and the fifth, g[4], gives rise to phenotype B. The mutational structure is a pentagon in which g[i] can mutate to g[(i - 1) mod 5] or g[(i + 1) mod 5] for i ∈ {0,1,2,3,4}. Figure 2. Mutational Networks (A) Five alleles lie on a mutational pentagon with genetic degeneracy for the A phenotype. Colors indicate phenotypes with blue for A, yellow for B, and gray for V. Edges indicate that an allele on one side can mutate to the allele on the other side. Arrows illustrate the dynamics in equation 2. (B) Each vertex represents an amino acid. The size of the vertex indicates the number of codons coding for the amino acid. Edges indicate point mutations between hydrophobicity classes. Mutations that preserve hydrophobicity class, including those that preserve the amino acid, are included in the model but not depicted here. The color of the vertex corresponds to the hydrophobicity class: blue indicates hydrophobic, yellow indicates hydrophilic, green indicates intermediate, and red indicates stop codons [21]. This network was drawn with PAJEK [50]. The fitness function changes with the environment as specified in equation 1, where w[A] and w[B] are the fitnesses in environments E[A] and E[B], respectively, s > 0 is the fitness advantage for the specialized phenotype (A or B) in its preferred environment, and 0 <= k <= 1 determines the intermediacy of the V phenotype. We can write the full model as a set of difference equations (equation 2) for i ∈ {0,1,2,3,4}, where μ is the mutation rate and w[t] denotes the fitness in the current environment (Figure 2A). The number of individuals with genotype g[i] at time t is denoted by g[i,t]. The changing environment is governed by a simple rule: the environment switches between E[A] and E[B] every l generations. To simplify the analysis, this model tracks changes in the absolute population sizes of the various genotypes rather than their relative frequencies. Since the dynamics scale linearly with the total population size, one can achieve the same population dynamics by replacing the absolute sizes with relative frequencies and normalizing appropriately. Variations on the simple model. There are exactly 14 unique mutational networks consisting of five alleles on a pentagon, with at least one encoding A and at least one encoding B (see Materials and Methods). These include, for example, the pentagon with four consecutive alleles coding for A and one for B and the pentagon with alleles alternating in phenotype (-A-B-A-V-B-). We are presenting analysis of the -A-A-A-V-B- model because it gives rise to some of the most interesting and generic dynamics found among these 14 models. The codon model. The previous model offers a transparent illustration of evolutionary dynamics under different rates of environmental change. Although somewhat simplistic, we believe that the qualitative predictions of the model will hold for a wide range of more plausible genotype-phenotype maps. To demonstrate this, we consider the evolution of a single amino acid site under fluctuating conditions.
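[The displayed equations (1 and 2) did not survive the text-only reproduction above, so the following minimal Python sketch of the simple model rests on assumptions: a standard discrete-time selection-mutation update on the pentagon (parents reproduce in proportion to fitness, offspring mutate to each neighbouring allele with probability mu/2), and a fitness rule assigning 1 + s to the favoured specialist, 1 to the unfavoured specialist, and 1 + ks to V, which matches the fitnesses 1, 1.5, and 2 quoted below for s = 1, k = 0.5. The stationary calculations mirror the Materials and Methods section: constant-environment equilibria are leading left eigenvectors of the one-generation matrices, and the fluctuating-environment distribution is the leading left eigenvector of the product of iterated epoch matrices. All names are ours, and exact numbers will depend on the paper's precise update rule.]

import numpy as np

# Five alleles g0..g4 on a mutational pentagon with phenotypes -A-A-A-V-B-.
PHENOTYPE = ["A", "A", "A", "V", "B"]
MU, S, K = 0.01, 1.0, 0.5      # mutation rate, specialist advantage, intermediacy of V
N = 5

def fitness(ph, env):
    # Assumed form of equation 1: favoured specialist 1+s, other specialist 1,
    # intermediate phenotype V gets 1+k*s in both environments.
    if ph == "V":
        return 1.0 + K * S
    return 1.0 + S if ph == env else 1.0

def epoch_matrix(env):
    # One-generation update acting on row vectors of allele abundances:
    # reproduction in proportion to fitness, then mutation to each pentagon
    # neighbour with probability MU/2 (assumed update rule).
    M = np.zeros((N, N))
    for j in range(N):
        w = fitness(PHENOTYPE[j], env)
        M[j, j] += w * (1.0 - MU)
        M[j, (j - 1) % N] += w * MU / 2.0
        M[j, (j + 1) % N] += w * MU / 2.0
    return M

def leading_left_eigvec(M):
    # Stationary direction of g_{t+1} = g_t @ M, normalised to frequencies.
    vals, vecs = np.linalg.eig(M.T)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

MA, MB = epoch_matrix("A"), epoch_matrix("B")

# Constant environments: equilibrium allele frequencies.
print("E_A equilibrium:", np.round(leading_left_eigvec(MA), 3))
print("E_B equilibrium:", np.round(leading_left_eigvec(MB), 3))

# Fluctuating environment with epochs of l generations: the distribution at the
# end of an E_A epoch followed by an E_B epoch is the leading left eigenvector
# of the epoch product (cf. Materials and Methods).
for l in (1, 10, 1000):
    product = np.linalg.matrix_power(MA, l) @ np.linalg.matrix_power(MB, l)
    print(f"l = {l}:", np.round(leading_left_eigvec(product), 3))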
In this model, the genotypes are the 64 codons in the standard genetic code and the phenotypes are hydrophobicities of the corresponding amino acids [21]. The environment alternately favors hydrophobic and hydrophilic amino acids. There are three classes of amino acids--hydrophobic, intermediate, and hydrophilic--and all amino acids in a class share the same fitness. The fitnesses are determined as in equation 1, with the fitnesses of all three stop codons equal to zero. Each codon is mutationally connected to the nine others to which it can mutate via point mutation. This gives rise to the genetic network depicted in Figure 2B and the dynamics given by equation 4, for 1 <= i <= 64, where μ is the overall mutation rate, b is the transition/transversion ratio (2b is the transition/transversion rate ratio), F[i] is the set of three transition point mutations of codon i, and G[i] is the set of six transversion point mutations of codon i. Analysis of the Simple Model We provide an intuitive perspective on evolution in fluctuating environments using the simple model and then demonstrate the generality of the results in the codon model. The first results assume a mutation rate μ = 0.01, and fitnesses 1, 1.5, and 2 for the unfavored, intermediate, and favored phenotypes, respectively. In a constant environment, a population will equilibrate on genotypes that encode the optimal phenotype. In environment E[A], the equilibrium relative frequencies of g[0], g[1], g[2], g[3], and g[4] are 0.291, 0.412, 0.292, 0.003, and 0.002, respectively, and in environment E[B], they are 0.005, 0.000, 0.000, 0.010, and 0.985, respectively. When there is degeneracy, as there is for phenotype A, the populations evolve genetic robustness, that is, more mutationally protected genotypes appear in higher frequency. In particular, g[1], which lies in the center of the three genotypes that code for A, appears in higher frequency than either genotype on the edge of the neutral network for A (g[0] and g[2]) at equilibrium in E[A]. In the absence of degeneracy (phenotype B), we observe a mutation-selection balance around the single optimal genotype. These findings are consistent with and provide a transparent example of the extensive theory on mutation-selection balance, quasi-species, and the evolution of genetic robustness in neutral networks [2,22-24]. Under rapid environmental fluctuations, populations do not have time to reach a stable allele distribution. As the environment becomes more variable, the distributions of alleles go through three distinct phases. Figure 3 shows the frequency of every allele averaged over each environmental condition after the population has reached steady oscillations. For relatively stable environments, the populations swing back and forth between near equilibrium conditions for E[A] and E[B], thereby alternating between genetic robustness for A and a mutation-selection balance around the single allele for B. At intermediate rates of fluctuation, populations hover near g[4] and g[0], where the genotypes for A abut the genotype for B. Thus, mutation between the two phenotypes occurs frequently. We call this outcome genetic potential because of the enhanced potential for mutations to give rise to novel (beneficial) phenotypes. Finally, for highly variable environments, the populations converge on the phenotype V, which has unchanging, intermediate fitness in both environments.
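[A small sketch of the codon network described above and of the per-codon robustness plotted in Figure 4A, i.e., the fraction of the nine point mutations that leave the hydrophobicity class unchanged. The three-way classification below uses the Kyte-Doolittle hydropathy scale cited as reference 21, but the cutoffs (+1 and -1) are our own illustrative assumption; the paper's exact class boundaries are not reproduced in the archived text, and the transition/transversion weighting of equation 4 is omitted here.]

# Codon network for the codon model: 64 codons, 9 point-mutation neighbours each.
BASES = "TCAG"
AA_TABLE = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
AA = dict(zip(CODONS, AA_TABLE))           # standard genetic code ('*' = stop)

# Kyte-Doolittle hydropathy values for the 20 amino acids.
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5}

def hydro_class(codon):
    aa = AA[codon]
    if aa == "*":
        return "stop"
    h = KD[aa]
    if h > 1.0:                  # assumed threshold
        return "hydrophobic"
    if h < -1.0:                 # assumed threshold
        return "hydrophilic"
    return "intermediate"

def neighbours(codon):
    # The nine codons reachable by a single point mutation.
    out = []
    for pos in range(3):
        for b in BASES:
            if b != codon[pos]:
                out.append(codon[:pos] + b + codon[pos + 1:])
    return out

def robustness(codon):
    # Fraction of point mutations that preserve the hydrophobicity class (Figure 4A).
    same = sum(hydro_class(n) == hydro_class(codon) for n in neighbours(codon))
    return same / 9.0

for c in ("ATT", "GAT", "ACT"):
    print(c, AA[c], hydro_class(c), round(robustness(c), 2))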
Phenotype V corresponds to organismal flexibility--individual organisms tolerate both conditions, but neither one exceptionally well. There are a variety of mechanisms that can give rise to an intermediate phenotype including homeostasis, somatic evolution, physiological plasticity, and behavioral plasticity [7,8]. As originally predicted by Dempster [25], the ascent of V under rapid fluctuations only occurs if the fitness of V is greater than the geometric mean fitness over time for either A or B. Figure 3. Allele Distributions under Environmental Fluctuations The graphs show the stationary allele distributions averaged over an E[A] epoch (top) and an E[B] epoch (bottom) as a function of the variability of the environment. As environmental variability decreases, the population moves from the intermediate phenotype to the genetic boundary between the A and B phenotypes, and eventually to an oscillation between the center of the network for A and the gene for B. Diagrams above the graphs illustrate the frequency distributions in each of the three phases. Vertex areas are proportional to the average frequencies for each allele. (For the data depicted in this figure, s = 1, k = 0.5, and μ = 0.01.) Analysis of the Codon Model The codon model gives rise to similar oscillations (Figure 4). Here we have assumed a transition/transversion ratio b = 2, mutation rate μ = 10^-5, and fitnesses 1, 1.5, and 2 for the unfavored, intermediate, and favored phenotypes, respectively. (We address the impact of mutation rate in the Discussion.) Whereas in the simple model only one of the three phenotypes had multiple genotypes, in this model all three phenotypic classes have genetic degeneracy, and thus can evolve genetic robustness (Figure 4A). For highly variable environments, codons for amino acids with intermediate hydrophobicity dominate, and in particular, those that are least likely to mutate to one of the other two classes (Figure 4B). In a moderately variable environment, the populations exhibit genetic potential, hovering near the edges of the neutral networks for the two extreme classes, thereby enabling rapid evolution upon environmental transitions (Figure 4C). In relatively constant environments, we find alternating genetic robustness for the two extreme classes (Figure 4D). Figure 4. Codon Distributions under Environmental Fluctuations (A) gives the robustness for each codon, that is, the fraction of all possible point mutations that leave the hydrophobicity class unchanged. The codons have been ordered to reflect roughly the mutational adjacency of the hydrophobicity classes. (B-D) show the average codon frequency distribution for each epoch type after the population has reached stationary oscillation. These show frequencies for environmental epochs of exactly l generations (thick lines) and epochs of random duration--Poisson distributed with mean l (thin lines). Black corresponds to epochs favoring hydrophobicity and gray corresponds to epochs favoring hydrophilicity. The rate of environmental fluctuations is decreasing from (B) to (D) (l = 10, 10^2, and 10^6, respectively). The genetic potential of a population can be estimated by the probability that a currently favored codon in the population will mutate to a currently unfavored or intermediate codon. This indicates the capacity to bounce back (via mutation and selection) if and when the environment reverts.
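[One plausible reading of that genetic-potential estimate: a frequency-weighted probability that a point mutation hitting a currently favoured genotype leaves the favoured class. The helper below is generic; the toy three-allele inputs are invented purely to make it runnable, and in practice the freq, klass, and neighbours arguments would come from the codon network sketched earlier.]

def genetic_potential(freq, klass, neighbours, favoured):
    # Among copies of the currently favoured phenotype, the chance that a
    # random point mutation yields an intermediate or unfavoured phenotype.
    num = den = 0.0
    for g, f in freq.items():
        if klass[g] != favoured:
            continue
        leave = sum(klass[n] != favoured for n in neighbours[g]) / len(neighbours[g])
        num += f * leave          # weight each favoured genotype by its frequency
        den += f
    return num / den if den else 0.0

# Toy example: g0 and g1 are favoured ("A"), g2 is not; g1 sits on the edge.
freq = {"g0": 0.6, "g1": 0.3, "g2": 0.1}
klass = {"g0": "A", "g1": "A", "g2": "B"}
neighbours = {"g0": ["g1", "g1"], "g1": ["g0", "g2"], "g2": ["g1", "g1"]}
print(genetic_potential(freq, klass, neighbours, favoured="A"))   # about 0.167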
For populations that have equilibrated in a constant environment and have recently experienced an environmental shift, genetic potential will decrease as the population becomes increasingly robust to the effects of mutation (Figure 5). For populations that have evolved under moderately fluctuating conditions, genetic potential remains noticeably higher. This suggests that the regular oscillations of such populations involve distributions of codons that are quite different (more mutable) from those found during the early stages of adaptation in an isolated selective sweep. Figure 5. Faster Environmental Fluctuations Yield Greater Genetic Potential Genetic potential is the likelihood that a mutation to a gene coding for the currently favored phenotype will produce the intermediate or unfavored phenotype. Thick lines correspond to populations that have reached stable oscillations when l = 100, and thin lines correspond to populations that experience a single environmental shift after having equilibrated in a constant environment. The maximum genetic potential after a single shift is significantly less than the minimum under persistent fluctuations. This difference also appears in the distributions of amino acids. We calculated the genetic potential in each generation of a population experiencing fluctuations every l = 10^2 generations. Figure 6 (left) depicts the amino acid distributions for the generations that have the highest genetic potential in E[A] and E[B]. We then compared these two distributions to the evolving amino acid distribution in a population that equilibrates in one of the two environments and then faces an environmental shift. Figure 6 (right) shows the steady state distributions for this population and the transitional distributions that are most similar (i.e., smallest average squared difference in relative frequencies) to those depicted in Figure 6 (left). The distributions of amino acids in regions of genetic potential are strikingly different from those realized in populations evolving after an isolated environmental shift. Figure 6. Amino Acid Distributions Reflect Genetic Potential The left figure illustrates the amino acid distribution in the generations with greatest genetic potential during each of the two epochs for l = 100. Vertex area is proportional to the relative frequency of an amino acid. The right figure gives the amino acid distributions at equilibrium in the two environments (far left and right networks), and the transitional amino acid distributions that are most similar to those depicted for l = 100 (left). Similarity is measured as mean squared difference in frequencies across all amino acids. The amino acid networks were drawn with PAJEK [50]. Discussion We have provided an intuitive framework for studying the evolutionary implications of heterogeneous environments. Although much is known independently about the evolution of genetic robustness [3] and organismal flexibility [7,8], this model demonstrates that the extent of environmental variability may determine which of these two states evolves, and suggests the possibility of an intermediate state of heightened mutability. The transition points among the three states will be functions of both the environment and the mutation rate. In particular, increasing (decreasing) the mutation rate (within a moderate range) has the same qualitative effect as increasing (decreasing) the duration of an environmental epoch.
As the mutation rate decreases, populations take longer to achieve genetic robustness, and therefore evolve genetic potential (rather than robustness) over large ranges of environmental variability. For example, at a mutation rate of μ = 10^-5 in the codon model, populations evolve genetic potential when the environment varies at rates of 10^1 < l < 10^6 generations, approximately (Figure 4). If the mutation rate increases to μ = 10^-2, the qualitative results are similar, with populations evolving genetic potential when the environmental variability is in the more limited range of 10^0 < l < 10^3 generations, approximately. If, instead, the mutation rate decreases to μ = 10^-9, then adaptation to genetic robustness proceeds at an exceedingly slow pace, yielding genetic potential throughout the extended range of 10^2 < l < 10^10 generations, approximately. To understand the comparable roles of mutation and environmental variability, note that the model includes three time-dependent processes--mutation, environmental change, and population growth. If one of these rates is changed, the other two can be modified to achieve identical system behavior on a shifted time scale. Since the dynamics only weakly depend on the force of selection, we can change the mutation rate and then scale the rate of environmental change to produce the original qualitative results. The connection between environmental variability and mutation has been noted before, with theory predicting that the optimal mutation rate under fluctuating environmental conditions is μ = 1/l [26,27]. Our results suggest an alternative perspective on the evolution of mutation rates. Theory suggests that the optimal mutation rate should correspond to the rate of environmental change [26,28], yet the extent to which mutation rate can evolve is unclear [12,13,29]. Here we suggest that the genotypic mutation rate need not evolve as long as the phenotypic or effective mutation rate evolves. By evolving toward genotypes with higher genetic potential, populations increase the rate of phenotypically consequential mutations without modifications to the underlying genetic mutational processes. We would like to emphasize that our second model is intended as one possible example of fluctuating selection among many thought to exist in nature. Whether or not one has much confidence in the particular evolutionary scenario, the qualitatively similar outcomes for the simple and complex models presented here suggest that the results may hold for a large class of systems in which there is redundancy in the relationship between genotype and phenotype. Hydrophobicity is just one of several physicochemical properties thought to play a role in the shifting functional demands on amino acids [17-20]. Another example is phase-shifting bacteria that have mutational mechanisms, for example, inversions in promoter regions [30] and slip-stranded mispairing within microsatellites [12], that lead to variation in functionally important phenotypes. The remarkable suitability of the phase-shifting variants to the diverse conditions experienced by the bacteria suggests that phase shifting may have evolved as a mechanism for genetic potential. We hypothesize that the major histocompatibility complex (MHC), which is the component of the immune system responsible for recognizing and binding foreign particles, may also have evolved genetic potential as a by-product of the fluctuations arising out of coevolution with pathogens [31].
Studies suggest that several components of the immune system exhibit high overall rates of genetic change. In particular, there are specific amino acid sites within the MHC complex that seem to have experienced rapid evolutionary change [32]. One possible explanation is that each MHC gene as a whole, and these sites in particular, have a history of rapid adaptation to changing distributions of potential antigens. We therefore predict that such sites may have evolved genetic potential. Evolvability has been defined as a population's ability to respond to selection [6,33]. Although the term has only recently taken root, ideas concerning the evolution of evolvability itself date back to the Fisher-Wright debate over the evolution of dominance [34,35] and include the large body of theory on the evolution of mutation rates and recombination [36,37]. Developmental biologists have begun to identify genetic architectures that promote diversification [38] and buffering mechanisms, such as heat shock proteins, that allow the accumulation of cryptic variation [39]. Although one can think of genetic potential as an abstraction of all mechanisms that increase the likelihood that a mutation will have a phenotypic effect, the genetic potential that evolves in our models is a very simple form of evolvability that exploits redundancy in the map from genotype to phenotype. Genetic potential evolves in our models because prior and future environments are identical. If, instead, the environment continually shifts to completely novel states, the evolutionary history of a population may not prepare it for future adaptation. We speculate that some degree of genetic potential may still evolve if there exist genotypes on the periphery of neutral networks with broad phenotypic lability. Biologists often refer to phenotypic plasticity, learning, and other forms of organismal flexibility as "adaptations" for coping with environmental heterogeneity [7,8]. Should genetic potential be seen as an alternative "solution," or should it be viewed as simply a product of fluctuating selection? Although we remain agnostic, we note that this question might be asked of all forms of adaptive variation. Whether or not genetic potential should be viewed as an evolved strategy, we emphasize that it is not simply the truncation of the adaptive path a population follows from the equilibrium state in one constant environment to the equilibrium state in the other. In the codon model, intermediate rates of environmental fluctuations push the population into regions of the codon network where genetic potential is consistently higher than the regions of network through which a population crosses after an isolated environmental shift (Figures 1, 5, and 6). A long-standing technique for identifying selected genes is to compare the frequencies of nonsynonymous and synonymous substitutions (K[a]/K[s]) [40]. Genes experiencing frequent selective sweeps should have relatively large amounts of variation in sites that modify amino acids. Such genes might be in the process of evolving a new function or, more likely, involved in an evolutionary arms race, for example, epitopes in human pathogens [31,41] or genes involved in sperm competition [42]. In the latter case, our model suggests that, in addition to an elevated K[a]/K[s], such genes should employ a distinct set of codons with high genetic potential. Note that this type of genetic potential is not equivalent to codon bias, but rather implies changes in the actual distribution of amino acids. 
A similar argument also underlies the recent use of codon distributions for detecting genetic loci under directional selection [43]. Codon volatility--the probability that a codon will mutate to a different amino acid class, relative to that probability for all codons in the same amino acid class--is a measure of genetic potential. Genes with significantly heightened volatility will be more sensitive to mutation. Our model suggests a different explanation for codon volatility than that presented in [43]: volatility may indicate a history of fluctuating selection rather than an isolated evolutionary event. If true, then we would not expect the codon distribution to reflect a transient out-of-equilibrium distribution as the population is moving from one constant environment to another [16]. Instead, we expect the distribution to reflect the stationary level of genetic potential that corresponds to variability in the selective environment for that gene. On a practical level, therefore, the isolated selective sweep model assumed in [43] may misestimate the expected volatility at such sites. Codon volatility, however, can arise as a by-product of processes other than positive (or fluctuating) selection. It has been noted that codon volatility may instead reflect selection for translation efficiency, relaxed negative selection, strong frequency-dependent selection, an abundance of repetitive DNA, or simple amino acid biases [44-48]. Therefore, the presence of codon volatility by itself may not be a reliable indicator of either recent directional selection or fluctuating selection. We would like to emphasize that the goal of this study was not to develop a new method for detecting positive (or fluctuating) selection, but rather to develop a theoretical framework for considering the multiple outcomes of evolution under fluctuating conditions. We conclude by suggesting an empirical method to identify loci that have evolved genetic potential under such conditions as distinct from those that have experienced a recent selective sweep. Suppose that a gene experiences fluctuations at a characteristic rate across many species. Furthermore, suppose that multiple sites within the gene are influenced by such fluctuations. For example, there may be fluctuating selection for molecular hydropathy, charge, size, or polarity, and several sites within the gene may contribute to these properties. Such sites should evolve in tandem and equilibrate on similar levels of genetic potential, and thus exhibit similar codon (and amino acid) distributions across species. In contrast, if a gene experiences isolated selective sweeps, then the variation at all sites should correspond to both the history of selective events and the species phylogeny, and the amino acid distributions at sites should correlate only when sites functionally mirror each other. Thus, one can seek evidence for the evolution of genetic potential as follows. First, identify genes that are rapidly evolving, perhaps by calculating K[a]/K[s] ratios. Such sites have been identified, for example, in human class I MHC genes, the HIV envelope gene, and a gene from a human T cell lymphotropic virus (HTLV-1) [31,32]. Within these genes, search for sites for which there is minimal correlation between the species tree and the amino acid distribution. Our model predicts that some of these sites should share similar distributions of amino acids across species. Materials and Methods Mathematical analysis of models.
For the two models, we calculate the deterministic, infinite population allele frequency distributions in constant and fluctuating environments. Let M[A] and M[B] be the normalized transition matrices that govern changes in the allele frequencies in E[A] and E[B] epochs, respectively. The entries in these matrices are defined by equations 2 and 4. The left leading eigenvectors for M[A] and M[B] give the equilibrium frequency distributions of alleles in each of the two constant environments, respectively. Under fluctuating conditions with epoch duration of l generations, we iteratively apply the matrices, and then compute the left leading eigenvector of the product of the two iterated epoch matrices. This vector, which we call v[B], gives the allele frequency distribution at the end of an E[A] epoch followed by an E[B] epoch. We are interested not only in the final allele distributions, but also in the dynamics throughout each epoch. Thus, we calculate the average frequency of each allele across a single E[A] epoch by averaging its normalized frequency over the l generations of that epoch, where G is the total number of alleles in the model (G = 5 for the simple model and G = 64 for the codon model) and the subscript k indicates the kth entry in the frequency vector. Similarly, the average distribution across an E[B] epoch is given by the analogous average, where v[A] is the allele frequency distribution at the end of an E[B] epoch followed by an E[A] epoch and is equal to the left leading eigenvector of the corresponding product with the epochs in the reverse order. For the codon model, we compare these calculations that assume a regularly fluctuating environment to numerical simulations that assume a Poisson distribution of epoch lengths. In each generation of the simulations, the environmental state switches with probability 1/l and the codon frequencies are then multiplied by the appropriate transition matrix. Proof of 14 unique pentagonal networks. We use an elementary group theoretic result known as Burnside's Lemma [49] to prove that there are 14 distinct mutational networks consisting of five alleles on a pentagon that map to the set of phenotypes {A, B, V} and contain at least one of each specialist phenotype (A and B) (Figure 7). We assume that all rotations and reflections of a network are equivalent to the original network, and that A and B are interchangeable. For example, the six networks with phenotypes -A-A-A-B-B-, -B-A-A-A-B-, -B-B-A-A-A-, -B-B-B-A-A-, -A-B-B-B-A-, and -A-A-B-B-B- are equivalent. Figure 7. Pentagonal Mutational Networks These are the 14 possible pentagonal mutational networks consisting of five alleles producing phenotypes A, B, or V, with at least one encoding A and one encoding B. Let X be the set of all pentagons with vertices labeled {A, B, V} having at least one A vertex and at least one B vertex. The size of X is the number of all pentagons with labels {A, B, V} minus the number of pentagons with labels {A, V} or {B, V}, that is, |X| = 3^5 - (2 × 2^5 - 1) = 180. We define the group G of all actions on X that produce equivalent pentagons (as specified above). G is made up of (1) the identity, (2) the four rotations and five reflections of the pentagon, (3) interchanging all As and Bs, and (4) all the combinations of the above actions. Thus G is equal to the 20-member group {i, r, r^2, r^3, r^4, s[0], s[1], s[2], s[3], s[4], a, ar, ar^2, ar^3, ar^4, as[0], as[1], as[2], as[3], as[4]} where i is the identity, r is a single (72°) rotation, s[i] is a reflection through vertex i, and a is replacement of all As with Bs and all Bs with As. (Note that the reflections are rotations of each other, for example, r^2s[0] = s[1].)
The number of distinct mutational networks is equal to the number of orbits of G on X. Burnside's Lemma tells us that this number is (1/|G|) times the sum over all g in G of |F(g)|, where F(g) = {x ∈ X | gx = x} is the set of fixed points of g. For each of the twenty elements of G, we exhaustively count F(g). The identity fixes all elements of X, that is, F(i) = X. Each of the various rotations of a pentagon (through 72°, 144°, 216°, and 288°) has the property that its iterations move a given vertex to every other vertex of the pentagon without changing the letter assigned to that vertex. The same is true of the square of the product of any rotation and an A-B flip. Hence, any fixed point of one of these elements of the group G would necessarily have the same label at each vertex of the pentagon. Since every labeled pentagon in X has at least one A label and at least one B label, no element of X has the same label at each vertex. Thus, the fixed point set of every rotation and of every product of a rotation and an A-B flip must be empty, that is, F(r^n) = F(ar^n) = ∅ for all n. By a similar argument, the simple A-B flip also has no fixed points. Every reflection fixes 12 elements of X, and every product of a reflection and an A-B flip fixes eight elements of X. In sum, all eight group elements that involve rotations fix no elements of X, all five reflections fix 12 elements of X, and all five combinations of a reflection and an A-B exchange fix eight elements of X. Thus, the number of orbits is (180 + 5 × 12 + 5 × 8)/20 = 280/20 = 14. Acknowledgments We thank Carl Bergstrom and Jim Bull for their valuable insights and comments on the manuscript. Competing interests. The authors have declared that no competing interests exist. Author contributions. LAM and ML conceived and designed the experiments. LAM performed the experiments. LAM, FDA, and ML analyzed the data and contributed reagents/materials/analysis tools. LAM and ML wrote the paper. References 1. Huynen MA, Stadler PF, Fontana W (1996) Smoothness within ruggedness: The role of neutrality in adaptation. Proc Natl Acad Sci U S A 93: 397-401. 2. van Nimwegen E, Crutchfield JP, Huynen MA (1999) Neutral evolution of mutational robustness. Proc Natl Acad Sci U S A 96: 9716-9720. 3. De Visser JAGM, Hermisson J, Wagner GP, Meyers LA, et al. (2003) Perspective: Evolution and detection of genetic robustness. Evolution 57: 1959-1972. 4. Krakauer DC, Plotkin JB (2002) Redundancy, antiredundancy, and the robustness of genomes. Proc Natl Acad Sci U S A 99: 1405-1409. 5. Ancel LW, Fontana W (2000) Plasticity, evolvability, and modularity in RNA. J Exp Zool 288: 242-283. 6. Schlichting C, Murren C (2004) Evolvability and the raw materials for adaptation. In: Taylor I, editor. Plant adaptation: Molecular biology and ecology. Vancouver: NRC Canada Research Press. pp. 18-29. 7. Meyers LA, Bull JJ (2002) Fighting change with change: Adaptive variation in an uncertain world. Trends Ecol Evol 17: 551-557. 8. Schlichting CD, Pigliucci M (1998) Phenotypic evolution--A reaction norm perspective. Sunderland (Massachusetts): Sinauer Associates. 387 p. 9. Ancel LW (1999) A quantitative model of the Simpson-Baldwin effect. J Theor Biol 196: 197-209. 10. Kawecki TJ (2000) The evolution of genetic canalization under fluctuating selection. Evolution 54: 1-12. 11. Bull JJ (1987) Evolution of phenotypic variance. Evolution 41: 303-315.
12. Moxon ER, Rainey PB, Nowak MA, Lenski RE (1994) Adaptive evolution of highly mutable loci in pathogenic bacteria. Curr Biol 4: 24-33. 13. Miller JH (1998) Mutators in Escherichia coli. Mutat Res 409: 99-106. 14. Baldwin JM (1896) A new factor in evolution. Am Nat 30: 441-451. 15. Fontana W, Schuster P (1998) Continuity in evolution: On the nature of transitions. Science 280: 1451-1455. 16. Plotkin J, Dushoff J, Deasai M, Fraser H (2004) Synonymous codon usage and selection on proteins. Arxiv.org E-Print Archives Available: http://arxiv.org/PS_cache/q-bio/pdf/0410/0410013.pdf. Accessed 3 August 2005. 17. Yang W, Bielawski JP, Yang Z (2003) Widespread adaptive evolution in the human immunodeficiency virus type 1 genome. J Mol Evol 57: 212-221. 18. Bush R, Bender C, Subbarao K, Cox N, Fitch W (1999) Predicting the evolution of human influenza A. Science 286: 1921-1925. 19. Crill WD, Wichman HA, Bull JJ (2000) Evolutionary reversals during viral adaptation to alternating hosts. Genetics 154: 27-37. 20. Matsumura I, Ellington AD (2001) In vitro evolution of beta-glucuronidase into a beta-galactosidase proceeds through non-specific intermediates. J Mol Biol 305: 331-339. 21. Kyte J, Doolittle RF (1982) A simple method for displaying the hydropathic character of a protein. J Mol Biol 157: 105-132. 22. Eigen M, McCaskill JS, Schuster P (1989) The molecular quasispecies. Adv Chem Phys 75: 149-263. 23. Wagner GP, Booth G, Bagheri-Chaichian H (1997) A population genetic theory of canalization. Evolution 51: 329-347. 24. Wagner A, Stadler PF (1999) Viral RNA and evolved mutational robustness. J Exp Zool 285: 119-127. 25. Dempster E (1955) Maintenance of genetic heterogeneity. Cold Spring Harb Symp Quant Biol 20: 25-32. 26. Lachmann M, Jablonka E (1996) The inheritance of phenotypes: An adaptation to fluctuating environments. J Theor Biol 181: 1-9. 27. Leigh EG (1973) The evolution of mutation rates. Genetics 73: 1-18. 28. Meyers LA, Levin BR, Richardson AR, Stojiljkovic I (2003) Epidemiology, hypermutation, within-host evolution, and the virulence of Neisseria meningitidis. Proc R Soc Lond B Biol Sci 270: 1667-1677. 29. Drake JW, Charlesworth B, Charlesworth D, Crow JF (1998) Rates of spontaneous mutation. Genetics 148: 1667-1686. 30. Lederberg J, Iino T (1956) Phase variation in salmonella. Genetics 41: 743-757. 31. Nielsen R, Yang Z (1998) Likelihood models for detecting positively selected amino acid sites and applications to the HIV-1 envelope gene. Genetics 148: 929-936. 32. Yang Z, Wong WSW, Nielsen R (2005) Bayes empirical Bayes inference of amino acid sites under positive selection. Mol Biol Evol 22: 1107-1118. 33. Wagner GP, Altenberg L (1996) Perspective: Complex adaptations and the evolution of evolvability. Evolution 50: 967-976. 34. Fisher RA (1922) On the dominance ratio. Proc R Soc Edinb 42: 321-341. 35. Wright S (1934) Physiological and evolutionary theories of dominance. Am Nat 68: 24-53. 36.
Sniegowski PD, Gerrish PJ, Johnson T, Shaver A (2000) The evolution of mutation rates: Separating causes from consequences. Bioessays 22: 1057-1066. 37. Feldman MW, Otto SP, Christiansen FB (1997) Population genetic perspectives on the evolution of recombinations. Annu Rev Genet 30: 261-295. 38. Schlosser G, Wagner GP, editors. (2004) Modularity in development and evolution. Chicago: University of Chicago Press. 600 p. 39. Rutherford SL, Lindquist S (1998) Hsp90 as a capacitor for morphological evolution. Nature 396: 336-342. 40. Yang Z, Bielawski J (2000) Statistical methods for detecting molecular adaptation. Trends Ecol Evol 15: 496-503. 41. Endo T, Ikeo K, Gojobori T (1996) Large-scale search for genes on which positive selection may operate. Mol Biol Evol 13: 685-690. 42. Torgerson DG, Kulathinal RJ, Singh RS (2002) Mammalian sperm proteins are rapidly evolving: Evidence of positive selection in functionally diverse genes. Mol Biol Evol 19: 1973-1980. 43. Plotkin JB, Dushoff J, Fraser HB (2004) Detecting selection using a single genome sequence of M. tuberculosis and P. falciparum. Nature 428: 942-945. 44. Dagan T, Graur D (2005) The comparative method rules! Codon volatility cannot detect positive Darwinian selection using a single genome sequence. Mol Biol Evol 22: 496-500. 45. Hahn MW, Mezey JG, Begun DJ, Gillespie JH, Kern AD, et al. (2005) Evolutionary genomics: Codon bias and selection on single genomes. Nature 433: E5-E6. 46. Nielsen R, Hubisz MJ (2005) Evolutionary genomics: Detecting selection needs comparative data. Nature 433: E6. 47. Sharp PM (2005) Gene "volatility" is most unlikely to reveal adaptation. Mol Biol Evol 22: 807-809. 48. Zhang J (2005) On the evolution of codon volatility. Genetics 169: 495-501. 49. Martin G (2001) Counting: The art of enumerative combinatorics. New York: Springer-Verlag. 50. Batagelj V, Mrvar A (1998) PAJEK--Program for large network analysis. Connections 21: 47-57. From checker at panix.com Fri Sep 2 01:44:43 2005 From: checker at panix.com (Premise Checker) Date: Thu, 1 Sep 2005 21:44:43 -0400 (EDT) Subject: [Paleopsych] SW: On the Large Scale Structure of the Universe Message-ID: On the Large Scale Structure of the Universe http://scienceweek.com/2005/sw050819-5.htm The following points are made by David H. Weinberg (Science 2005 309:564): 1) In a large-scale view of the Universe, galaxies are the basic unit of structure. A typical bright galaxy may contain 100 billion stars and span tens of thousands of light-years, but the empty expanses between the galaxies are much larger still. Galaxies are not randomly distributed in space, but instead reside in groups and clusters, which are themselves arranged in an intricate lattice of filaments and walls, threaded by tunnels and pocked with bubbles. Two ambitious new surveys, the Two-Degree Field Galaxy Redshift Survey (2dFGRS) and the Sloan Digital Sky Survey (SDSS), have mapped the three-dimensional distribution of galaxies over an unprecedented range of scales [1,2]. Astronomers are using these maps to learn about conditions in the early Universe, the matter and energy contents of the cosmos, and the physics of galaxy formation.
2) Galaxies and large-scale structure form as a result of the gravitational amplification of tiny primordial fluctuations in the density of matter. The inflation hypothesis ascribes the origin of these fluctuations to quantum processes during a period of exponential expansion that occupied the first millionth-of-a-billionth-of-a-trillionth of a second of cosmic history. Experiments over the last decade have revealed the imprint of these fluctuations as part-in-100,000 intensity modulations of the cosmic microwave background (CMB), which records the small inhomogeneities present in the Universe half a million years after the big bang. Although the visible components of galaxies are made of "normal" baryonic matter (mostly hydrogen and helium), the gravitational forces that drive the growth of structure come mainly from dark matter, which is immune to electromagnetic interactions. 3) By combining precise, quantitative measurements of present-day galaxy clustering with CMB data and other cosmological observations, astronomers hope to test the inflation hypothesis, to pin down the physical mechanisms of inflation, to measure the amounts of baryonic and dark matter in the cosmos, and to probe the nature of the mysterious "dark energy" that has caused the expansion of the Universe to accelerate over the last 5 billion years. The 2dFGRS, completed in 2003, measured distances to 220,000 galaxies, and the SDSS is now 80% of the way to its goal of 800,000 galaxies. 4) The key challenge in interpreting the observed clustering is the uncertain relation between the distribution of galaxies and the underlying distribution of dark matter. If the galaxy maps are smoothed over tens of millions of lightyears, this relation is expected to be fairly simple: Variations in galaxy density are constant multiples of the variations in dark matter density. Quantitative analysis in this regime has focused on the spatial power spectrum, which characterizes the strength of clustering on different size scales [3,4]. The power spectrum describes the way that large, intermediate, and small structures -- like the mountain ranges, isolated peaks, and rolling hills of a landscape -- combine to produce the observed galaxy distribution. The shape of the dark matter power spectrum is a diagnostic of the inflation model, which predicts the input spectrum from the early Universe, and of the average dark matter density, which controls the subsequent gravitational growth. Recent analyses have also detected subtle modulations of the power spectrum caused by baryonic matter, which undergoes acoustic oscillations in the early universe because of its interaction with photons [4,5]. References (abridged): 1. M. Colless et al., Mon. Not. R. Astron. Soc. 328, 1039 (2001) 2. D. G. York et al., Astron. J.120, 1579 (2000) 3. M. Tegmark et al., Astrophys. J. 606, 702 (2004) 4. S. Cole et al., http://arxiv.org/abs/astro-ph/0501174 5. D.J. Eisenstein et al.http://arxiv.org/abs/astro-ph/0501171. Science http://www.sciencemag.org -------------------------------- Related Material: COSMOLOGY: ON THE FIRST GALAXIES The following points are made by Zoltan Haiman (Nature 2004 430:979): 1) Galaxies are thought to be surrounded by massive haloes of dark matter, each outweighing its galaxy by a factor of about eight. The visible part of a galaxy, occupying the inner 10% of the halo, consists of a mixture of stars and gas. Galaxies harbor a giant black hole at their centers, which in some cases is actively fuelled as it sucks in surrounding gas. 
In especially active galaxies, called quasars, the fuelling rate is so high that the radiation generated close to the black hole outshines the cumulative star-light from the entire galaxy. The sequence of cosmic events that leads to this configuration is still largely mysterious.(1) How does gas condense into the central regions of the dark-matter halo? At what stage of the gas condensation process do the stars and the giant black hole light up? 2) The formation of massive dark-matter haloes is dictated by gravity, and can be described by using ab initio calculations(2). As the Universe expanded from its dense beginning, tiny inhomogeneities in the distribution of dark matter were amplified through the effects of gravity. Regions of space that were slightly denser than average had a higher gravitational pull on their surroundings; eventually, these regions stopped following the expansion of the rest of the Universe, turned around and re-collapsed on themselves. The resulting dense knots of dark matter -- forming the intersections of a cosmic web of less-dense dark-matter filaments -- are believed to be the sites at which galaxies lit up. 3) Dark matter thus dominates the formation of a galaxy, at least initially, and determines the gross properties of the galaxy population, such as their abundance, size and spatial distribution. But it is the trace amount of gas (mostly hydrogen and helium), pulled with the dark matter into the collapsed haloes, that forms the visible parts of galaxies and determines their observable properties. In particular, to condense to the core of the dark halo, the gas must cool continuously so as to deflate the pressure acquired by its compression. A fraction of the gas (typically 10% by mass) eventually turns into stars, and a much smaller fraction (typically 0.1%) into the central massive black hole(3). 4) The composition of the gas inside the galaxy can be studied through the spectrum of radiation that it absorbs and emits. Primordial gas is essentially a pure mix of hydrogen and helium, but the spectra of all of the quasars discovered so far have shown the presence of various heavier elements (such as carbon, nitrogen, oxygen and iron). This indicates that the gas has been enriched by the nucleosynthetic yields from previous generations of stars. Even the most distant quasars, including those that existed about a billion years after the Big Bang (a mere 5% of the current age of the Universe), show a significant heavy-element content(4). This suggests that vigorous star-formation is a necessary condition for any quasar activity. On the other hand, star formation seems to be occurring on relatively small scales, close to the galactic center. A natural inference would then be the following sequence of events: the cosmic gas first contracts to the inner regions of the halo, and only then forms stars --but this is still before the formation (or at least activation) of any central quasar black hole. 5) Not necessarily so, according to Weidinger et al(1). They have detected the faint glow of hydrogen emission enveloping a distant quasar at a radius equivalent to about 100,000 light years --several times the size of the visible part of a typical galaxy. Such emission has a simple physical origin. The hydrogen atoms falling through the halo are ionized by the quasar's light, then recombine with electrons to become atoms again. 
Each recombination results in the emission of a so-called Lyman-alpha photon (a photon with energy equal to the difference between the ground and first excited states of a hydrogen atom). As a result, when viewed through a filter tuned to the Lyman-alpha frequency, a faint "fuzz" can be seen to surround quasars(5). This fuzz can serve as a diagnostic of whether or not a spatially extended distribution of infalling gas is present around the quasar. If most of the gas has already cooled and settled at the center of the halo, the extended fuzz would be absent.

References (abridged):
1. Weidinger, M., Mueller, P. & Fynbo, J. P. U. Nature 430, 999-1001 (2004)
2. Navarro, J. F., Frenk, C. S. & White, S. D. M. Astrophys. J. 462, 563-575 (1996)
3. Magorrian, J. et al. Astron. J. 115, 2285-2305 (1998)
4. Fan, X. et al. Astron. J. (in the press); preprint at http://arxiv.org/abs/astro-ph/0405138 (2004)
5. Rees, M. J. Mon. Not. R. Astron. Soc. 231, 91-95 (1988)

Nature http://www.nature.com/nature

--------------------------------
Related Material:

COSMOLOGY: ON THE FORMATION OF GALAXIES

The following points are made by Gregory D. Wirth (Nature 2004 430:149):

1) Over the past two decades, astrophysicists have been spectacularly successful in explaining the early evolution of the Universe. Existing theories can account well for the time span from the Big Bang nearly 14 billion years ago until the Universe began to cool and form the first large structures less than a million years later. But detailed explanations of how the original stew of elementary particles subsequently coalesced over time to form the stars and galaxies seen in the present-day Universe are still being refined. Glazebrook et al(1) and Cimatti et al(2) have recently discovered the most distant "old" galaxies yet, and the existence of these objects at such an early epoch in the history of the Universe seems inconsistent with the favored theory of how galaxies formed.
However, finding significant numbers of massive, evolved galaxies (which finished forming stars long ago) at distances that correspond to half the present age of the Universe would indicate that such galaxies formed much earlier than the leading theory predicts.(5)

References (abridged):
1. Glazebrook, K. et al. Nature 430, 181-184 (2004)
2. Cimatti, A. et al. Nature 430, 184-187 (2004)
3. Blumenthal, G. R. et al. Nature 311, 517-525 (1984)
4. Steidel, C. C., Adelberger, K. L., Giavalisco, M., Dickinson, M. & Pettini, M. Astrophys. J. 519, 1-17 (1999)
5. Dickinson, M., Papovich, C., Ferguson, H. C. & Budavári, T. Astrophys. J. 587, 25-40 (2003)

Nature http://www.nature.com/nature

From checker at panix.com Fri Sep 2 01:44:52 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 1 Sep 2005 21:44:52 -0400 (EDT)
Subject: [Paleopsych] SW: Complexity and Causality
Message-ID:

Theoretical Physics: Complexity and Causality
http://scienceweek.com/2005/sw050819-6.htm

The following points are made by George F. Ellis (Nature 2005 435:743):

1) The atomic theory of matter and the periodic table of elements allow us to understand the physical nature of material objects, including living beings. Quantum theory illuminates the physical basis of the periodic table and the nature of chemical bonding. Molecular biology shows how complex molecules underlie the development and functioning of living organisms. And neurophysics reveals the functioning of the brain.

2) In the hierarchy of complexity, each level links to the one above: chemistry links to biochemistry, to cell biology, physiology, psychology, to sociology, economics, and politics. Particle physics is the foundational subject underlying -- and so in some sense explaining -- all the others. In a reductionist world view, physics is all there is. The Cartesian picture of man as a machine seems to be vindicated.

3) But this view omits important aspects of the world that physics has yet to come to terms with. Our environment is dominated by objects that embody the outcomes of intentional design (buildings, books, computers, teaspoons). Today's physics has nothing to say about the intentionality that has resulted in the existence of such objects, even though this intentionality is clearly causally effective.

4) A simple statement of fact: there is no physics theory that explains the nature of, or even the existence of, football matches, teapots, or jumbo-jet aircraft. The human mind is physically based, but there is no hope whatever of predicting the behavior it controls from the underlying physical laws. Even if we had a satisfactory fundamental physics "theory of everything", this situation would remain unchanged: physics would still fail to explain the outcomes of human purpose, and so would provide an incomplete description of the real world around us.

5) Can we nevertheless claim that the underlying physics uniquely causally determines what happens, even if we cannot predict the outcome? To examine whether we can, contemplate what is required for this claim to be true within its proper cosmic context. The implication is that the particles existing when the cosmic background radiation was decoupling from matter, in the early Universe, were placed precisely so as to make it inevitable that 14 billion years later, human beings would exist, Charles Townes would conceive of the laser, and Edward Witten would develop string theory.
Is it plausible that quantum fluctuations in the inflationary era in the very early Universe -- the source of the perturbations at the time of decoupling -- implied the future inevitability of the Mona Lisa and Einstein's theory of relativity? Those fluctuations are supposed to have been random, which by definition means without purpose or meaning.[1,2]

References:
1. Ellis, G. F. R. Phys. Today (in the press).
2. Bishop, R. C. Phil. Sci. (in the press).

Nature http://www.nature.com/nature

--------------------------------
Related Material:

THEORETICAL BIOLOGY: ON SCALE AND COMPLEXITY

The following points are made by Neil D. Theise (Nature 2005 435:1165):

1) Complexity theory, which describes emergent self-organization of complex adaptive systems, has gained a prominent position in many sciences. One powerful aspect of emergent self-organization is that scale matters. What appears to be a dynamic, ever-changing organizational panoply at the scale of the interacting agents that comprise it looks to be a single, functional entity from a higher scale. Ant colonies are a good example: from afar, the colony appears to be a solid, shifting, dark mass against the earth. But up close, one can discern individual ants and describe the colony as the emergent self-organization of these scurrying individuals. Moving in still closer, the individual ants dissolve into myriad cells.

2) Cells fulfill all the criteria necessary to be considered agents within a complex system: they exist in great numbers; their interactions involve homeostatic, negative feedback loops; and they respond to local environmental cues with limited stochasticity ("quenched disorder"). Like any group of interacting individuals fulfilling these criteria, they self-organize without external planning. What emerges is the structure and function of our tissues, organs and bodies.

3) This view is in keeping with cell doctrine -- the fundamental paradigm of modern biology and medicine whereby cells are the fundamental building blocks of all living organisms. Before cell doctrine emerged, other possibilities were explored. The ancient Greeks debated whether the body's substance was an endlessly divisible fluid or a sum of ultimately indivisible subunits. But when the microscopes of Theodor Schwann (1810-1882) and Matthias Schleiden (1804-1881) revealed cell membranes, the debate was settled. The body's substance is not a fluid but a sum of indivisible, box-like cells: the magnificently successful cell doctrine was born.

4) But a complexity analysis presses for consideration of a level of observation at a lower scale. At the nanoscale, one might suggest that cells are not discrete objects; rather, they are dynamically shifting, adaptive systems of uncountable biomolecules. Do biomolecules fulfill the necessary criteria for agents forming complex systems? They obviously exist in sufficient quantities to generate emergent phenomena; they interact only on the local level, without monitoring the whole system; and many homeostatic feedback loops govern these local interactions. But do their interactions display quenched disorder; that is, are they somewhere between being completely random and rigidly determined? Analyses of individual interacting molecules, and the recognition that, at the nanoscale, quantum effects may have a measurable impact, suggest that the answer is yes.[1-3]

References:
1. Theise N. D. & d'Inverno, M. Blood Cells Mol. Dis. 32, 17-20 (2004)
2. Theise N. D. & Krause D. S. Leukemia 16, 542-548 (2002)
3. Kurakin A. Dev. Genes Evol. 215, 46-52 (2005)
Nature http://www.nature.com/nature

--------------------------------
Related Material:

PHYSICS AND COMPLEXITY

The following points are made by Gregoire Nicolis (citation below):

1) For the vast majority of scientists physics is a marvelous algorithm explaining natural phenomena in terms of the building blocks of the universe and their interactions. Planetary motion; the structure of genetic material, of molecules, atoms or nuclei; the diffraction pattern of a crystalline body; superconductivity; the explanation of the compressibility, elasticity, surface tension or thermal conductivity of a material -- these are only a few among the innumerable examples illustrating the immense success of this view, which presided over the most impressive breakthroughs that have so far marked the development of modern science since Newton.

2) Implicit in the classical view, according to which physical phenomena are reducible to a few fundamental interactions, is the idea that under well-defined conditions a system governed by a given set of laws will follow a unique course, and that a slight change in the causes will likewise produce a slight change in the effects. But, since the 1960s, an increasing amount of experimental data challenging this idea has become available, and this imposes a new attitude concerning the description of nature. Such ordinary systems as a layer of fluid or a mixture of chemical products can generate, under appropriate conditions, a multitude of self-organization phenomena on a macroscopic scale -- a scale orders of magnitude larger than the range of fundamental interactions -- in the form of spatial patterns or temporal rhythms.

3) States of matter capable of evolving (states for which order, complexity, regulation, information and other concepts usually absent from the vocabulary of the physicist become the natural mode of description) are, all of a sudden, emerging in the laboratory. These states suggest that the gap between "simple" and "complex", and between "disorder" and "order", is much narrower than previously thought. They also provide the natural archetypes for understanding a large body of phenomena in branches which traditionally were outside the realm of physics, such as turbulence, the circulation of the atmosphere and the oceans, plate tectonics, glaciations, and other forces that shape our natural environment; or, even, the emergence of replicating systems capable of storing and generating information, embryonic development, the electrical activity of the brain, or the behavior of populations in an ecosystem or in an economic environment.

Adapted from: Gregoire Nicolis: in: Paul Davies (ed.): The New Physics. Cambridge University Press 1989, p.316

From checker at panix.com Fri Sep 2 01:45:02 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 1 Sep 2005 21:45:02 -0400 (EDT)
Subject: [Paleopsych] NYT: In Chimpanzee DNA, Signs of Y Chromosome's Evolution
Message-ID:

In Chimpanzee DNA, Signs of Y Chromosome's Evolution
New York Times, 5.9.1
http://www.nytimes.com/2005/09/01/science/01chimp.html
By NICHOLAS WADE

Scientists have decoded the chimp genome and compared it with that of humans, a major step toward defining what makes people human and developing a deep insight into the evolution of human sexual behavior. The comparison pinpoints the genetic differences that have arisen in the two species since they split from a common ancestor some six million years ago.
The realization that chimpanzees hold a trove of information about human evolution and nature comes at a time when they and other great apes are under harsh pressures in their native habitat. Their populations are dwindling fast as forests are cut down and people shoot them for meat. They may soon disappear from the wild altogether, primatologists fear, except in the few sanctuaries that have been established. Chimpanzees and people possess almost identical sets of genes, so the genes that have changed down the human lineage should hold the key to what makes people human. Biologists suspect that only a handful of genes are responsible for the major changes that reshaped the apelike ancestor of both species into a human and that these genes should be identifiable by having evolved at a particularly rapid rate. The comparison of the human and chimp genomes, reported in today's issue of Nature, takes a first step in this direction but has not yet tracked down the critical handful of genes responsible for human evolution. One problem is the vast number of differences - some 40 million - in the sequence of DNA units in the chimp and human genomes. Most are caused by a random process known as genetic drift and have little effect. For now, their large numbers make it difficult for scientists to find the changes caused by natural selection. But another aspect of the comparison has yielded insights into a different question, the evolution of the human Y chromosome. The new finding implies that humans have led sexually virtuous lives for the last six million years, at least in comparison with the flamboyant promiscuity of chimpanzees. Some 300 million years ago, the Y chromosome used to carry the same 1,000 or so genes as its partner, the X chromosome. But because the Y cannot exchange DNA with the X and update its genes, in humans it has lost all but 16 of its X-related genes through mutation or failure to stay relevant to their owner's survival. However, the Y has gained some genes from other chromosomes because it is a safe haven for genes that benefit only men, since it never enters a woman's body. These added genes, not surprisingly, all have functions involved in making sperm. The scientific world's leading student of the Y chromosome, David Page of the Whitehead Institute in Cambridge, Mass., has been seeking to understand whether the Y will lose yet more genes and lapse into terminal decay, taking men with it. The idea of the Y's extinction "was so delicious from the perspective of gender politics," Dr. Page said. "But many of my colleagues became confused with this blending of gender politics with scientific predictions." Two years ago, he discovered a surprising mechanism that protects the sperm-making genes. Those genes exist in pairs, arranged so that when the DNA of the chromosome is folded back on itself, the two copies of the gene are aligned. If one copy of the gene has been hit by a mutation, the cell can repair it by correcting the mismatch in DNA units. The 16 X-related genes are present in only single copies. Dr. Page and his colleagues thought the chimpanzee genome might show how they were protected. To their surprise, they report in Nature, the protection was not there. The chimp Y chromosome has lost the use of 5 of its 16 X-related genes. The genes are there, but have been inactivated by mutation. The explanation, in his view, lies in the chimpanzee's high-spirited sexual behavior. 
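A toy sketch of the palindrome repair mechanism described above (my illustration, with made-up sequences, not Dr. Page's actual analysis): the second copy of a sperm-making gene is the reverse complement of the first, so folding the chromosome back on itself pairs the two copies base for base, and an intact copy can serve as the template for repairing a mutated one.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[b] for b in reversed(seq))

# Hypothetical toy gene and its inverted partner copy on the same chromosome.
gene = "ATGGCCTTAGCA"
partner = reverse_complement(gene)          # the intact second copy of the pair

# Suppose the first copy picks up a point mutation.
mutated = gene[:5] + "G" + gene[6:]

# Folding the palindrome aligns the copies; gene conversion overwrites the
# damaged copy with the sequence implied by the intact partner.
repaired = reverse_complement(partner)
print(mutated == gene, repaired == gene)    # False True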
Female chimps mate with all males around, so as to make each refrain from killing a child that might be his. The alpha male nonetheless scores most of the paternities, according to DNA tests. This must be because of sperm competition, primatologists believe - the alpha male produces more and better sperm, which outcompete those of rival males. This mating system puts such intense pressure on the sperm-making genes that any improved version will be favored by natural selection. All the other genes will be dragged along with it, Dr. Page believes, even if an X-related gene has been inactivated. If chimps have lost five of their X-related genes in the last six million years because of sperm competition, and humans have lost none, humans presumably had a much less promiscuous mating system. But experts who study fossil human remains believe that the human mating system of long-term bonds between a man and woman evolved only some 1.7 million years ago. Males in the human lineage became much smaller at this time, a sign of reduced competition. The new result implies that even before that time, during the first four million years after the chimp-human split, the human mating system did not rely on sperm competition. Dr. Page said his finding did not reach to the nature of the joint chimp-human ancestor, but that "it's a reasonable inference" that the ancestor might have been gorillalike rather than chimplike, as supposed by some primatologists. The gorilla mating system has no sperm competition because the silverback maintains exclusive access to his harem. Frans B. M. de Waal of the Yerkes National Primate Research Center in Atlanta said he agreed with fossil experts that the human pair bonding system probably evolved 1.7 million years ago but that the joint ancestor could have resembled a chimp, a bonobo, a gorilla, or something else entirely. The scientists who have compared the whole genomes of the two species say they have found 35 million sites on the aligned genomes where there are different DNA units, and another five million where units have been added or deleted. Each genome is about three billion units in length. The chimp genome was completed in draft form in December 2003 by the Broad Institute in Cambridge and Washington University in St. Louis. Statistical tests for accelerated evolution are not yet powerful enough to identify the major genes that have shaped humans. "We knew that this was only a beginning, but from a general standpoint we have captured the vast majority of the differences between human and chimps," said Robert H. Waterston of the University of Washington, Seattle, the senior author of the report. The genome of a third primate, the orangutan, is now in progress and will help identify the genes special to human evolution, he said. At the level of the whole animal, primatologists have uncovered copious similarities between the social behavior of chimpanzees, bonobos and humans, some of which may eventually be linked to genes. But this rich vein of discovery may be choked off if the great apes can no longer be studied in the wild. "The situation is very bad, and our feeling is that by 2040 most of the habitat will be gone, except for those little regions we have set aside," Dr. de Waal said. 
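A quick sanity check on the figures quoted earlier in the article (my arithmetic, not the Times's): 35 million single-unit differences and five million insertion or deletion sites, measured against a genome of roughly three billion units, correspond to a little over one percent substitution divergence and a much smaller fraction of indel sites.

genome_length = 3_000_000_000          # "about three billion units"
substitution_sites = 35_000_000        # aligned sites with different DNA units
indel_sites = 5_000_000                # sites where units were added or deleted

print(f"substitutions: {substitution_sites / genome_length:.2%}")   # ~1.17%
print(f"indel sites:   {indel_sites / genome_length:.2%}")          # ~0.17%

Note that the indel figure counts sites rather than bases, so the total amount of sequence caught up in insertions and deletions is larger than the 0.17% suggests.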
From checker at panix.com Fri Sep 2 01:45:13 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 1 Sep 2005 21:45:13 -0400 (EDT)
Subject: [Paleopsych] Cape Times: Scientists show way to non-addictive drugs
Message-ID:

Scientists show way to non-addictive drugs
http://www.iol.co.za/index.php?set_id=1&click_id=79&art_id=vn20050816070856987C214316&newslett=1&em=17706a1a20050816ah
5.8.16

[Recall Mr. Mencken's definition of a puritan as you read this.]

It is the news that clubbers have been waiting for. Scientists are working on a range of recreational drugs that can produce similar effects to alcohol but with fewer of the side-effects. Experts looked 20 years into the future to discover what kind of drugs we would be taking, and came up with a surprising range of findings that open up the prospect of Sunday mornings without a thumping hangover or the "parrot's cage" mouth. They have also been able to separate the effect of one psychoactive substance from its addictive properties, leading an expert panel to advise British government ministers that "this could pave the way to non-addictive recreational drugs".

One of the new substances has even been found to reduce the side effects of recreational drugs. "Such compounds might allow users to shape their drug experience," said the panel headed by Sir David King, the government's chief scientific adviser. His report to the Trade and Industry secretary, Alan Johnson, raises the possibility that, in a generation, Britain's dinner parties could become more like Woody Allen's "orb" scene in the futuristic film Sleeper, where guests get high by rubbing the orb instead of inhaling a joint.

The report said: "There are a number of new and developing technologies that could be used to deliver drugs in new ways. Examples include patches, vaporisers, depot injection and direct neural stimulation... this may encourage the development of technology for the slower release of recreational psychoactive substances, which could reduce the risk of addiction."

Some drugs developed to tackle health problems are capable of being used for improving the performance of the brain. Modafinil, which was introduced to treat narcolepsy, can keep normal people awake for three days, says the report. Other drugs could be used to stop alcohol triggering a need for a cigarette. "Drinking with friends might no longer create a trigger for an individual to smoke tobacco," the panel said.

From checker at panix.com Fri Sep 2 01:45:24 2005
From: checker at panix.com (Premise Checker)
Date: Thu, 1 Sep 2005 21:45:24 -0400 (EDT)
Subject: [Paleopsych] NYT Op-Ed: Daniel Dennett: Show Me the Science
Message-ID:

Daniel Dennett: Show Me the Science
http://www.nytimes.com/2005/08/28/opinion/28dennett.html

By DANIEL C. DENNETT
Blue Hill, Me.

PRESIDENT BUSH, announcing this month that he was in favor of teaching about "intelligent design" in the schools, said, "I think that part of education is to expose people to different schools of thought." A couple of weeks later, Senator Bill Frist of Tennessee, the Republican leader, made the same point. Teaching both intelligent design and evolution "doesn't force any particular theory on anyone," Mr. Frist said. "I think in a pluralistic society that is the fairest way to go about education and training people for the future."

Is "intelligent design" a legitimate school of scientific thought? Is there something to it, or have these people been taken in by one of the most ingenious hoaxes in the history of science?
Wouldn't such a hoax be impossible? No. Here's how it has been done.

First, imagine how easy it would be for a determined band of naysayers to shake the world's confidence in quantum physics - how weird it is! - or Einsteinian relativity. In spite of a century of instruction and popularization by physicists, few people ever really get their heads around the concepts involved. Most people eventually cobble together a justification for accepting the assurances of the experts: "Well, they pretty much agree with one another, and they claim that it is their understanding of these strange topics that allows them to harness atomic energy, and to make transistors and lasers, which certainly do work..."

Fortunately for physicists, there is no powerful motivation for such a band of mischief-makers to form. They don't have to spend much time persuading people that quantum physics and Einsteinian relativity really have been established beyond all reasonable doubt.

With evolution, however, it is different. The fundamental scientific idea of evolution by natural selection is not just mind-boggling; natural selection, by executing God's traditional task of designing and creating all creatures great and small, also seems to deny one of the best reasons we have for believing in God. So there is plenty of motivation for resisting the assurances of the biologists. Nobody is immune to wishful thinking. It takes scientific discipline to protect ourselves from our own credulity, but we've also found ingenious ways to fool ourselves and others. Some of the methods used to exploit these urges are easy to analyze; others take a little more unpacking.

A creationist pamphlet sent to me some years ago had an amusing page in it, purporting to be part of a simple questionnaire:

Test Two

Do you know of any building that didn't have a builder? [YES] [NO]
Do you know of any painting that didn't have a painter? [YES] [NO]
Do you know of any car that didn't have a maker? [YES] [NO]
If you answered YES for any of the above, give details:

Take that, you Darwinians!

The presumed embarrassment of the test-taker when faced with this task perfectly expresses the incredulity many people feel when they confront Darwin's great idea. It seems obvious, doesn't it, that there couldn't be any designs without designers, any such creations without a creator. Well, yes - until you look at what contemporary biology has demonstrated beyond all reasonable doubt: that natural selection - the process in which reproducing entities must compete for finite resources and thereby engage in a tournament of blind trial and error from which improvements automatically emerge - has the power to generate breathtakingly ingenious designs.

Take the development of the eye, which has been one of the favorite challenges of creationists. How on earth, they ask, could that engineering marvel be produced by a series of small, unplanned steps? Only an intelligent designer could have created such a brilliant arrangement of a shape-shifting lens, an aperture-adjusting iris, a light-sensitive image surface of exquisite sensitivity, all housed in a sphere that can shift its aim in a hundredth of a second and send megabytes of information to the visual cortex every second for years on end.
But as we learn more and more about the history of the genes involved, and how they work - all the way back to their predecessor genes in the sightless bacteria from which multicelled animals evolved more than a half-billion years ago - we can begin to tell the story of how photosensitive spots gradually turned into light-sensitive craters that could detect the rough direction from which light came, and then gradually acquired their lenses, improving their information-gathering capacities all the while. We can't yet say what all the details of this process were, but real eyes representative of all the intermediate stages can be found, dotted around the animal kingdom, and we have detailed computer models to demonstrate that the creative process works just as the theory says. All it takes is a rare accident that gives one lucky animal a mutation that improves its vision over that of its siblings; if this helps it have more offspring than its rivals, this gives evolution an opportunity to raise the bar and ratchet up the design of the eye by one mindless step. And since these lucky improvements accumulate - this was Darwin's insight - eyes can automatically get better and better and better, without any intelligent designer. Brilliant as the design of the eye is, it betrays its origin with a tell-tale flaw: the retina is inside out. The nerve fibers that carry the signals from the eye's rods and cones (which sense light and color) lie on top of them, and have to plunge through a large hole in the retina to get to the brain, creating the blind spot. No intelligent designer would put such a clumsy arrangement in a camcorder, and this is just one of hundreds of accidents frozen in evolutionary history that confirm the mindlessness of the historical process. If you still find Test Two compelling, a sort of cognitive illusion that you can feel even as you discount it, you are like just about everybody else in the world; the idea that natural selection has the power to generate such sophisticated designs is deeply counterintuitive. Francis Crick, one of the discoverers of DNA, once jokingly credited his colleague Leslie Orgel with "Orgel's Second Rule": Evolution is cleverer than you are. Evolutionary biologists are often startled by the power of natural selection to "discover" an "ingenious" solution to a design problem posed in the lab. This observation lets us address a slightly more sophisticated version of the cognitive illusion presented by Test Two. When evolutionists like Crick marvel at the cleverness of the process of natural selection they are not acknowledging intelligent design. The designs found in nature are nothing short of brilliant, but the process of design that generates them is utterly lacking in intelligence of its own. Intelligent design advocates, however, exploit the ambiguity between process and product that is built into the word "design." For them, the presence of a finished product (a fully evolved eye, for instance) is evidence of an intelligent design process. But this tempting conclusion is just what evolutionary biology has shown to be mistaken. Yes, eyes are for seeing, but these and all the other purposes in the natural world can be generated by processes that are themselves without purposes and without intelligence. This is hard to understand, but so is the idea that colored objects in the world are composed of atoms that are not themselves colored, and that heat is not made of tiny hot things. 
The focus on intelligent design has, paradoxically, obscured something else: genuine scientific controversies about evolution that abound. In just about every field there are challenges to one established theory or another. The legitimate way to stir up such a storm is to come up with an alternative theory that makes a prediction that is crisply denied by the reigning theory - but that turns out to be true, or that explains something that has been baffling defenders of the status quo, or that unifies two distant theories at the cost of some element of the currently accepted view. To date, the proponents of intelligent design have not produced anything like that. No experiments with results that challenge any mainstream biological understanding. No observations from the fossil record or genomics or biogeography or comparative anatomy that undermine standard evolutionary thinking. Instead, the proponents of intelligent design use a ploy that works something like this. First you misuse or misdescribe some scientist's work. Then you get an angry rebuttal. Then, instead of dealing forthrightly with the charges leveled, you cite the rebuttal as evidence that there is a "controversy" to teach. Note that the trick is content-free. You can use it on any topic. "Smith's work in geology supports my argument that the earth is flat," you say, misrepresenting Smith's work. When Smith responds with a denunciation of your misuse of her work, you respond, saying something like: "See what a controversy we have here? Professor Smith and I are locked in a titanic scientific debate. We should teach the controversy in the classrooms." And here is the delicious part: you can often exploit the very technicality of the issues to your own advantage, counting on most of us to miss the point in all the difficult details. William Dembski, one of the most vocal supporters of intelligent design, notes that he provoked Thomas Schneider, a biologist, into a response that Dr. Dembski characterizes as "some hair-splitting that could only look ridiculous to outsider observers." What looks to scientists - and is - a knockout objection by Dr. Schneider is portrayed to most everyone else as ridiculous hair-splitting. In short, no science. Indeed, no intelligent design hypothesis has even been ventured as a rival explanation of any biological phenomenon. This might seem surprising to people who think that intelligent design competes directly with the hypothesis of non-intelligent design by natural selection. But saying, as intelligent design proponents do, "You haven't explained everything yet," is not a competing hypothesis. Evolutionary biology certainly hasn't explained everything that perplexes biologists. But intelligent design hasn't yet tried to explain anything. To formulate a competing hypothesis, you have to get down in the trenches and offer details that have testable implications. So far, intelligent design proponents have conveniently sidestepped that requirement, claiming that they have no specifics in mind about who or what the intelligent designer might be. 
To see this shortcoming in relief, consider an imaginary hypothesis of intelligent design that could explain the emergence of human beings on this planet: About six million years ago, intelligent genetic engineers from another galaxy visited Earth and decided that it would be a more interesting planet if there was a language-using, religion-forming species on it, so they sequestered some primates and genetically re-engineered them to give them the language instinct, and enlarged frontal lobes for planning and reflection. It worked. If some version of this hypothesis were true, it could explain how and why human beings differ from their nearest relatives, and it would disconfirm the competing evolutionary hypotheses that are being pursued. We'd still have the problem of how these intelligent genetic engineers came to exist on their home planet, but we can safely ignore that complication for the time being, since there is not the slightest shred of evidence in favor of this hypothesis. But here is something the intelligent design community is reluctant to discuss: no other intelligent-design hypothesis has anything more going for it. In fact, my farfetched hypothesis has the advantage of being testable in principle: we could compare the human and chimpanzee genomes, looking for unmistakable signs of tampering by these genetic engineers from another galaxy. Finding some sort of user's manual neatly embedded in the apparently functionless "junk DNA" that makes up most of the human genome would be a Nobel Prize-winning coup for the intelligent design gang, but if they are looking at all, they haven't come up with anything to report. It's worth pointing out that there are plenty of substantive scientific controversies in biology that are not yet in the textbooks or the classrooms. The scientific participants in these arguments vie for acceptance among the relevant expert communities in peer-reviewed journals, and the writers and editors of textbooks grapple with judgments about which findings have risen to the level of acceptance - not yet truth - to make them worth serious consideration by undergraduates and high school students. SO get in line, intelligent designers. Get in line behind the hypothesis that life started on Mars and was blown here by a cosmic impact. Get in line behind the aquatic ape hypothesis, the gestural origin of language hypothesis and the theory that singing came before language, to mention just a few of the enticing hypotheses that are actively defended but still insufficiently supported by hard facts. The Discovery Institute, the conservative organization that has helped to put intelligent design on the map, complains that its members face hostility from the established scientific journals. But establishment hostility is not the real hurdle to intelligent design. If intelligent design were a scientific idea whose time had come, young scientists would be dashing around their labs, vying to win the Nobel Prizes that surely are in store for anybody who can overturn any significant proposition of contemporary evolutionary biology. Remember cold fusion? The establishment was incredibly hostile to that hypothesis, but scientists around the world rushed to their labs in the effort to explore the idea, in hopes of sharing in the glory if it turned out to be true. 
Instead of spending more than $1 million a year on publishing books and articles for non-scientists and on other public relations efforts, the Discovery Institute should finance its own peer-reviewed electronic journal. This way, the organization could live up to its self-professed image: the doughty defenders of brave iconoclasts bucking the establishment. For now, though, the theory they are promoting is exactly what George Gilder, a long-time affiliate of the Discovery Institute, has said it is: "Intelligent design itself does not have any content."

Since there is no content, there is no "controversy" to teach about in biology class. But here is a good topic for a high school course on current events and politics: Is intelligent design a hoax? And if so, how was it perpetrated?

Daniel C. Dennett, a professor of philosophy at Tufts University, is the author of "Freedom Evolves" and "Darwin's Dangerous Idea."

From checker at panix.com Sat Sep 3 01:31:47 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 2 Sep 2005 21:31:47 -0400 (EDT)
Subject: [Paleopsych] Nature Neuroscience: Book Review: The Ethical Brain
Message-ID:

Book Review: The Ethical Brain
Nature Neuroscience 8, 1127 (2005) doi:10.1038/nn0905-1127
http://www.nature.com/neuro/journal/v8/n9/full/nn0905-1127.html

Reviewed by: Charles Jennings
Charles Jennings is at the Harvard Stem Cell Institute, Harvard University, 42 Church Street, Cambridge, Massachusetts 02138, USA. charles_jennings at harvard.edu

Michael Gazzaniga is a leader in the field of cognitive neuroscience, and since 2002 he has been a member of President Bush's Council on Bioethics. In a group dominated by conservatives, Gazzaniga is sometimes a dissenting voice, for example, in his support for embryonic stem cell research. His work on split-brain patients has profound implications for understanding the neural basis of self, and his presence on the council has brought a neurobiological perspective to many current bioethical controversies. The Ethical Brain is a wide-ranging, yet short and readable, summary of his views.

Gazzaniga is a technological optimist, with little patience for the vague 'slippery slope' arguments that are often invoked by those who worry about where biotechnology is leading us. A deeper concern -- articulated, for example, by fellow council member Michael Sandel -- is that the desire to manipulate human nature is a form of hubris that threatens to undermine our appreciation for life's gifts. Gazzaniga, however, will have none of this. He welcomes the prospect of genetic enhancement, prolongation of lifespan, memory pills and so forth, arguing that humanity's innate moral sense will always guide us to use our powers wisely.

I would like to think he is right, but I did not always find his arguments persuasive. A case in point is his discussion of sex selection. In some Asian countries, notably China, a cultural preference for boys, combined with easy access to methods for sex determination and selective abortion, has led to a large distortion of birth ratios. Gazzaniga acknowledges the potential concern, but because some US fertility clinics are now starting to discourage sex selection, he concludes that humans can be trusted to do the right thing in the long run. Maybe so, but I am less sanguine than Gazzaniga about this massive biotechnological experiment, and about the world's largest country soon having 15 million young men unable to find marriage partners.
Gazzaniga's faith in human destiny is based in part on his belief in a biologically based universal morality, and his discussion of this idea is one of the most interesting aspects of the book. He argues that our sense of right and wrong has been shaped by evolution, and that there consequently exists a core of moral instincts that are shared across all societies. Religious traditions, in his view, represent attempts to explain and validate these biological instincts. Our brains have a strong tendency to form beliefs as a way of making sense of the world, and as Gazzaniga's own work has emphasized, these are often confabulated on the basis of limited evidence, yet refractory to change once formed. As an explanation of religious faith, this viewpoint is surely anathema to many conservatives, but Gazzaniga (who was raised Catholic) shows no animosity toward religion, which he regards as a natural aspect of human biology. Gazzaniga hopes that a deeper understanding of our shared moral instincts and their biological basis could help to overcome ideological conflicts between different belief systems. This is an appealing idea ('biology good, ideology bad'), even though only a chronic optimist could think that universal education in cognitive neuroscience will lead to world peace. A skeptic might counter that our brains come prewired not only for moral reasoning but also for prejudice, tribalism, warfare?less attractive but no less universal aspects of human societies. Moreover, the scientific evidence for a moral instinct is based largely on simple test scenarios in which decisions have immediate and visible consequences for another individual. Although people tend to show similar responses on such tests, most real-world dilemmas are not like this. It seems unlikely that divisive societal debates on questions such as abortion or capital punishment could ever be resolved by an appeal to biology. Perhaps the most pressing issue in neuroethics is how (if at all) neuroscience should inform the justice system, and Gazzaniga devotes several chapters to this topic. The central problem is this: if decisions are made by the brain, a physical object that obeys physical laws, in what sense can they be considered 'free'? But if people are constrained by their brains, how can we hold them responsible for their actions? This quickly leads to problems, of course; if defendants could be acquitted simply by arguing "my brain made me do it," the entire justice system would collapse. Gazzaniga's proposed solution is to argue that responsibility is "a social construct that exists in the rules of a society [but not] in the neuronal structures of the brain." Yet I did not find this argument convincing. The justice system, held together by moral rules and concepts of accountability, is an emergent property of large numbers of brains. It may be dauntingly complex, but that does not put it beyond the realm of scientific study. Indeed, social neuroscience is an emerging field of research, and neuroimagers can now examine the mechanisms underlying not only people's own moral decisions, but also their perceptions about the accountability of other individuals. Gazzaniga is understandably concerned about neuroscience being drawn into the courtroom, but he acknowledges that it is inevitable. 
The challenge for neuroethicists, then, will be to help lawyers sort the wheat from the chaff, to recognize valid arguments for exculpation or leniency, while rejecting the abuses that will surely become increasingly tempting to defense counsels as brain science continues to advance.

The Ethical Brain is not the last word on these difficult issues, but it does provide a clear and useful introduction to the field. Gazzaniga's fans include Tom Wolfe, who gives the book a cameo role in his novel I Am Charlotte Simmons, where it appears as recommended reading for a college course. In this case life would do well to imitate art -- The Ethical Brain would be an excellent introduction for anyone who is interested in learning more about 'the next big thing' in bioethics.

From checker at panix.com Sat Sep 3 01:31:54 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 2 Sep 2005 21:31:54 -0400 (EDT)
Subject: [Paleopsych] Live Science: Why great minds can't grasp consciousness
Message-ID:

Why great minds can't grasp consciousness
Source: LiveScience.com/MSNBC
http://msnbc.msn.com/id/8873364/
Subject no longer just for philosophers and mystics, but remains a mystery
Aug. 8, 2005
By Ker Than

At a physics meeting last October, Nobel laureate David Gross outlined 25 questions in science that he thought physics might help answer. Nestled among queries about black holes and the nature of dark matter and dark energy were questions that wandered beyond the traditional bounds of physics to venture into areas typically associated with the life sciences.

One of Gross's questions involved human consciousness. He wondered whether scientists would ever be able to measure the onset of consciousness in infants and speculated that consciousness might be similar to what physicists call a "phase transition," an abrupt and sudden large-scale transformation resulting from several microscopic changes. The emergence of superconductivity in certain metals when cooled below a critical temperature is an example of a phase transition.
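As an illustrative aside (mine, not part of the article, which uses superconductivity as its example): the simplest textbook phase transition is a mean-field magnet, in which each microscopic spin responds only to the average of all the others, yet the whole system switches abruptly from disordered to ordered at a critical temperature. A minimal sketch:

import numpy as np

# Toy mean-field magnet: the average magnetization m must satisfy
# m = tanh(m / T). Iterating the map from an ordered start finds the
# stable solution at each temperature T (in units of the critical value).
def magnetization(T, iterations=200):
    m = 1.0                    # start fully ordered
    for _ in range(iterations):
        m = np.tanh(m / T)     # each spin feels only the average of the rest
    return m

for T in (0.5, 0.8, 0.95, 1.0, 1.05, 1.2, 1.5):
    print(f"T = {T:.2f}  magnetization = {magnetization(T):.3f}")

Below the critical temperature (T = 1 in these units) the spins lock together and the magnetization stays large; just above it, the same microscopic rule gives essentially zero. That abrupt, collective change arising from many small parts is the sense in which Gross invokes the analogy.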
In a recent email interview, Gross said he figures there are probably many different levels of consciousness, but he believes that language is a crucial factor distinguishing the human variety from that of animals. Gross isn't the only physicist with ideas about consciousness.

Beyond the mystics

Roger Penrose, a mathematical physicist at Oxford University, believes that if a "theory of everything" is ever developed in physics to explain all the known phenomena in the universe, it should at least partially account for consciousness. Penrose also believes that quantum mechanics, the rules governing the physical world at the subatomic level, might play an important role in consciousness.

It wasn't that long ago that the study of consciousness was considered to be too abstract, too subjective or too difficult to study scientifically. But in recent years, it has emerged as one of the hottest new fields in biology, similar to string theory in physics or the search for extraterrestrial life in astronomy. No longer the sole purview of philosophers and mystics, consciousness is now attracting the attention of scientists from across a variety of different fields, each, it seems, with their own theories about what consciousness is and how it arises from the brain.

In many religions, consciousness is closely tied to the ancient notion of the soul, the idea that in each of us, there exists an immaterial essence that survives death and perhaps even predates birth. It was believed that the soul was what allowed us to think and feel, remember and reason. Our personality, our individuality and our humanity were all believed to originate from the soul.

Nowadays, these things are generally attributed to physical processes in the brain, but exactly how chemical and electrical signals between trillions of brain cells called neurons are transformed into thoughts, emotions and a sense of self is still unknown. "Almost everyone agrees that there will be very strong correlations between what's in the brain and consciousness," says David Chalmers, a philosophy professor and Director of the Center for Consciousness at the Australian National University. "The question is what kind of explanation that will give you. We want more than correlation, we want explanation -- how and why do brain processes give rise to consciousness? That's the big mystery."

Just accept it

Chalmers is best known for distinguishing between the 'easy' problems of consciousness and the 'hard' problem. The easy problems are those that deal with functions and behaviors associated with consciousness and include questions such as these: How does perception occur? How does the brain bind different kinds of sensory information together to produce the illusion of a seamless experience? "Those are what I call the easy problems, not because they're trivial, but because they fall within the standard methods of the cognitive sciences," Chalmers says.

The hard problem for Chalmers is that of subjective experience. "You have a different kind of experience -- a different quality of experience -- when you see red, when you see green, when you hear middle C, when you taste chocolate," Chalmers told LiveScience. "Whenever you're conscious, whenever you have a subjective experience, it feels like something."

According to Chalmers, the subjective nature of consciousness prevents it from being explained in terms of simpler components, a method used to great success in other areas of science. He believes that unlike most of the physical world, which can be broken down into individual atoms, or organisms, which can be understood in terms of cells, consciousness is an irreducible aspect of the universe, like space and time and mass. "Those things in a way didn't need to evolve," said Chalmers. "They were part of the fundamental furniture of the world all along."

Instead of trying to reduce consciousness to something else, Chalmers believes consciousness should simply be taken for granted, the way that space and time and mass are in physics. According to this view, a theory of consciousness would not explain what consciousness is or how it arose; instead, it would try to explain the relationship between consciousness and everything else in the world. Not everyone is enthusiastic about this idea, however.

'Not very helpful'

"It's not very helpful," said Susan Greenfield, a professor of pharmacology at Oxford University. "You can't do very much with it," Greenfield points out. "It's the last resort, because what can you possibly do with that idea? You can't prove it or disprove it, and you can't test it. It doesn't offer an explanation, or any enlightenment, or any answers about why people feel the way they feel." Greenfield's own theory of consciousness is influenced by her experience working with drugs and mental diseases.
Unlike some other scientists -- most notably the late Francis Crick, co-discoverer of the structure of DNA, and his colleague Christof Koch, a professor of computation and neural systems at Caltech -- who believed that different aspects of consciousness like visual awareness are encoded by specific neurons, Greenfield thinks that consciousness involves large groups of nonspecialized neurons scattered throughout the brain. Important for Greenfield's theory is a distinction between 'consciousness' and 'mind,' terms that she says many of her colleagues use interchangeably, but which she believes are two entirely different concepts. "You talk about losing your mind or blowing your mind or being out of your mind, but those things don't necessarily entail a loss of consciousness," Greenfield said in a telephone interview. "Similarly, when you lose your consciousness, when you go to sleep at night or when you're anesthetized, you don't really think that you're really going to be losing your mind." Like the wetness of water According to Greenfield, the mind is made up of the physical connections between neurons. These connections evolve slowly and are influenced by our past experiences, and therefore everyone's brain is unique. But whereas the mind is rooted in the physical connections between neurons, Greenfield believes that consciousness is an emergent property of the brain, similar to the 'wetness' of water or the 'transparency' of glass, both of which are properties that are the result of -- that is, they emerge from -- the actions of individual molecules. For Greenfield, a conscious experience occurs when a stimulus -- either external, like a sensation, or internal, like a thought or a memory -- triggers a chain reaction within the brain. As in an earthquake, each conscious experience has an epicenter, and ripples from that epicenter travel across the brain, recruiting neurons as they go. Mind and consciousness are connected in Greenfield's theory because the strength of a conscious experience is determined by the mind and the strength of its existing neuronal connections -- connections forged from past experiences. Part of the mystery and excitement about consciousness is that scientists don't know what form the final answer will take. "If I said to you I'd solved the hard problem, you wouldn't be able to guess whether it would be a formula, a model, a sensation, or a drug," said Greenfield. "What would I be giving you?" From checker at panix.com Sat Sep 3 01:32:02 2005 From: checker at panix.com (Premise Checker) Date: Fri, 2 Sep 2005 21:32:02 -0400 (EDT) Subject: [Paleopsych] Wired: Whew! Your DNA Isn't Your Destiny Message-ID: Whew! Your DNA Isn't Your Destiny By Brandon Keim http://www.wired.com/news/medtech/0,1286,68468,00.html 02:00 AM Aug. 16, 2005 PT The more we learn about the human genome, the less DNA looks like destiny. As scientists discover more about the "epigenome," a layer of biochemical reactions that turns genes on and off, they're finding that it plays a big part in health and heredity. By mapping the epigenome and linking it with genomic and health information, scientists believe they can develop better ways to predict, diagnose and treat disease. "A new world is opening up, one that is so much more complex than the genomic world," said Moshe Szyf, an epigeneticist at Canada's McGill University. The epigenome can change according to an individual's environment, and is passed from generation to generation.
It's part of the reason why "identical" twins can be so different, and it's also why not only the children but the grandchildren of women who suffered malnutrition during pregnancy are likely to weigh less at birth. "Now we're even talking about how to see if socioeconomic status has an impact on the epigenome," Szyf said. Researchers have already linked some human cancers with epigenetic changes. In a few years, scientists hope that doctors, by looking at an individual's epigenome, will be able to detect cancer early and determine what treatments to use. The same might be done for other diseases -- and as the effect of the environment on epigenetic change is better understood, people will be able to address the environmental aspects of health. The field, though still embryonic, won't be that way for long. "Epigenetics is one of the fastest-moving areas of science, period," said Melanie Ehrlich, a Tulane University epigeneticist whose lab linked human cancer to epigenomic changes in 1983. Back then, Ehrlich's discipline was largely ignored. Walter Gilbert, a Nobel Prize-winning biologist, famously said that since fruit flies had no epigenomes, people could hardly need them. But in the past two decades -- and especially the last couple of years -- studies have linked the epigenome to disease and development, showing that it changes in response to the environment and can be passed from parents to children. While predicted treatments run from diabetes and heart disease to substance abuse and schizophrenia, the most promising applications are in cancer. Research shows that some cancers follow from the deactivation of tumor-suppression genes. Last year, the Food and Drug Administration approved the first epigenetic drug, azacitidine, which treats a form of leukemia by reactivating those genes. However, using drugs to target specific parts of the epigenome, which runs in tandem with our 6 billion base pairs of DNA, is extremely complicated. Ehrlich believes epigenetic researchers are better off trying to predict and diagnose cancer and other diseases. To do that, scientists need a large-scale map that shows how epigenetic patterns relate to disease, said Steve Baylin, an epigeneticist at Johns Hopkins. "If we knew those patterns," Baylin said, "you could predict which individuals are more at risk -- change their diets, change their exposures, use prevention. We could detect disease early and predict how people respond to drugs." Making that map won't be easy. Not only does the epigenome change over time, it also differs in every major cell type, of which there are a couple hundred. Epigeneticists say this will be time-consuming but possible. In Europe, a consortium of public and private institutions is collaborating on the Human Epigenome Project, while mapping in the United States is scattered among a handful of companies and government-funded scientists. "We don't have the funding to do a comprehensive, large-scale epigenetics project," said Elise Feingold, a director of the National Human Genome Research Institute's ENCODE Project. The lack of investment is somewhat reminiscent of the Human Genome Project's early struggles, when James Watson fought for government money. But at least the epigenomic mapping effort seems to have learned something from the gene-patenting frenzy that loomed over the Human Genome Project. "That was a lesson in how intellectual property should not be handled," said John Stamatoyannopoulos, founder of biopharmaceutical company Regulome. 
"Everybody patented everything left and right, the lawyers got rich, the patent office was flooded, and at the end of the day the patents just weren't valuable." The absence of patent sniping might diminish some of the urgency, but the upside is that the epigenomic map is free and available to anyone -- although only a tiny fraction has thus far been made. "We are well under 1 percent finished; 1 percent would be a massive overstatement," Stamatoyannopoulos said. "But, ultimately, this type of knowledge will revolutionize the way we diagnose and treat disease." From checker at panix.com Sat Sep 3 01:32:09 2005 From: checker at panix.com (Premise Checker) Date: Fri, 2 Sep 2005 21:32:09 -0400 (EDT) Subject: [Paleopsych] Slate: Amanda Schaffer: Cave Thinkers: How evolutionary psychology gets evolution wrong. Message-ID: Amanda Schaffer: Cave Thinkers: How evolutionary psychology gets evolution wrong. http://slate.msn.com/id/2124503/ Posted Tuesday, Aug. 16, 2005, at 4:16 AM PT This spring, New York Times columnist John Tierney asserted that men must be [24]innately more competitive than women since they monopolize the trophies in--hold onto your vowels--world Scrabble competitions. To bolster his case, Tierney turned to [25]evolutionary psychology. In the distant past, he argued, a no-holds-barred desire to win would have been an adaptive advantage for many men, allowing them to get more girls, have more kids, and pass on their competitive genes to today's word-memorizing, vowel-hoarding Scrabble champs. Tierney's peculiar, pseudo-scientific claim--[26]not the first from him--reflects the extent to which evolutionary psychology has metastasized throughout public discourse. EP-ers' basic claim is that human behavior stems from psychological mechanisms that are the products of natural selection during the Stone Age. Researchers often focus on how evolution produced mental differences between men and women. One of EP's academic stars, David Buss, argues in his salacious new book [27]The Murderer Next Door that men are wired to kill unfaithful wives because this response would have benefited their distant forefathers. Larry Summers took [28]some cover from EP this winter after his remarks about women's lesser capacity to become top scientists. And adaptive explanations of old sexist hobbyhorses--men like young women with perky breasts and can't stop themselves from philandering because these urges aided ancestral reproduction--are commonly marshaled in defense of ever-more-ridiculous [29]playboys. Evolutionary psychologists have long taken heat from critics for overplaying innate characteristics--nature at the expense of nurture--and for reinforcing gender stereotypes. But they've dismissed many detractors, fairly or no, as softheaded feminists and sociologists who refuse to acknowledge the true power of natural selection. Increasingly, however, attacks on EP come from academics well-versed in the hard-nosed details of evolutionary biology. A case in point is the new book [30]Adapting Minds by philosopher David Buller, which was supported by a research grant from the National Science Foundation and published by MIT Press and has been getting glowing reviews [31]like this one (paid link) from biologists. Buller persuasively argues that while evolutionary forces likely did play a role in shaping our minds, the assumptions and methods that have dominated EP are weak. 
Much of the work of pioneers like [32]Buss, [33]Steven Pinker, [34]John Tooby, [35]Leda Cosmides, [36]Martin Daly, and [37]Margo Wilson turns out to be vulnerable on evolutionary grounds. EP claims that our minds contain hundreds or thousands of "mental organs" or "modules," which come with innate information on how to solve particular problems--how to interpret nuanced facial expressions, how to tell when someone's lying or cheating. These problem-solving modules evolved between 1.8 million and 10,000 years ago, during the Pleistocene epoch. And there the selection story ends. There has not been enough time in the intervening millennia, EP-ers say, for natural selection to have further resculpted our psyches. "Our modern skulls house a Stone Age mind," as Cosmides and Tooby's [40]primer on evolutionary psychology puts it. The way forward for research is to generate hypotheses about the urges that would have been helpful to Stone Age baby-making and then try to test whether these tendencies are widespread today. What's wrong with this approach? To begin with, we know very little about the specific adaptive problems faced by our distant forebears. As Buller points out, "We don't even know the number of species in the genus Homo"--our direct ancestors--"let alone details about the lifestyles led by those species." This makes it hard to generate good hypotheses. Some EP-ers have suggested looking to modern-day hunter-gatherers as proxies, studying them for clues about our ancestors. But this doesn't get them far. For instance, in some contemporary African groups, men gather the bulk of the food; in other groups, women do. Which groups are representative of our ancestors? Surely there's a whole lot of guesswork involved when evolutionary psychologists hypothesize about the human brain's supposedly formative years. In addition, we are probably not psychological fossils. New research suggests that evolutionary change can occur [41]much faster than was previously believed. Natural selection is thought to effect rapid change especially when a species' environment is in flux--precisely the situation in the last 10,000 years as humans learned to farm, domesticate animals, and live in larger communal groups. Crucially, Buller notes, in order for significant change to have occurred in the human mind in the last 10 millennia, evolution need not have built complex brain structures from scratch but simply modified existing ones. Finally, the central, underlying assumption of EP--that humans have hundreds or thousands of mental problem-solving organs produced by natural selection--is questionable. Many cognitive scientists believe that such modules exist for processing sensory information and for acquiring language. It does not follow, however, that there are a plethora of other ones specifically designed for tasks like detecting cheaters. In fact, considering how much dramatic change our forebears faced, it makes more sense that their problem-solving faculties would have evolved to be flexible in response to their immediate surroundings. (A well-argued [42]book from philosopher Kim Sterelny fleshes out this claim.) Indeed, our mental flexibility, or [43]cortical plasticity, may be evolution's greatest gift. So, if evolutionary psychology has so many cracks in its foundations, why is it so stubbornly influential? It helps that EP-ers like Buss and Pinker are lively, media-friendly writers who present topics like sex, love, and fear in simple terms.
More to the point for scientists, EP's conclusions can be quite difficult to falsify. Even if its methods of generating hypotheses are suspect, there is always the possibility that on any given topic, an EP-er will turn out to be partly right. That forces critics to delve into the details of particular empirical claims. Buller does this in the latter part of his book and successfully dismantles several major EP findings. For instance, EP-ers have asserted that stepparents are more likely to abuse their stepchildren than their own sons and daughters because in the Stone Age, the parents who selectively devoted love and resources to their own progeny would have had a leg up in passing on their own genes. The proof is data that purport to show a higher rate of modern-day abuse by stepparents than by parents. When Buller dissects the data, however, this conclusion begins to fall apart. To begin with, most of the relevant studies on abuse do not say whether the abuser was a parent or stepparent. The EP assumption that the abuser is always the stepparent creates an artificial and entirely absurd confirmation of the field's hypothesis. In addition, research has shown that when a stepfather is present, a child's bruises are more likely to be attributed to abuse than to accidents, whereas when a biological father is present, the opposite tendency exists. Buller has to wade in deep to unravel this, but the effort pays off. Ultimately, the biggest problem with EP may be that it underestimates the power of evolutionary forces--both to tinker continually with the human brain, and to have created ingenious and flexible problem-solving structures in the first place. There's a nice irony here, since for years EP-ers have ridiculed opponents for not appreciating evolutionary theory's core tenets. Buller goes so far as to note an eerie resemblance between EP and [44]intelligent design, which also treats human nature as fixed and complete. The more persuasive claim is that there is no single human nature, and that we're works in progress. Related in Slate _________________________________________________________________ Jacob Weisberg [45]argued that scientists should acknowledge that evolution and religion aren't compatible. Bob Wright [46]explained why Stephen Jay Gould can't be trusted. Judith Shulevitz [47]laid out evolutionary psychology's take on why men rape. Amanda Schaffer is a frequent contributor to Slate. posted Aug. 16, 2005 References 24. http://query.nytimes.com/gst/abstract.html?res=F30F1EFC345D0C728FDDAC0894DD404482&n=Top%252fOpinion%252fEditorials%2520and%2520Op%252dEd%252fOp%252dEd%252fColumnists%252fJohn%2520Tierney 25. http://www.anth.ucsb.edu/projects/human/evpsychfaq.html 26. http://query.nytimes.com/gst/abstract.html?res=FA0716FE355F0C718EDDA00894D8404482&incamp=archive:sear 27. http://us.penguingroup.com/nf/Book/BookDisplay/0,,0_1594200432,00.html 28. http://pinker.wjh.harvard.edu/articles/media/2005_02_14_newrepublic.html 29. http://news.independent.co.uk/uk/health_medical/article294039.ece 30. http://www.amazon.com/exec/obidos/tg/sim-explorer/explore-items/-/0262025795/0/101/1/none/purchase/ref=pd_sxp_r0/002-0228377-9825645 31. http://www.sciencemag.org/cgi/content/full/309/5735/706 32. http://homepage.psy.utexas.edu/homepage/Group/BussLAB/AboutDavid.htm 33. http://pinker.wjh.harvard.edu/ 34. http://www.anth.ucsb.edu/faculty/tooby/ 35. http://www.psych.ucsb.edu/people/faculty/cosmides/index.php 36. http://www.science.mcmaster.ca/Psychology/md.html 37.
http://www.science.mcmaster.ca/Psychology/margo.html 38. http://slate.msn.com/id/2124503/#ContinueArticle 39. http://ad.doubleclick.net/jump/slate.arts/slate;kw=slate;sz=300x250;ord=4985? 40. http://www.psych.ucsb.edu/research/cep/primer.html 41. http://www.newscientist.com/article.ns?id=mg18725071.100 42. http://www.amazon.com/exec/obidos/ASIN/0631188878/ref=pd_sxp_f/002-0228377-9825645 43. http://slate.msn.com/id/2124503/sidebar/2124504/ 44. http://slate.msn.com/id/2118388/ 45. http://slate.msn.com/id/2124297/ 46. http://slate.msn.com/id/2016/ 47. http://slate.msn.com/id/1004368/ From checker at panix.com Sat Sep 3 01:33:40 2005 From: checker at panix.com (Premise Checker) Date: Fri, 2 Sep 2005 21:33:40 -0400 (EDT) Subject: [Paleopsych] CHE: Making a Living on Choking Under Pressure Message-ID: Making a Living on Choking Under Pressure The Chronicle of Higher Education, 5.9.2 http://chronicle.com/weekly/v52/i02/02a01003.htm By JOHN GRAVOIS For any ambitious young scholar just hitting the job market, choking under pressure is a real occupational hazard. Consider Sian L. Beilock: As she was leaving Michigan State University two years ago with two Ph.D.'s, in kinesiology and cognitive psychology, she got 12 invitations for job interviews right off the bat. That might sound like a giddy prospect, but it contained a hint of menace: What if her academic stock, painstakingly built up during years of research, suddenly plummeted in the glare of a few three-hour interviews? As it turned out, the psychological phenomenon that drives people to underperform in pressure situations served Ms. Beilock astonishingly well in the hiring process. She accepted seven of the job-interview invitations, and was subsequently rewarded with six job offers -- including plum appointments at Carnegie Mellon University and the Georgia Institute of Technology. That's because, for Ms. Beilock, 29, choking under pressure isn't just a nerve-racking fact of life -- it's a career-making research interest. From basketball stars on the free-throw line to golfers on the putting green to high-school students twiddling their No. 2 pencils in agony before the SAT, everyone shares a vulnerability to lousy performance when the stakes are high. Beginning with her research for her master's thesis in 1998, Ms. Beilock has quickly established herself as the go-to psychologist for this universal quirk. (Run a Google search for the term "choking under pressure," and the first hit leads to one of her papers.) "Basically, I study skilled performance," she says, "and I'm really interested in how skilled performance fails in a variety of situations." Over the past couple of years, it has become clear that the academic and grant-making worlds are interested too. With all those job offers on the table, Ms. Beilock did something unpredictable. She accepted an assistant professorship in the psychology department at Miami University of Ohio, where her husband, Allen R. McConnell, is a professor of social psychology. But now, after spending the better part of two years at Miami, she has accepted a job in the University of Chicago's psychology department, and has been busy setting up residence there over the summer. That's not all that's been keeping her occupied this season. Ms. Beilock spent much of August in Australia, attending a meeting of the International Society of Sport Psychology and accepting its Developing Scholar Award, a prize that is given out only once every four years. 
She has also had two research grant proposals accepted in recent months. One, which she will share with her husband, is a $199,000, three-year grant from the National Science Foundation to study "stereotype threat," a phenomenon that causes a subject's awareness of his own social identity, and its various connotations, to affect his performance of a skill. The other is a $428,000, three-year grant from the Department of Education to study how different high-pressure standardized-testing environments affect students' scores. No Pressure Ms. Beilock's research aims to settle one of the age-old questions generated by flubbed free throws and math tests: Do we choke because we over-think our performance, or do we choke because we are too distracted to focus on it properly? The answer: both, depending on the situation. When a professional basketball player steps up to the free-throw line, he probably doesn't shoot by thinking about a detailed series of steps. Instead, the skill is implanted so deeply in his memory that his motor system, in a sense, automatically remembers how to shoot for him. Up the stakes, however, and he might start paying too much attention to his actions. According to Ms. Beilock's findings, when someone under pressure starts analyzing all the component steps of a skill he normally executes like clockwork, he may perform as poorly as a novice. By contrast, the skills involved in cracking equations on a math test do require a lot of what psychologists call "working memory" -- the kind of step-by-step processing necessary to navigate a complex problem. Here's the surprising part: When taking a test, someone with lots of "working memory" is actually more prone to be handicapped by high stakes than someone with fewer cognitive resources. That's because the more expert thinker relies on very "cognitively intense" strategies, which can get sidetracked by distraction. Less intense thinkers might use mental shortcuts and rules of thumb to get by -- strategies that do not appear to be affected as much by pressure. That somewhat counterintuitive finding has tremendous bearing on the whole enterprise of standardized testing, the reigning aptitude-sorting mechanism in American education. "What pressure situations may do," Ms. Beilock says, "is serve to diminish the predictive differences we're looking for." The Golf Factor To study skilled performance and its hiccups, Ms. Beilock has used a number of experimental models, including math tests and dribbling soccer balls. But the one truly indispensable element in all of Ms. Beilock's labs over the years has been golf putting. "It's a very nice skill for a lot of reasons," she says. "One, you can put a putting green in a lab." (Ms. Beilock's own strongest sport -- lacrosse -- proved logistically ill-matched to the lab setting.) But that's not all putting has going for it. "It's got great motor components," Ms. Beilock says, "but it's also got this cognitive component as well." Though Ms. Beilock is not much of a golfer herself, her favored means of generating data has given her a level of exposure in the world of golf journalism that is probably rare among cognitive scientists. She has had no less than five articles about her appear in golf publications. Golf fanatics are not the only people outside of academe who have picked up on Ms. Beilock's work. Because her research illuminates such a universal source of chagrin, it has garnered her coverage in The New York Times and interviews on National Public Radio. 
Navigating the waters between academe and the popular press is a famously perilous enterprise for young scholars. However, Ms. Beilock's colleagues and mentors say she possesses more than enough seriousness to back up her public appearances. "She has all the tools to do great science on hard topics," says Thomas H. Carr, Ms. Beilock's former dissertation adviser at Michigan State and now a frequent collaborator. What's more, he says, she combines those tools with "an intense ambition to make a difference in our understanding of the mind and to play that understanding out in real-world activities." Of all Ms. Beilock's research findings, one of her less-heralded discoveries is that it is surprisingly easy to design a pressure situation sufficient to make people susceptible to choking. "We'll offer people five bucks," she says, "and that's enough to make them feel like they're going to screw up." With much more than that on the line, Ms. Beilock has advanced her very brief career. But that's not to say she's always happy with her performance in the various high-pressure interview sessions, exams, and reviews that have marked her ascent. "I've always just felt that all of my abilities weren't really reflected in those short snapshots," she says. "I always think I could do better than I've done." From checker at panix.com Sat Sep 3 01:34:14 2005 From: checker at panix.com (Premise Checker) Date: Fri, 2 Sep 2005 21:34:14 -0400 (EDT) Subject: [Paleopsych] Re: Runners World: Should You Run Naked? Message-ID: ---------- Forwarded message ---------- Date: Fri, 2 Sep 2005 11:52:51 +0200 From: Amara Graps Reply-To: World Transhumanist Association Discussion List To: wta-talk at transhumanism.org Subject: [wta-talk] Runners World: Should You Run Naked? Beyond sports - There are many, many reasons why I think peeling off the clothes is a good idea. Here I repost something I sent to the extropians three years ago. Amara Date: Sat, 19 Jan 2002 03:42:48 +0100 To: extropians at extropy.org From: Amara Graps Subject: Beingness Sender: owner-extropians at extropy.org Reply-To: extropians at extropy.org Spike wrote: > Regarding public nudity, there is a prominent nude beach near where > I grew up, just north of Kennedy Space Center, Playalinda beach. [...] > on the most perfect beach weather days. But the last thing one > will see after starting that hike and the first thing one will see at > the other end are countless naked people. That would be fine, > except for the fact that the kinds of people who generally go nude > on the beach are exactly those who you would really prefer not to > see naked. Ever. Not even in one's worst nightmare. Now now Spike. Naturist beaches and resorts are freedom, in an ultimate sense. What better way to see the marvelous variety of shapes and sizes in which the human body manifests itself? Social roles, economic classes, sex roles reduced or removed, and we can be who we are, simply. "If it were perfectly natural to go nude, we'd all be born that way." General Naturist/Nudist Information http://www.mbay.net/~cgd/naturism/nlink01.htm Being and Nakedness http://www.naturistplace.com/ REC.NUDE FAQ: Naturist Site Reports: http://www.faqs.org/faqs/nude-faq/beaches/ The following pieces are from: Humorous Introduction to Naturism http://www.netnude.com/main/intro.html#intro {begin quote} Nobody knows for certain exactly how many naturists there are in the world, but the numbers of those enjoying a clothes optional lifestyle appear to be increasing.
Unfortunately, naturism still carries a stigma, born largely of ignorance of the truth. To some, naturists are well-meaning but slightly dotty individuals, who meander naked through wooded glades, pausing in catalogue poses behind strategically placed leaves. To others, they are immoral hedonists, congregating in mixed groups to enjoy pleasures of the flesh in orgy situations not seen since Caesar hung up his laurels. Or they are perverts trying to corrupt the 'normal' way of life. As with the majority of prejudices based on lies, misunderstandings and half-truths, the reality of life for the average naturist is very different indeed. I COULD NEVER BE A NUDIST ANYWAY - JUST LOOK AT MY BODY! That's the whole point though. Naturism isn't about looking at bodies - naturists are not exhibitionists. It's just about enjoying the freedom that a clothes optional atmosphere brings. Naturism is about accepting the human body for what it is - nothing to be ashamed of. So the men don't need to hit the gym for six months, buffing their muscles to within an inch of their lives in order to gain entry. And the women don't have to look like Baywatch babes. The media is largely responsible for promoting this idea of body perfection, but the truth is that the vast majority of people do not now, nor are they likely to ever, resemble this false ideal. So for naturists there is no such thing as too fat, too thin, too short, too tall, too hairy. Nobody's going to comment on the size or shape of your breasts or critically evaluate your genitalia. And if you have any surgical scars or other distinguishing marks you needn't worry - ignore them just like everyone else will. For most people, their initial discomfort disappears very quickly, once they realize they are not being judged on their appearance. BUT WHAT DO PEOPLE GET OUT OF IT? IF IT'S NOT ABOUT SEX, JUST WHAT IS IT ABOUT? It's about relaxation, freedom from restriction - and to a very large degree, it's about honesty. Naturists are judged on their personalities alone. They take away the trappings that most of us have around us every day. They have less to 'hide behind'. This is very healthy, because it means that friendships are built on truth - as people get used to being open with each other, there is less temptation to embellish![...] Being nude can also be incredibly relaxing. The feeling of air, sun and water on the skin is a terrific stress reliever. [...] IT CAN'T BE HEALTHY FOR CHILDREN THOUGH, SURELY. On the contrary, children who grow up in a naturist environment usually have far fewer hang-ups than other kids. Once again, they are not being subjected to premature sexual situations - they grow up around other children and adults, understanding that the body is not something to be hidden and ashamed of. They know the anatomy of the human body, and it is less of a 'taboo' to be explored at the earliest opportunity. There are fewer incidences of teenage pregnancy, sexually transmitted diseases and criminal behaviour amongst nudist children than amongst other children. WHERE WOULD I PUT MY SUNGLASSES!? As for the sunglasses, well friends have found that nipple rings are the perfect holders for their Ray Bans. If you don't fancy body piercing, though, a small bag slung around your neck or carried with you is the perfect repository for your small change and other necessities.
{end quote} From checker at panix.com Sat Sep 3 01:34:52 2005 From: checker at panix.com (Premise Checker) Date: Fri, 2 Sep 2005 21:34:52 -0400 (EDT) Subject: [Paleopsych] Runners World: Should You Run Naked? Message-ID: This is the original. Should You Run Naked? http://www.runnersworld.com/article/printer_friendly/0,5046,s6-187-0-0-6844,00.html Nothing came between ancient Olympians and their performance. Were they onto something? by: Amby Burfoot If you ask me, the ancient Olympians were a lot smarter than we are. They had the good sense to run, jump, and throw in the nude. When you put anything between your skin and the environment--like shorts and a singlet, for example--you only decrease your body's cooling efficiency (even if you're more...comfortable in certain areas). The so-called "modern" Olympians of 1896 were smarter than us, too. They did their running, jumping, and throwing in April. Some athletes complained about the chilly, damp weather, but Spiridon Louis gave thanks to Zeus all the way to his (clothed) marathon victory in 2:58:50. Unfortunately, Olympic Marathons have been getting hotter ever since. The 1900 Olympic Marathon started at 2:36 p.m. under a 95-degree Parisian sun. Twelve years later, in Stockholm, a Portuguese runner died in the sweltering Olympic Marathon. Many of us remember Gabriele Andersen Schiess staggering across the finish line in the 1984 Women's Olympic Marathon in Los Angeles. In Athens this month, both the men's and women's marathons will start at 6 p.m., when average temperatures are in the mid-80s, though the city has a record August high of 109. And the marathoners will be running on black asphalt that has been simmering for 12 hours. "It's a terrible disservice that the marathoners will be forced to compete in conditions where they can't perform their best, and could actually hurt themselves," says Dr. William Roberts, medical director of the Twin Cities Marathon and president of the American College of Sports Medicine. To help athletes deal with the Athens weather, the U.S. Olympic Committee has been holding educational meetings since last September, when it organized a conference called "Heat, Humidity and Air Pollution: What to Expect in Athens 2004." In May, the top U.S. marathoners gathered in Colorado Springs for the latest update. "We believe the heat actually opens the window of possibilities for our marathoners," says U.S. men's Olympic distance coach Bob Larsen. "We'll leave no stone unturned in our search for scientific approaches to running in the heat." The lessons learned by the marathon team will also work for you. Here are some of the highlights. Heat Acclimation Many years of heat acclimation research have convinced most experts that you can do a good job of adjusting to the heat in eight days, a better job in 14, and perhaps better still in 21. The last physiological variable to adapt is your sweat rate, which takes eight to 14 days to reach maximum efficiency. Other, faster responders include increased plasma volume, decreased sodium concentration in the blood, decreased heart rate while running, decreased perceived exertion, and increased running economy. U.S. track athletes will be given the chance to attend a pre-Olympic training camp on Crete about two weeks before they move to Athens. 
The runners will follow a heat-training protocol outlined by Randy Wilber, Ph.D., of the USOC sports sciences department, who suggests the following: First run in the morning or evening cool; then move to warmer times of the day; finally, increase the length and intensity of your midday workouts. Perhaps no runner has thought more about heat training and racing than Alberto Salazar. Before the 1984 Olympic Marathon he traveled to the U.S. Army Labs in Natick, Massachusetts, to get tested in a heat chamber (where sweat production is measured) and learned to chug two quarts of fluid before every workout. But then he crashed. He now believes he did too many hard 20-milers in the heat. "I was exhausted from the first step of the marathon," he says. He finished 15th in 2:14:19. Today Salazar is coaching Dan Browne, one of the 2004 U.S. marathon qualifiers. He plans to have Browne do occasional workouts in a Nike heat chamber and to cut back on the intensity of his speedwork. "No one's going to run 2:06 in Athens, so we don't have to worry about training for that pace," Salazar says. Hydration Everyone knows drinking fluids is supposed to help you run faster. But you have to slow down to grab your drinks. America's Steve Spence worked on this dilemma when he was training for the hot, humid World Championships Marathon in Tokyo in 1991. Spence set up a water table on his local track, and then practiced drinking while running intervals at faster-than-marathon pace. "I figured if I got good at taking my drinks at this pace, it would come easy in the marathon," he says. Spence claimed the bronze medal. A couple of months ago, Alan Culpepper, another 2004 marathon qualifier, visited the Gatorade Sports Science Institute in Illinois to get a better idea of his sweat production and hydration needs. When he ran for an hour in a heat chamber cranked up to 85 degrees, he sweat 1.4 liters. He also learned that he is a salty sweater. "I'm much more aware now of my drinking and sodium needs," Culpepper says. "I feel more prepared to handle the heat challenges in Athens." Superhydration Storing extra water would be nice, but runners aren't camels. Still, two simple substances seem capable of promoting superhydration: common salt and glycerol, a liquid supplement. A New Zealand study presented at this year's American College of Sports Medicine meeting showed that well-trained runners who prehydrated with a heavily salted drink were able to exercise 20 percent longer in 90-degree weather than when they prehydrated with a minimally salty beverage. Not all glycerol studies have shown an improvement in hydration status or endurance performance, but a two-year-old study with Olympic distance triathletes produced convincing results. In a randomized, double blind, crossover study in 87-degree conditions, the triathletes slowed down much less with glycerol than without it. "Glycerol lets you increase the amount of standing water on board," says U.S. marathon guru David Martin, Ph.D. "It's nice to have that extra amount during a long, hot race." Spence readily admits he used glycerol in Tokyo, Keith Brantly says he used it in his best marathons, and Salazar says Browne will probably test glycerol to see how it works for him. Cool Vests In January, a team from the University of Georgia studied college distance runners covering 5-K in a 90-degree heat chamber with and without ice vests to cool their core before their efforts. 
The "precooled" runners finished 13 seconds faster, which is more than the gap that will separate many gold-medalists and fourth-place finishers in Athens. Recently, the folks at Nike Sport Research have been working to design an improved cooling vest that places more of the body's surfaces closer to larger volumes of ice. Only field hockey players have tested it (successfully, Nike says), but Lance Armstrong and Paula Radcliffe were both trying the vest in early summer. Cool Clothing You already know that a white shirt will absorb less heat than a black one. And for the past decade you've read about the amazing advances of breathable microfibers. But wait, those shirts are designed to keep you warm and dry in the winter. Do you really want that on a hot summer's day? Nope. So four years ago Nike produced a shirt that several U.S. runners wore in the Sydney Olympics. This white shirt sat off the skin on small bumps (allowing air to circulate), was constructed of a large fishnet weave (more air circulation), didn't absorb sweat (leaving it on the skin to cool you via evaporation), and was made of recycled plastic bottles. Home run! Too bad Nike called the shirt the Stand-Off Distance Singlet (because of the way it stood off your skin), which sounded too much like a shirt with a body-odor problem. This year Nike has produced something called the Nike Sphere Cool Marathon Singlet, with aerodynamic seam placement, mesh construction, and patent-pending "Zoned Venting" technology. But give me a Stand-Off Distance Singlet, and I'll show you a really great hot-weather running shirt. Here's my advice to the U.S. marathoners: Bring your scissors to Athens and cut your racing singlet as short as you can. Research by exercise physiologist Timothy Gavins, Ph.D., has shown that "the chimney effect" can improve body cooling. This refers to air moving up the bottom of your untucked shirt and out the top. Or just run naked. You'll be reconnecting with your Olympic forebears, increasing your chances of a medal, and giving a big boost to NBC's Olympic TV ratings. From checker at panix.com Sun Sep 4 00:43:37 2005 From: checker at panix.com (Premise Checker) Date: Sat, 3 Sep 2005 20:43:37 -0400 (EDT) Subject: [Paleopsych] NYT: In the Long Run, Sleep at Home and Invest in the Stock Market Message-ID: In the Long Run, Sleep at Home and Invest in the Stock Market http://www.nytimes.com/2005/08/19/realestate/19real.html By MOTOKO RICH and [3]DAVID LEONHARDT The housing boom of the last five years has made many homeowners feel like very, very smart investors. As the value of real estate has skyrocketed, owners have become enamored of the wealth their homes are creating, with many concluding that real estate is now a safer and better investment than stocks. It turns out, though, that the last five years - when homes in some hot markets like Manhattan and Las Vegas have outperformed stocks - has been a highly unusual period. In fact, by a wide margin over time, stock prices have risen more quickly than home values, even on the East and West Coasts, where home values have appreciated most. When Marti and Ray Jacobs sold the five-bedroom colonial house in Harrington Park, N.J., where they had lived since 1970, they made what looked like a typically impressive profit. They had paid $110,000 to have the house built and sold it in July for $900,000. But the truth is that much of the gain came from simple price inflation, the same force that has made a gallon of milk more expensive today than it was three decades ago. 
The Jacobses also invested tens of thousands of dollars in a new master bathroom, with marble floors, a Jacuzzi bathtub and vanity cabinets. Add it all up, and they ended up making an inflation-adjusted profit of less than 10 percent over the 35 years. That return does not come close to the gains of the stock market over the same period. The Standard & Poor's 500-stock index has increased almost 200 percent since 1970, even after accounting for inflation. Yet investment advisers worry that this reality is getting lost in today's enthusiasm for houses, even as some economists say the housing market has peaked. People are buying homes purely on speculation, trading real estate almost as if it were a stock. Surveys show that a large majority of Americans consider real estate to be a safer investment than stocks. "With how strong the real estate market has performed, there is the urge for people to chase returns," said Jeff A. Weiand, executive vice president of RTD Financial Advisors in Philadelphia. "But it's very difficult to beat the long-term historical record of stocks." Since 1980, for example, money invested in the Standard & Poor's 500 has delivered a return of 10 percent a year on average. Including dividends, the return on the S.& P. 500 rises to 12 percent a year. Even in New York and San Francisco, homes have risen in value only about 7 percent a year over the same span. That does not mean real estate is a bad investment. It is often an important source of wealth for families. But its main benefit is what it has always been: you can live in the house you own. "The biggest value of the house is the shelter it provides," said Thomas Z. Lys, an accounting professor at the Kellogg School of Management at Northwestern University. "Too many people are fixated on speculation whereas most of the benefit really comes from usage." Despite the fact that home values usually appreciate over time, most of the value of a house actually comes from the ability to use it. In this way, it is more like a car, albeit one that does not become less valuable over time, than it is like a stock. Whenever people sell one house, they must immediately pay to live elsewhere, meaning that they can never wholly cash out of a home's value. Including the value of living in a house - that is, the rent that a family would have to pay to live in an equivalent house elsewhere - homes in New York have returned more than 15 percent a year since 1980, according to an analysis by Mr. Lys. But only five percentage points of this return comes from sheer price appreciation, as opposed to the value of shelter. Mr. Lys accounted for property taxes, spending on renovations, interest payments and the tax deductions on those payments, and the fact that most house purchases are made with mortgages. When the sale of a house brings in a cash windfall, homeowners tend to focus on the fact that they made a down payment that was just a fraction of their house's value, lifting their return. But many forget just how much money they spent on property taxes, a new roof and the mortgage interest. Add to all these factors the corrosive effect of inflation, and the returns are even lower. The Jacobses - she is an administrator for a magazine and he a lawyer - were quite pleased with the sale of their house in New Jersey. To them, it was a place to raise their two children more than it was an investment. When the couple spent about $100,000 to redo their master bathroom, install a walk-in closet and build a deck in the mid-1980's, Mr. 
Jacobs recalled saying to his wife, "We'll never get the money out that we put into this, but at least we'll enjoy it for 15 years or so." Told of the comparative returns on his house and the stock market, Mr. Jacobs said, "Of course I couldn't live in the portfolio." Today, however, he has come to see the advantages of the stock market. The Jacobses now rent an apartment on the Upper West Side for more than $4,000 a month and have invested the proceeds from the house sale in the stock market, making it easier for them to raise cash by selling shares. "I didn't want to take the money that we pulled out of the house and have all that money tied up in an apartment where I still have expenses of maintenance fees," Mr. Jacobs said. But many people seem to be going in the opposite direction from the Jacobses. Eighty percent of Americans deemed real estate a safer investment than stocks in an NBC News/Wall Street Journal poll done this spring, while only 13 percent said stocks were safer. Part of that sentiment is driven by the recent memory of the stock market collapse in 2000. Many homeowners seem to have forgotten that less than 15 years ago house prices in the Northeast and California fell, but the money they lost on technology stocks is still fresh in their minds. Stocks are also more volatile, and their price changes can be viewed every day. "The news doesn't report to you daily that your house price might have gone up or down," Mr. Lys said. "So you think your house price is more stable than it really is because you don't observe these minute-by-minute gyrations." Economists caution that any comparison between real estate and stocks is tricky, because real estate is typically a leveraged investment, in which a home buyer makes a down payment equal to only a fraction of a house's value and borrows to finance the rest. While it is possible to borrow money from a brokerage firm to buy stocks, most individual investors simply buy the shares outright. When home prices are rising, the leverage from a mortgage lifts real estate returns in the short term. Someone putting down $100,000 to buy a $500,000 home can feel as if the investment doubled when told that the house is now worth $600,000. But the power of leverage vanishes as homeowners pay off the mortgage, as the Jacobses have. Leverage also creates more short-term risk, especially for those who have stretched to afford their house. "If the home went down by 30 percent, you'd probably be sitting with a bankruptcy attorney," said Jonathan Golub, United States equity strategist at J. P. Morgan Asset Management in New York. "If your I.B.M. stock goes down by 30 percent, no big deal. So you had $100,000, now you have $70,000. You don't declare bankruptcy; you just don't go out to the movies as much, or you retire a year later." But such risks are hard to imagine when many markets are still enjoying double-digit percentage increases every year. The number of people buying houses they do not plan to live in has surged. There are also Internet exchanges where investors can trade yet-to-be-built condominiums or futures contracts tied to average home prices in big metropolitan areas. But economists and investment advisers say that most of the value from real estate comes not from anything that can be captured by flipping it, but from the safety net it provides in bad times. Even if the market shifts downward, "you have a roof over your head," said Jonathan Miller, a real estate appraiser in Manhattan. 
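A minimal Python sketch of the leverage arithmetic described a few paragraphs above, using the article's round numbers and ignoring mortgage interest, taxes, and transaction costs -- which is exactly why the "doubled" feeling overstates the true return:

    # Leverage makes a 20 percent price gain feel like a 100 percent gain.
    # Figures are the article's round numbers; carrying costs are ignored.
    down_payment = 100_000
    purchase_price = 500_000
    later_value = 600_000

    price_gain = (later_value - purchase_price) / purchase_price   # 20% on the house itself
    equity_gain = (later_value - purchase_price) / down_payment    # 100% on the cash put in

    print(f"unlevered price gain: {price_gain:.0%}")
    print(f"apparent gain on the down payment: {equity_gain:.0%}")
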
Beyond the shelter it provides, the biggest advantage of real estate might be that it protects people from their worst investment instincts. Most people do not sell their house out of frustration after a few months of declining values, as they might with a stock. Instead, they are almost forced to be long-run investors who do not try to time the market. Harlan Larson, a retired manager of car dealerships near Minneapolis, still regrets having bought Northwest Airlines stock at $25 a share a few years ago. It is now trading at less than $5. By comparison, he views the four-bedroom home he bought for $32,500 in 1965 - or about $200,000 in today's dollars - as a money tree. He and his wife recently listed it for $413,000. That would translate into an annual return of 1.2 percent, taking into account inflation and the cost of two new decks and an extra room. They plan to move to Texas after it has sold. "I wish I'd bought more real estate," Mr. Larson said. From checker at panix.com Sun Sep 4 00:43:44 2005 From: checker at panix.com (Premise Checker) Date: Sat, 3 Sep 2005 20:43:44 -0400 (EDT) Subject: [Paleopsych] The American Prospect: The Right Fight Message-ID: The Right Fight http://www.prospect.org/web/printfriendly-view.ww?id=10140 It took the Bush administration to bring a truce between the postmodern left and the scientific community. By [2]Chris Mooney Web Exclusive: 08.15.05 Circa 1996, many of the nation's intellectuals could be found chattering about the famous "Sokal hoax." Remember that? It all began when New York University physicist Alan Sokal submitted an [5]article to the left-wing academic journal Social Text that basically amounted to gibberish. It essentially argued that physical reality does not exist: It has thus become increasingly apparent that physical "reality," no less than social "reality," is at bottom a social and linguistic construct; that scientific "knowledge," far from being objective, reflects and encodes the dominant ideologies and power relations of the culture that produced it; that the truth claims of science are inherently theory-laden and self-referential; and consequently, that the discourse of the scientific community, for all its undeniable value, cannot assert a privileged epistemological status with respect to counter-hegemonic narratives emanating from dissident or marginalized communities. The article had a giveaway title: "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity." Coming from a physicist, this should have raised serious red flags. Nevertheless, Social Text was stupid enough to publish the thing, and then Sokal [6]exposed the hoax in Lingua Franca magazine. On the one hand, this was a pretty mean trick to pull on poor Social Text. On the other, editors unable to distinguish real physics from spoof physics probably shouldn't be publishing articles arguing against physical reality. At any rate, Sokal claimed his objectives were thoroughly constructive. He wanted, he said, to shake the academic left out of its postmodern torpor and force its leading intellectuals to recognize that jargony articles and a general tone of relativism and subjectivism weren't helping anybody -- certainly not the oppressed people of the world. "For most of the past two centuries," Sokal wrote, "the Left has been identified with science and against obscurantism. Theorizing about 'the social construction of reality' won't help us find an effective treatment for AIDS or devise strategies for preventing global warming.
Nor can we combat false ideas in history, sociology, economics, and politics if we reject the notions of truth and falsity." The Sokal hoax hit liberal academia like a thunderclap and prompted many a gloat from scientists. It went hand in hand with books like [7]Higher Superstition, an all-out attack on the perceived anti-science obscurantism of the academic left. For many pro-science liberals as well as many anti-campus conservatives, the notion slowly took hold that there were a lot of out-of-touch left-wing academics, nestled in secluded universities, who were conducting a campaign against scientific knowledge in obscure journals through excessive quotation of Foucault and Derrida. Even at the time, however, the quest to root out anti-science tendencies in academia seemed a strange deployment of resources. After all, the Gingrich Republicans had just taken over Congress, set out to radically slash science budgets, and preached denial about global warming. If there was a war on science afoot, university professors probably weren't the leading culprits. Certainly they weren't the most powerful ones. Indeed, despite some undeniable academic excesses, the "science wars" were always somewhat overblown. The sociological, historical, philosophical, and cultural study of science is a very worthwhile endeavor. If scholars engaged in such research sometimes take a stance of agnosticism toward the truth claims of science, perhaps that's simply their way of remaining detached from the subject they're studying. But it doesn't necessarily follow that these scholars are absolute relativists, to the extent of thinking that concepts like gravity are a mere matter of opinion. Social Text founding Editor Stanley Aronowitz has himself written that "[t]he critical theories of science do not refute the results of scientific discoveries since, say, the Copernican revolution or since Galileo's development of the telescope." When it comes to the field of science studies, meanwhile, much scholarly work in the area lends itself not to left-wing attacks on science but rather to defenses of science from forms of abuse prevalent on the political right. To cite just one example, leading science-studies scholar Sheila Jasanoff's 1991 book, The Fifth Branch: Science Advisers as Policymakers, presents a potent critique of demands for unreasonable levels of scientific certainty before political decisions can be made, especially when it comes to protecting public health and the environment. So perhaps it's no surprise that the science wars of the 1990s have almost entirely subsided, and, as the scientific community has increasingly become embroiled with the Bush administration across a wide range of issues (from evolution to climate science), a very new zeitgeist has emerged. The summer issue of The American Scholar, a leading read among academic humanists and the literary set, provides a case in point. "Science matters," blazons the cover. Inside, Editor Robert Wilson explains to readers that although "the attack on science has always been our game the enemy of our enemy is most definitely not our friend." The right's attack on science, Wilson continues, "is an attack on reason, and it cannot be ignored, or excused, or allowed to go uncontested." With those words, I think it's safe to say that peace has officially been made in the science wars of the 1990s. And not a moment too soon. The evolution deniers (and other reality deniers) are gathering momentum. 
On matters like this, the university community -- composed of scientists and scholars alike -- really ought to be on the same page. Chris Mooney is the Washington correspondent for [8]Seed Magazine and a columnist for The American Prospect Online. His first book, [9]The Republican War on Science, will be published in September. His daily blog and other writings can be found at [10]www.chriscmooney.com. References 2. http://www.prospect.org/web/page.ww?name=View+Author&section=root&id=174 3. http://www.prospect.org/web/printfriendly-view.ww?id=10140 4. http://www.prospect.org/web/start-email.ww?id=10140 5. http://www.physics.nyu.edu/faculty/sokal/transgress_v2/transgress_v2_singlefile.html 6. http://www.physics.nyu.edu/faculty/sokal/lingua_franca_v4/lingua_franca_v4.html 7. http://www.amazon.com/exec/obidos/tg/detail/-/0801857074/103-4828884-6127823?v=glance 8. http://www.seedmediagroup.com/ 9. http://www.amazon.com/exec/obidos/ASIN/0465046754/chriscmooneyc-20/103-4828884-6127823 10. http://chriscmooney.com/ From checker at panix.com Sun Sep 4 00:43:54 2005 From: checker at panix.com (Premise Checker) Date: Sat, 3 Sep 2005 20:43:54 -0400 (EDT) Subject: [Paleopsych] Bulletin of the Atomic Scientists: Smile for the camera Message-ID: Smile for the camera http://www.thebulletin.org/article.php?art_ofn=ja05brin [1]Bulletin of the Atomic Scientists [2]Doomsday Clock No Place to Hide By Robert O'Harrow Jr. Free Press, 2005 348 pages, $26 By David Brin July/August 2005 pp. 64-67 (vol. 61, no. 04) One can all too easily get caught up in today's atmosphere of desperate worry. Democrats and Republicans who disagree over many things seem to share a perception that civilization is plunging into crisis. Post-9/11 unease goes beyond airport inconvenience, economic disruption, and military conflict, all the way to jeremiads warning against technological innovations. Amid this gloom, I take solace from that most discomforting of symbols, the Doomsday Clock of the Bulletin of the Atomic Scientists, which helped crystallize an earlier generation's end-of-the-world parable. The day still has not yet come when any combination of terror attacks could wreak as much harm as the lethal cargo of a single ballistic missile submarine. So should not our worry level be lower than it was in, say, 1980? But people don't use game theory to weigh their fears, and the new science of threat psychology explains why. The Cold War was run mostly by professionals, but terror attacks seem unguided by logic. It is the unpredictable and irrational threat, above all, that makes us shiver. Does this explain why we hear so many commentators expressing fear of technological change? Take Francis Fukuyama, professor at the Johns Hopkins School of Advanced International Studies, whose The End of History and the Last Man (1992) suggested that the collapse of communism might be the final event worth chronicling before Earth slides, happily ever after, into blithe liberal democracy. Alas, short-lived jubilation swiftly gave way to pessimism in Our Posthuman Future: Consequences of the Biotechnology Revolution (2002), in which Fukuyama condemns a wide range of potentially disruptive biological advances. People cannot be trusted to make wise use of, for example, genetic therapy, he says. Human "improvability" is so perilous a concept that he prescribes joint government-industry panels to control or ban whole avenues of scientific investigation. Fukuyama is hardly alone in fretting over technological innovation gone amok.
Popular authors Margaret Atwood and Michael Crichton have probably never voted for the same party or candidate. Yet their novels share a persistent theme with many other antimodernists, both left and right, who doubt human ability to solve problems or to cope. Pondering the challenges of tomorrow, they say, "Don't go there." No issue has stoked this ecumenical sense of alienation more than the Great Big Privacy Scare. While the information age seems on one level benign--the internet can't blast, kill, mutate, or infect us--social repercussions of new data-handling technologies seem daunting. Pundits, spanning the spectrum from William Safire to Jeffrey Rosen, have proclaimed this to be our ultimate test. I don't disagree. Every day, powerful, interconnected databases fill with information about you and about me, fed by inputs from our every purchase and telephone call. New sensor technologies add cascades of detail, not just from the vast proliferation of cameras (which are getting cheaper, smaller, faster, and more mobile every year), but also from radio frequency identification tags that identify and track objects (as well as the people who happen to be wearing, riding, or chatting into them), along with biometric devices that identify people by their irises, retinas, fingerprints, voices, or dozens of other physical markers. These gadgets' torrential output will feed into the internet's sophisticated successors--government and corporate databases that hunt for connections and make inferences based on them. It all sounds pretty dreadful, and Washington Post reporter Robert O'Harrow Jr. reinforces this disheartening view with copious detail in his new book, No Place to Hide, offering one of the most thorough litanies of information and privacy abuse since Simson Garfinkel's Database Nation: The Death of Privacy in the 21st Century (2000). O'Harrow's complaints about the behavior of voracious data-mining groups, such as Acxiom, LexisNexis, and ChoicePoint, are accurate and timely. Just after No Place to Hide was published, several companies were caught violating either their own privacy-protection rules or their legal obligations to safeguard private data. Security breaches at ChoicePoint and LexisNexis exposed tens of thousands of supposedly secure private records, including credit card information and Social Security numbers. Such events steadily erode trust and increase the near-term danger that we all face from crimes like identity theft. O'Harrow shows that privacy problems are nothing new, giving readers a historical context that builds on Vance Packard's The Naked Society (1964) and Arthur R. Miller's Assault on Privacy (1971) and includes a rundown of post-Watergate reforms that were supposed to end surveillance abuses. Indeed, many of today's arguments about the proper balance between privacy rights and law enforcement needs are rooted in the pre-internet era. But today, the government is vastly better equipped, with aptly named tools like "Carnivore" and "The Matrix" that empower the agencies of our paid protector caste to penetrate telephone lines, e-mail traffic, myriad databases, and more, sifting for anything that their constantly shifting criteria might deem threat-related. The post-9/11 era, which spawned angst over an amorphous and ill-defined enemy, has aggravated a political divide that has been around for years. 
This chasm separates those who emphasize a need for enhanced security from those who urge that we accept a little added risk in order to preserve traditional liberties. Nowhere has this edgy debate swirled more bitterly than around provisions of the PATRIOT Act, which dramatically bolstered the federal government's wiretap and surveillance powers while at the same time shrouding law enforcement activity in a haze of heightened secrecy. O'Harrow chronicles the evolution of this landmark law in New Journalism style, "through the eyes of" such players as Vermont Democratic Sen. Patrick Leahy and Assistant Attorney General Viet Dinh (who both pushed for different versions of an anti-terrorism bill after 9/11), following them at breakfast, in the car, and through the meetings in which adamant, no-compromise positions took form. Forsaking any pretense of impartiality, O'Harrow venerates the ACLU lobbyists who "grasped the difficulty of their position. They were trying to persuade Americans to hold fast to concerns about individual freedom and privacy while the vast majority of people were terrified." No passage better illustrates O'Harrow's approach to this serious issue, typifying the snobbery of those on both right and left who share a common need to portray the American people as hapless sheep who either require protection from terrorists, or protection from overprotection. Only a few commentators, notably the Boston Globe's Elaine Scarry, have pointed out that in fact most Americans did not panic or act terrified on, or after, 9/11. If you want a detailed series of anecdotes showing how databases and data mining can be surreptitiously abused, then read No Place to Hide and find out how many groups, from industry to government to criminal gangs, are trying to gather information on you. Campaigns to control this information-gathering frenzy--for example, when Congress stopped the Defense Advanced Research Projects Agency's efforts to achieve "total information awareness"--simply drive the trend toward universal data collection underground. Meanwhile, supposedly secure systems, like those at LexisNexis, are breached with apparent regularity. And once information floats free, there is no calling it back. As O'Harrow follows real-life characters, showing their quirky hobbies and their passionate battles for or against privacy rights, another common theme emerges. Everyone appears to accept the underlying premise of a zero-sum game, a "great dichotomy"--the notion that one must choose, or strike some balance, between freedom and safety. One can hardly blame O'Harrow for imbuing his book with an assumption that seems both widespread and intuitively obvious. Wisconsin Democratic Sen. Russell Feingold, a very smart man, nevertheless accepted this trade-off during the PATRIOT Act debates: "There is no doubt that if we lived in a police state, it would be easier to catch terrorists. If we lived in a country where the police were allowed to search your home at any time, for any reason. . . . But that would not be a country in which we would want to live." A truism--but truisms can mislead. The notion of a freedom-security trade-off insidiously serves the interests of those who oppose freedom, because there will inevitably come days when security seems paramount to a frightened public. Liberty will lose any resulting ratchet effect. But the dismal "trade-off" notion is disproved by a simple counter-example--us. In all of history, no people were ever so safe and so free. 
A civilization of vigilant, technologically empowered problem-solvers can and should make safety and freedom codependent. Citizens may learn to thrive, even in an environment where various elites know much about them. "Surveillance comes with a price," O'Harrow writes. "It dulls the edge of public debate, imposes a sense of conformity, introduces a feeling of being watched. It chills culture and stifles dissent." This, too, sounds intuitively obvious, and it was almost certainly true in most other societies, in which narrow elites monopolized both power and the flow of information. But should we accept it, unexamined, as a valid assumption about the United States today? One might hope that after listing numerous threats to privacy, O'Harrow would offer some solutions or ideas for change. Unfortunately, he goes from introduction to conclusion without ever proposing a single suggestion, or even a palliative, to remedy burgeoning surveillance and its accompanying trends. The book lives up to its gloomy title and premise--its implicit prescription is "grumble at the inevitable." It did not have to be that way. Early in No Place to Hide, O'Harrow quotes a prophetic speech in which President Dwight D. Eisenhower warned against "the acquisition of unwarranted influence . . . by the military-industrial complex." (Former Clinton administration privacy counselor Peter Swire has echoed this famous admonition by referring to a security-industrial complex--an apt comparison.) O'Harrow then continues with an even more cogent excerpt from the same speech: "Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals, so that security and liberty may prosper together." There it is--an alternative to grumbling and to the dichotomy of gloom. Alas, O'Harrow offers the quotation and then moves quickly on, never commenting on how Eisenhower's statement differs so vastly from others in the book. For while warning of danger, Ike also spoke of opportunity and offered pragmatic ingredients for a genuine solution--a positive-sum solution based on Americans doing what they do best--living both safe and free. Human destiny is not predetermined by any of the flourishing surveillance technologies that O'Harrow details. I agree that nothing we can do will stop the ballooning growth of databases and microscopic cameras, proliferating across the land like crocuses after a spring rain. And yet I remain optimistic because educated citizens of a modern civilization may be capable of playing a different role than the one plotted out for them by smug elites, a role other than as bleating sheep. David Brin, a scientist, public speaker, and writer, is the author of the novel The Postman (1985) and the nonfiction book The Transparent Society (1998), which deals with openness, security, and liberty in the wired age. From checker at panix.com Sun Sep 4 00:44:05 2005 From: checker at panix.com (Premise Checker) Date: Sat, 3 Sep 2005 20:44:05 -0400 (EDT) Subject: [Paleopsych] NYT: How India Reconciles Hindu Values and Biotech Message-ID: How India Reconciles Hindu Values and Biotech http://www.nytimes.com/2005/08/21/weekinreview/21mishra.html By PANKAJ MISHRA LONDON In 2001, President Bush restricted federal financing for stem cell research. The decision, which was shaped at least partly by the Republican Party's evangelical Christian base, and which disappointed many American scientists and businessmen, provoked joy in India. 
The weekly newsmagazine India Today, read mostly by the country's ambitious middle class, spoke of a "new pot of gold" for Indian science and businesses. "If Indians are smart," the magazine said, American qualms about stem cell research "can open an opportunity to march ahead." Just four years later, this seems to have occurred. According to Ernst & Young's Global Biotechnology Report in 2004, Indian biotechnology companies are expected to grow tenfold in the next five years, creating more than a million jobs. With more than 10,000 highly trained and cheaply available scientists, the country is one of the leading biotechnology powers along with Korea, Singapore, China, Japan, Sweden, Britain and Israel. A top Indian corporation, the Reliance Group, owns Reliance Life Sciences, which is trying to devise new treatments for diabetes and Parkinson's and Alzheimer's diseases, and create human skin, blood and replacement organs genetically matched to their intended recipients. Some scientists have even more ambitious ideas. Encouraged by the cloning of a sheep by British scientists in 1996, they plan to do the same with endangered species of Indian lions and cheetahs. American scientists and businessmen note enviously that religious and moral considerations do not seem to inhibit Indian biotechnologists. But this indifference to ethical issues would have certainly appalled Gandhi, father of the Indian nation. Gandhi accused Western medicine, along with much of modern science and technology, of inflicting violence upon human nature. His vegetarianism and belief in nonviolence were derived from Indian traditions, mainly Hinduism, which is also the faith, though loosely defined, of most Indian scientists and businessmen. Indeed, most evangelical Christians, who believe that the embryo is a person, may find more support in ancient Hindu texts than in the Bible. Many Hindus see the soul - the true Self (or atman) - as the spiritual and imperishable component of human personality. After death destroys the body, the soul soon finds a new temporal home. Thus, for Hindus as much as for Catholics, life begins at conception. The ancient system of Indian medicine known as Ayurveda assumes that fetuses are alive and conscious when it prescribes a particular mental and spiritual regimen to pregnant women. This same assumption is implicit in "The Mahabharata," the Hindu epic about a fratricidal war apparently fought in the first millennium B.C. In one of its famous stories, the warrior Arjuna describes to his pregnant wife a seven-stage military strategy. His yet-to-born son Abhimanyu is listening, too. But as Arjuna describes the seventh and last stage, his wife falls asleep, presumably out of boredom. Years later, while fighting his father's cousins, the hundred Kaurava brothers, Abhimanyu uses well the military training he has learned in his mother's womb, until the seventh stage, where he falters and is killed. But the religions and traditions we know as Hinduism are less monolithic and more diverse than Islam and Christianity; they can yield contradictory arguments. Early in "The Mahabharata," there is a story about how the hundred Kaurava brothers came into being. Their mother had produced a mass of flesh after two years of pregnancy. But then a sage divided the flesh into 100 parts, which were treated with herbs and ghee, and kept in pots for two years - from which the Kaurava brothers emerged. 
Indian proponents of stem-cell research often offer this story as an early instance of human cloning through stem cells extracted from human embryos. They do not mention that "The Mahabharata" presents the birth of the hundred Kaurava brothers as an ominous event. Other Asian scientists have also pressed myth and tradition into the service of modern science and nationalism. In South Korea, where a third of the population is Buddhist, a scientist who cloned human embryonic stem cell lines claimed that he was "recycling" life just as reincarnation does. But spiritual tradition cannot solve all the ethical issues raised by science's progress through the third world. Ultrasound scans help many women in India to abort female fetuses; a girl child is still considered a burden among Indians. The trade in human organs, especially kidneys, remains a big business, despite growing scrutiny by the police. It is not hard to imagine that, as stem cell research grows in India, and remains unregulated, a small industry devoted to the creation of human embryos would soon develop. In any case, biotechnology may offer only pseudo-answers to many of India's urgent problems. For one thing, if and when lions and cheetahs emerge from biotechnology labs, the steadily deforested Indian countryside may not have a place for them. Stem cell research is also expensive, and seems glaringly so in a country which does not provide basic health care for most of its people. The advanced treatments promised by biotechnology are likely to benefit the rich, at least for the first few years. In the meantime, the poor may be asked to offer themselves as guinea pigs. In an article on biotechnology last year, India Today asserted: "India has another gold mine - the world's largest population of 'na?ve' sick patients, on whom no medicine has ever been tried. India's distinct communities and large families are ideal subjects for genetic and clinical research." Scientism has few detractors in India; and the elites find it easy to propose technological rather than political and moral solutions to the problems of poverty, inequality and environmental damage. Obsessed with imitating Western consumer lifestyles, most middle-class Indians are unlikely to have much time for Gandhi's belief that "civilization consists not in the multiplication of wants but in the deliberate and voluntary reduction of wants." They subscribe to a worldly form of Hinduism - one that now proves to be infinitely adjustable to the modern era, endorsing nuclear bombs and biotechnology as well as India's claim to be taken seriously as an emerging economic and scientific superpower. Pankaj Mishra, an Indian novelist and journalist, is the author, most recently, of "An End to Suffering: The Buddha in the World." He lives in London and India. From checker at panix.com Sun Sep 4 00:44:14 2005 From: checker at panix.com (Premise Checker) Date: Sat, 3 Sep 2005 20:44:14 -0400 (EDT) Subject: [Paleopsych] Archeology Mag: The Fate of Greenland's Vikings Message-ID: The Fate of Greenland's Vikings http://www.archaeology.org/online/features/greenland/ [In the last few paragraphs you'll encounter Thomas McGovern's theory that the reason for the fate of Greenland was the hidebound thinking of the Vikings at the time. Declining birthrates, too. Flash forward a millennium!] 
The Fate of Greenland's Vikings February 28, 2000 by Dale Mackenzie Brown Arm of Ericsfjord, on which Eric the Red had his farm (Dale Mackenzie Brown) Some people call it the Farm under the Sand, others Greenland's Pompeii. Dating to the mid-fourteenth century, it was once the site of a Viking colony founded along the island's grassy southwestern coast that stretches in a fjord-indented ribbon between the glaciers and the sea. Archaeologists Jette Arneborg of the Danish National Museum, Joel Berglund of the Greenland National Museum, and Claus Andreasen of Greenland University could not have guessed what would be revealed when they excavated the ruins of the five-room, stone-and-turf house in the early 1990s. As the archaeologists dug through the permafrost and removed the windblown glacial sand that filled the rooms, they found fragments of looms and cloth. Scattered about were other household belongings, including an iron knife, whetstones, soapstone vessels, and a double-edged comb. Whoever lived here departed so hurriedly that they left behind iron and caribou antler arrows, weapons needed for survival in this harsh country, medieval Europe's farthest frontier. What drove the occupants away? Where did they go? [4][image] Map of Greenland showing settlements (Lynda D'Amico) [5][LARGER IMAGE] The disappearance of the Greenlanders has intrigued students of history for centuries. One old source held that Skraelings, or Inuit, who had crossed over from Ellesmere Island in the far north around A.D. 1000, migrated down the west coast and overran the settlement. Ivar Bardarson, steward of the Church's property in Greenland, and a member of a sister settlement 300 miles to the southeast, was said to have gathered a force and sailed northwest to drive the interlopers out, but "when they came hither, behold they found no man, neither Christian nor heathen, naught but some wild cattle and sheep, and they killed as many of the wild cattle and sheep as they could carry and with them returned to their houses." The death of the Western Settlement portended the demise of the larger eastern one a century later. Of the first 24 boatloads of land-hungry settlers who set out from Iceland in the summer of 986 to colonize new territory explored several years earlier by the vagabond and outlaw, Erik the Red, only 14 made it, the others having been forced back to port or lost at sea. Yet more brave souls, drawn by the promise of a better life for themselves, soon followed. Under the leadership of the red-faced, red-bearded Erik (who had given the island its attractive name, the better to lure settlers there), the colonists developed a little Europe of their own just a few hundred miles from North America, a full 500 years before Columbus set foot on the continent. They established dairy and sheep farms throughout the unglaciated areas of the south and built churches, a monastery, a nunnery, and a cathedral boasting an imported bronze bell and greenish tinted glass windows. The Greenlanders prospered. From the number of farms in both colonies, whose 400 or so stone ruins still dot the landscape, archaeologists guess that the population may have risen to a peak of about 5,000. Trading with Norway, under whose rule they eventually came, the Greenlanders exchanged live falcons, polar bear skins, narwahl tusks, and walrus ivory and hides for timber, iron, tools, and other essentials, as well luxuries such as raisins, nuts, and wine. 
Excavations of Erik's farm, Brattahlid ("Steep Slope"), in 1932 by Danish archaeologists (Greenland, which became Danish in 1814, is today a self-governing possession of Denmark), revealed the remains of a church, originally surrounded by a turf wall to keep farm animals out, and a great hall where settlers cooked in fire pits, ate their meals, recited sagas, and played board games. Behind the church they found ruins of a cow barn, with partitions between the stalls still in place, one of them the shoulder blade of a whale--a sign of Viking practicality in a treeless land where wood was always in short supply. [6][image] Church ruins with outer protective wall designed to keep out farm animals (Dale Mackenzie Brown) [7][LARGER IMAGE] [8][LARGER IMAGE] [9][image] In 1961 workmen discovered near the barn a tiny horseshoe-shaped chapel built for Erik's wife Thjodhilde. When Erik and his supporters arrived in Greenland, the old Norse gods were still worshiped. Erik, a believer, upheld the ancient fatalistic philosophy of his Viking ancestors, but Thjodhilde converted to Christianity. Erik refused to surrender his beliefs, and Thjodhilde held steadfastly to hers. In time he granted her a small church 6.5 feet wide and 11.5 feet long, with room for 20 to 30 worshipers. During the excavations of Thjodhilde's chapel and its immediate surroundings in the 1960s, Danish archaeologists uncovered 144 skeletons. Most of these indicated tall, strong individuals, not very different in build from modern Scandinavians. One male skeleton was found with a large knife between the ribs, evidence of violence on Greenland's frontier. A mass grave south of the church contained 13 bodies. According to Niels Lynnerup of the Panum Institute of the University of Copenhagen, who performed forensic work on the remains, the bodies were male, ranging from teens to middle age, with head and arm wounds suggesting they may have died in battle. The most compelling finds were three skeletons interred close to the church wall, just beneath where the eaves would have been. According to medieval Church accounts, those buried closest to the church were first in line for Judgement Day. Who were these three individuals? The archaeologists' best guess was that they were none other than Thjodhilde, Erik and their famous son, Leif, who around the year 1,000 had set sail from Brattahlid on his epochal journey to America. Today, their bones rest on laboratory shelves in Copenhagen. With the islanders' early success came a desire to have someone of authority oversee the work of the Church in Greenland. Early in the twelfth century they dispatched one of their leaders, Einar Sokkason, to Norway to convince the king to send them a bishop. Bishop Arnald was chosen for the job, despite the hapless man's protestation that "I am no good at handling difficult people." Apparently the Greenlanders had a well-developed reputation for contentious behavior. Still, they provided Arnald with one of their finest farms, Gardar, on a fjord not far from Brattahlid. Here they erected a cathedral, built of the local reddish sandstone and dedicated to the patron saint of seafarers, St. Nicholas; with a meeting hall capable of holding several hundred people; a large barn for 100 cows; and tithe barns to contain the goods that would be religiously collected from the farmers by priests and set aside for Rome.
[10][image] Ruins of the tithe barns where goods collected from the farmers in the Church's name were kept (Dale Mackenzie Brown) [11][LARGER IMAGE] Although the presence of the Church had originally uplifted the Greenlanders, it now became their burden. By the middle of the fourteenth century, it owned two-thirds of the island's finest pastures, and tithes remained as onerous as ever, some of the proceeds going to the support of the Crusades half way around the world and even to fight heretics in Italy. Church authorities, however, found it increasingly difficult to get bishops to come to the distant island. Several clerics took the title, but never actually went there, preferring to bestow their blessings from afar. Foundation stone of the Norse cathedral (Dale Mackenzie Brown) [12][LARGER IMAGE] [13][image] Life went sour for the Greenlanders in other ways. The number of Norwegian merchant vessels arriving in their ports, though only one or two a year in the best of times, dropped until none came at all. This meant that the islanders were cut off from the major source of iron and tools needed for the smooth running of their farms and the construction and maintenance of their boats. Norway's long dominance of the northern sea trade withered as Germany's Hanseatic League rose to ascendancy. Although the league's bigger ships could carry more cargo than Norwegian vessels, they apparently never anchored in Greenland. The dangerous ocean crossing would have put them at too much risk for too little gain, especially now that elephant ivory, once difficult to obtain, could be gotten easily from Africa and replaced walrus ivory in prominence. As the Greenlanders' isolation from Europe grew, they found themselves victims of a steadily deteriorating environment. Their farmland, exploited to the full, had lost fertility. Erosion followed severe reductions in ground cover. The cutting of dwarf willows and alders for fuel and for the production of charcoal to use in the smelting of bog iron, which yielded soft, inferior metal, deprived the soil of its anchor of roots. Pollen analysis shows a dramatic decline in these species during the Viking years. In addition, livestock probably consumed any regenerating scrub. Overgrazing, trampling, and scuffing by the Norsemen's sheep, goats and cattle, the core of the island's livelihood, left the land debased. Greenland's climate began to change as well; the summers grew shorter and progressively cooler, limiting the time cattle could be kept outdoors and increasing the need for winter fodder. During the worst years, when rains would have been heaviest, the hay crop would barely have been adequate to see the penned animals through the coldest days. Over the decades the drop in temperature seems to have had an effect on the design of the Greenlanders' houses. Originally conceived as single-roomed structures, like the great hall at Brattahlid, they were divided into smaller spaces for warmth, and then into warrens of interconnected chambers, with the cows kept close by so the owners might benefit from the animals' body heat. [14][image] Site of the great hall with sheep resting on the foundation. In a similar building, perhaps on the very spot, Leif Ericson may well have entertained family and friends with tales of his North American exploits. (Dale Mackenzie Brown) [15][LARGER IMAGE] When the Norsemen arrived in Greenland, they had the island and its waters to themselves. Now they had to contend with the Inuit, who were competing with them for animal resources. 
This was especially true in the Nordseta, the Greenlanders' traditional summer hunting grounds 240 miles north of the Eastern Settlement. For years the Norsemen had been traveling to the area; they killed the walruses, narwahls, and polar bears they needed for trade with Europe and for payment of Church tithes and royal taxes. They also boiled seal blubber, filled skin bags with the oil, and gathered valuable driftwood. Inuit-Norse relations seem to have been fairly friendly at times, hostile at others. Few Inuit objects have been unearthed at the farms. Various Norse items, including bits of chain mail and a hinged bronze bar from a folding scale, have been found at Inuit camps in Greenland, mainland Canada, and on Baffin, Ellesmere, and Devon Islands. These are suggestive of commerce between the two peoples, but they may also have been seized by Inuit during raids on hunting parties in the Nordseta or plundered from farms. Norse mention of the Inuit is curiously scant in the surviving documents. An old story tells of hunters coming across "small people," the Skraelings, with whom the Greenlanders apparently fought. The text says that when these people "are hit, their wounds turn white and they do not bleed, but when they die there is no end to the bleeding." The next account is that of Ivar Bardarson in his Description of Greenland; Bardarson reported on the take-over of the Western Settlement by the Skraelings, with the implication that they had killed the inhabitants. Years later, another source describes a Skraeling attack in the Eastern Settlement, in which 18 Greenlanders met their deaths and two boys and a woman were captured. As Canadian archaeologist Robert McGhee has pointed out, there is no physical evidence of massacres, the destruction of Norse property, or the takeover and reuse of Norse shelters by the Inuit, and nothing in Inuit tales of Inuit-Norse contact to back up Bardarson's claim. One valley farm, excavated in 1976 and 1977, revealed just how desperate some of the Greenlanders had become. During a freezing winter, the farmers killed and ate their livestock, including a newborn calf and lamb, leaving the bones and hoofs on the ground. Even the deerhound, probably the companion of many a hunt, may have been slaughtered for food; one of its leg bones bore the knicks of a knifeblade. Similar remains were found on another farm, but if, like their masters, the animals were starving, their fatless meat would have offered little nourishment. Whoever killed the animals was used to living in squalid conditions. The bone-littered earthen floors had been spread with an insulating layer of twigs that attracted mice and a variety of insect pests. Study of the farms' ancient insect fauna revealed the remains of flies. Brought inadvertently from Europe, the flies were dependent for their survival on the warm environment of the Norse houses and on the less than sanitary state of the interiors. Radiocarbon dating of their remains revealed that they died out suddenly when these conditions ceased to prevail around 1350, presumably when the structures were no longer inhabited. Some of the rooms had been used as latrines, possibly out of habit or because the occupants were reluctant to venture out into the searing cold. An ice core drilled from the island's massive icecap between 1992 and 1993 shows a decided cooling off in the Western Settlement during the mid-fourteenth century. Ruins of a barn. 
Upright stones divided the cow stalls; a whale shoulder blade (white partition on right) also served as a divider. (Dale Mackenzie Brown) [16][LARGER IMAGE] [17][image] A church graveyard at Herjolfsnes on the southernmost tip of Greenland sheds further light on the final days of the Eastern Settlement. Reports reached Danish archaeologists in the 1920s that the cemetery was being washed away by the sea and that bones and scraps of clothing from the graves were strewn on the beach. The archaeologists hurried to save what remained. The skeletons revealed a hard life; teeth showed heavy wear and the joints of many adults were thickened by rheumatism. Though the flesh had rotted away, the heavy woolen apparel the dead wore to the grave remained intact. No fewer than 30 robes, 17 hoods or cowls, five hats, and six woven stockings (knitting had yet to be invented) emerged from the frozen earth. Most of the robes were heavily patched, but were in good enough condition to be wearable. The clothes were thought to reflect French and Dutch fashions, an unexpected find in a country supposedly out of touch with the rest of the world at the time. The generously cut hoods provided ample covering for shoulders and featured a long, decorative streamer known as a liripipe that hung down the back and could be wrapped across the face or around the hands to provide extra warmth. The most intriguing find seemed to be a tall cap, rather like a stove-pipe hat but flared at the back and without a brim. The archaeologists thought they recognized it as a Burgundian cap, which they had seen in European paintings of the high middle ages. Yet oddly here it was in Greenland. How were they to explain this anomaly? Because of its location, Herjolfsnes had been the first port of call for ships from Iceland and northern Europe. Archaeologists wondered who might have come to Greenland after Norse traders ceased to arrive. The most likely answer was English sea rovers or Basque whalers. According to their own tradition, Basques founded a whaling station in Newfoundland as early as 1372. They had only to follow Leif Eriksson's route north to reach Greenland. The archaeologists working on the site surmised that these Basques might well have stepped ashore sporting the new fangled Burgundian cap, which some fashion-starved Greenlander rushed to copy. This suggested that the islanders, no matter how cut off they may have been from Europe, still hungered for things European. The questions persist: what happened in the end to the last of the Greenlanders? what fate did the people who laid their loved ones to rest in this graveyard by the sea meet? who buried them when they died, and where? did the Greenlanders give up the island and depart for North America, as was said of the western settlers? It is hard to imagine such a mass-migration occurring, if for no other reason than that the islanders lacked the boats to carry it out. Without a ready source of nails, bolts, and wood for repairs, any ships that may have survived from earlier days would have made a leaky fleet indeed. Were the Greenlanders killed off by the Black Plague? Iceland's population had been reduced by as much as two-thirds when an epidemic struck in 1402 and dragged on for two years. Norway had suffered similarly. Had the Greenlanders also been afflicted, mass graves would tell the tale of the dying, and none from this period have been discovered. Were the islanders subject to intermittent pirate raids? 
It is conceivable that ship-borne marauders, rather than Skraelings, could have descended on the Western Settlement, but who could they have been? Basques? Perhaps. The archaeological date for the "Burgundian cap", set at A.D. 1500, has since been over-turned by radiocarbon dating. The new date for the cap is around 1300, suggesting that it reflected Nordic rather than southern European fashion. Such high-crowned caps are mentioned in Icelandic sagas from 1200-1300 and have been found as examples of women's fashion from this period. Archaeologists initially questioned the feasibility of the theory of an attack on the Greenlanders by Basques, believing the cap to be exemplary of Basque-influenced fashion, which seemed to preclude the possibility that the Norse settlers and the Basques were enemies. The re-dated cap is no longer evidence of friendly Greenlander-Basque relations, and the Basques are once again possible culprits in the mystery of the disappearance of the Greenlanders. English and German pirates also made several brutal attacks on Iceland in the fifteenth century; possibly they struck Greenland as well, though the new dates for the Greenlanders' clothing suggests minimal, if any, contact with Europeans. One Inuit story, recorded by Niels Egede, a Dane who grew up in Greenland during the eighteenth century when Denmark recolonized the island, lends some credence to the story of European raids. The narrator, whose ancestors had passed down the tale, recounts how three alien ships sailed in from the southwest "to plunder." In the ensuing fray, several of the Norsemen, to whom he refers as Norwegians, were killed. "But after the Norwegians had mastered them," he relates, "two of the ships had to sail away and the third they captured. The next year a whole fleet arrived and fought with the Norwegians, plundering and killing to obtain food. The survivors put out their vessels, loaded with what was left, and sailed away south, leaving some people behind. The next year the pirates came back again, and when we saw them we fled, taking some of the Norwegian women and children with us up the fjord, and left the others in the lurch. When we returned in the autumn hoping to find some people again, we saw to our horror that everything had been carried away, and houses and farms were burned down so that nothing was left." Once again the absence of any archaeological evidence of such violence leaves the tale unsubstantiated. Of all the houses so far studied in the Western Settlement, only one can be said to have been destroyed by fire. If such raids happened in the larger Eastern Settlement there would be signs of the havoc they wrought. The churches of both colonies seem to have been stripped bare, but a people intent on protecting their contents would have removed the sacred items and hidden them or, if the Greenlanders were indeed the irreligious rapscallions some sources say they were, sold them. [18][image] A Danish monument to Eric the Red at Brattahlid (Dale Mackenzie Brown) [19][LARGER IMAGE] In the end, the answer to the Greenlander question may be a lot less dramatic than the theories that have surrounded it in mystery. Thomas McGovern of New York's Hunter College, who has participated in excavations in Greenland, has proposed that the Norsemen lost the ability to adapt to changing conditions. He sees them as the victims of hidebound thinking and of a hierarchical society dominated by the Church and the biggest land owners. 
In their reluctance to see themselves as anything but Europeans, the Greenlanders failed to adopt the kind of apparel that the Inuit employed as protection against the cold and damp or to borrow any of the Eskimo hunting gear. They ignored the toggle harpoon, which would have allowed them to catch seals through holes in the ice in winter when food was scarce, and they seem not even to have bothered with fishhooks, which they could have fashioned easily from bone, as did the Inuit. Instead, the Norsemen remained wedded to their farms and to the raising of sheep, goats, and cattle in the face of ever worsening conditions that must have made maintaining their herds next to impossible. McGovern also believes that as life became harder, the birthrate declined. The young people who did come along may have seen a brighter future waiting somewhere else. The depredations of the plague in Iceland and in Norway could have created vacancies overseas that able-bodied Greenlanders might have filled. Through the years there may have been a slow but persistent drift of Greenlanders to those places that had been home to their ancestors, further reducing the island's dwindling population. Not everyone would have left; some must have stayed on their homesteads, unable to give up old attachments and resolved to wait out their fate. One such stoic was found lying face down on the beach of a fjord in the 1540s by a party of Icelandic seafarers, who like so many sailors before them had been blown off course on their passage to Iceland and wound up in Greenland. The only Norseman they would come across during their stay, he died where he had fallen, dressed in a hood, homespun woolens and seal skins. Nearby lay his knife, "bent and much worn and eaten away." Moved by their find, the men took it as a memento and carried it with them to show when at last they reached home. Dale Mackenzie Brown, who lives in Alexandria, Virginia, was the editor of Time-Life Books' archaeology book series, Lost Civilizations. From shovland at mindspring.com Sat Sep 3 22:54:21 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Sat, 3 Sep 2005 18:54:21 -0400 (EDT) Subject: [Paleopsych] We the 80% Message-ID: <5758315.1125788061959.JavaMail.root@mswamui-chipeau.atl.sa.earthlink.net> Subject: We the 80% Declare that this is our country as much as it is theirs. Declare that it is immoral for a few to have so much when so many have so little. Declare that good government is a necessary part of civilization, to serve the common good. Declare that we should trade for the resources we need, not fight wars for them. Declare that good health care is a good investment, and should be provided for all. Declare that we should join the world community rather than trying to dominate it. Steve Hovland San Francisco September 3, 2005 From HowlBloom at aol.com Sun Sep 4 03:04:35 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Sat, 3 Sep 2005 23:04:35 EDT Subject: [Paleopsych] joel--time, distinction-making, and aggregation Message-ID: <1f3.11340f58.304bbe43@aol.com> In a message dated 9/3/2005 1:04:43 PM Eastern Standard Time, isaacsonj at hotmail.com writes: Yes. In my process the opposites are termed generically "quids" and "quods". In one-dimension, quids are broken lines and quods are continuous lines; very much like generalized yin and yang. And the whole process may be viewed as a generalized yin/yang system. One subtle thing to be kept in mind. 
Quid/quod (or separation/aggregation) are derived (or emergent) attributes from more primitive activities that involve local distinction-making. So, ultimately (recursive) distinction-making is the engine of all there is. hb: this is hearty meat for thought. What is responsible for the distinction-making process? What is its origin? Is time the maker of distinctions and of aggregations? Is time some sort of algorithm, a basic rule, for the extraction of new distinctions and aggregations from the data of the previous recursion, extracting implications from the previous instant of Planck time? Or, to put it differently, does time follow rules? Are those rules more complex than just "take the next step"? How does time extract the implicit and make it explicit? How does it translate what was into what is? Then how does it do it all over again? Is time a process, a mechanism? Is it something we can't comprehend with our current tool kit of metaphors? This is really a question the paper Pavel Kurakin and I have written moves toward answering in a very primitive and speculative way: http://arxiv.org/ftp/quant-ph/papers/0504/0504088.pdf. Howard ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change; executive editor -- New Paradigm book series. For information on The International Paleopsychology Project, see: www.paleopsych.org for two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net From HowlBloom at aol.com Sun Sep 4 04:29:35 2005 From: HowlBloom at aol.com (HowlBloom at aol.com) Date: Sun, 4 Sep 2005 00:29:35 EDT Subject: [Paleopsych] Eshel, Joel, Paul, and Pavel--not to mention Ted and Greg Message-ID: Very good thinking. Below is the basic search pattern that I see in life forms from bacteria to humans. I have a question. Is this search pattern mirrored in inanimate evolution--in the 10 billion years of cosmic evolution that occurred before life began? Is there an analog or precursor to this pattern in the evolution of inanimate matter? Howard everyone is insecure. insecurity is one of the things that keeps us attached to each other and to society. Uncertainty and the nervous sense that we'd better get a quick reality check is one of the prime movers of the group brain. insecurity is so basic to life that even ants and bacteria get insecure. they need to rub up against each other for reassurance over and over and over again. chimps too.
They dash out of the group for an adventure, get insecure as hell, dash back, and when they plunge into the warmth of the crowd and rub up against as many of their sisters or brothers as it takes to calm them down, they accomplish something more than mere self-comfort. They give a bit of information on the territory they've just explored. Each does a little antenna-and-scouting work for the crowd. Each gets a little from the antenna work of her insecure sisters who've dashed out ebulliently to explore, then have gotten the shivers and come back to share their experience and get some much needed warmth. Even when we move into strange emotional territory, we need to dash back and share it with a friend to make sure we're sane and to get reassurance. In the process we reveal a bit of emotional exploration to the friend. Ever wonder about why one of the largest churches in the history of mankind was able to make such an enormous business out of confession? There are some things so shameful we can't even talk about them with our friends. So who's in the business of listening to what we don't dare tell a soul? Not only listening and affirming us, but absolving us to boot? Yes, the good old Catholic Church. Our insecurities keep us together as an information processing engine. Our restlessness keeps us going off in new directions so we'll have something to share. Every time we panic and run to talk to a friend we are providing new stuff for the data cruncher of society to munch. so what's true of chimps and ants and microbes is gonna be true for you and me. In a message dated 9/2/2005 11:00:57 AM Eastern Standard Time, obi.fox at gmail.com writes: Howl, while I fundamentally agree with your position, my present fascination with the neurophysiology of the biological mechanism leads me to suggest that it may not be the function of "search" itself which will lead to the answer you seek - but rather that of the manner in which it is mitigated and modified by the function of avoidance. As my grandfather (dean of physics at Columbia for 30 yrs) taught me in my youth - one will never fully understand the qualities of attraction unless you are able to account for the factor of repulsion as well. When you look at much of the present research in neurophysiology a pattern begins to appear in which "search" is the primary S/R response of the biomass. It is a pretty straight-forward, target oriented, response of distance reduction. The patterns or "strategies" only emerge when you factor in the manner in which the braking mechanism (avoidance sequences) operates simultaneously and changes/redirects the movement and orientation. The third factor in the equation, which is usually overlooked, is the role of proximity - particularly in relation to avoidance/repulsion. I strongly suspect that this combination (repulsion/proximity) is the factor which Tsallis's equation takes into account. As I am not a physicist (just a philosopher with a fascination with the behavioral mechanism) I can't give you much more than this clue. I am, however, reasonably sure that it is the critical factor in the movement and orientation of sentient organisms and strongly suspect it is, as you suggest, a reflection of the underlying physical laws of the universe. cheers OBi Fox On 8/31/05, HowlBloom at aol.com wrote:
The standard means of describing the collective properties of large numbers of particles - known as statistical mechanics - has been hugely successful for more than a century, but it has also been rather limited in its scope: you can only apply it to a narrow range of systems.
Now, with an insight plucked out of thin air, Tsallis may have changed all that. Developed in the 19th century, statistical mechanics enabled physicists to overcome an imposing problem. Ordinary materials such as water, iron or glass are made of myriad atoms. But since it is impossible to calculate in perfect detail how every individual atom or molecule will move, it seems as if it might never be possible to understand the behaviour of such substances at the atomic level. The solution, as first suggested by the Austrian physicist Ludwig Boltzmann, lay in giving up hope of perfect understanding and working with probabilities instead. Boltzmann argued that knowing the probabilities for the particles to be in any of their various possible configurations would enable someone to work out the overall properties of the system. Going one step further, he also made a bold and insightful guess about these probabilities - that any of the many conceivable configurations for the particles would be equally probable.
Deeper beauty
Boltzmann's idea works, and has enabled physicists to make mathematical models of thousands of real materials, from simple crystals to superconductors. But his work also has a deeper beauty. For a start, it reflects the fact that many quantities in nature - such as the velocities of molecules in a gas - follow "normal" statistics. That is, they are closely grouped around the average value, with a "bell curve" distribution. The theory also explains why, if left to their own devices, systems tend to get disorganised. Boltzmann argued that any system that can be in several different configurations is most likely to be in the more spread out and disorganised condition. Air molecules in a box, for example, can gather neatly in a corner, but are more likely to fill the space evenly. That's because there are overwhelmingly more arrangements of the particles that will produce the spread out, jumbled state than arrangements that will concentrate the molecules in a corner. This way of thinking led to the famous notion of entropy - a measure of the amount of disorder in a system. In its most elegant formulation, Boltzmann's statistical mechanics, which was later developed mathematically by the American physicist Josiah Willard Gibbs, asserts that, under many conditions, a physical system will act so as to maximise its entropy. And yet Boltzmann and Gibbs's statistical mechanics doesn't explain everything: a great swathe of nature eludes its grasp entirely. Boltzmann's guess about equal probabilities only works for systems that have settled down to equilibrium, enjoying, for example, the same temperature throughout. The theory fails in any system where destabilising external sources of energy are at work, such as the haphazard motion of turbulent fluids or the fluctuating energies of cosmic rays. These systems don't follow normal statistics, but another pattern instead. If you repeatedly measure the difference in fluid velocity at two distinct points in a turbulent fluid, for instance, the probability of finding a particular velocity difference is roughly proportional to the amount of that difference raised to the power of some exponent. This pattern is known as a "power law", and such patterns turn up in many other areas of physics, from the distribution of energies of cosmic rays to the fluctuations of river levels or wind speeds over a desert.
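A minimal numerical sketch of that contrast - every number below, including the Pareto exponent, is an illustrative choice and not taken from the article: quantities governed by "normal" bell-curve statistics almost never stray far from the mean, while a power-law distribution gives large, rare events real weight.

# Sketch: tail weight of a bell-curve distribution versus a power law.
# All parameter values are illustrative, not drawn from the article.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

gaussian = rng.normal(loc=0.0, scale=1.0, size=n)   # "normal" statistics: bell curve
power_law = rng.pareto(a=1.75, size=n)              # heavy tail: P(X > x) ~ x**-1.75 for large x

threshold = 10.0  # a "large event", ten standard deviations of the Gaussian
print("Gaussian fraction beyond threshold: ", np.mean(np.abs(gaussian) > threshold))
print("Power-law fraction beyond threshold:", np.mean(power_law > threshold))
# The Gaussian essentially never produces such an event in a million samples;
# the power law does so for roughly one or two samples in a hundred.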
Because ordinary statistical mechanics doesn't explain power laws, their atomic-level origins remain largely mysterious, which is why many physicists find Tsallis's mathematics so enticing. In Mexico City, coming out of his reverie, Tsallis wrote up some notes on his idea, and soon came to a formula that looked something like the standard formula for the Boltzmann-Gibbs entropy - but with a subtle difference. If he set q to 1 in the formula - so that pq became the probability p - the new formula reduced to the old one. But if q was not equal to 1, it made the formula produce something else. This led Tsallis to a new definition of entropy. He called it q entropy. Back then, Tsallis had no idea what q might actually signify, but the way this new entropy worked mathematically suggested he might be on to something. In particular, the power-law pattern tumbles out of the theory quite naturally. Over the past decade, researchers have shown that Tsallis's mathematics seem to describe power-law behaviour accurately in a wide range of phenomena, from fluid turbulence to the debris created in the collisions of high-energy particles. But while the idea of maximising q entropy seems to work empirically, allowing people to fit their data to power-law curves and come up with a value of q for individual systems, it has also landed Tsallis in some hot water. The new mathematics seems to work, yet no one knows what the q entropy really represents, or why any physical system should maximise it. Degrees of chaos And for this reason, many physicists remain sceptical, or worse. "I have to say that I don't buy it at all," says physicist Cosma Shalizi of the University of Michigan in Ann Arbor, who criticises the mathematical foundations of Tsallis's approach. As he points out, the usual Boltzmann procedure for maximising the entropy in statistical mechanics assumes a fixed value for the average energy of a system, a natural idea. But to make things work out within the Tsallis framework, researchers have to fix the value of another quantity - a "generalised" energy - that has no clear physical interpretation. "I have yet to encounter anyone," says Shalizi, "who can explain why this should be natural." To his critics, Tsallis's success is little more than sleight of hand: the equation may simply provide a convenient way to generate power laws, which researchers can then fit to data by choosing the right value of q "My impression," says Guido Caldarelli of La Sapienza University in Rome, "is that the method really just fits data by adjusting a parameter. I'm not yet convinced there's new physics here." Physicist Peter Grassberger of the University of Wuppertal in Germany goes further. "It is all nonsense," he says. "It has led to no new predictions, nor is it based on rational arguments." The problem is that most work applying Tsallis's ideas has simply chosen a value of q to make the theory fit empirical data, without tying q to the real dynamics of the system in any deeper way: there's nothing to show why these dynamics depart from Boltzmann's picture of equal probabilities. Tsallis, who is now at the Santa Fe Institute in New Mexico, acknowledges this is a limitation, but suggests that a more fundamental explanation is already on its way. Power laws, he argues, should tend to arise in "weakly chaotic" systems. In this kind of system, small perturbations might not be enough to alter the arrangement of molecules. As a result, the system won't "explore" all possible configurations over time. 
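The article describes Tsallis's formula only in words; in standard notation his q entropy is

\[ S_q = k \, \frac{1 - \sum_i p_i^q}{q - 1} , \]

which reduces to the Boltzmann-Gibbs form S_{BG} = -k \sum_i p_i \ln p_i in the limit q \to 1 (write p_i^q = p_i e^{(q-1)\ln p_i} and keep the first-order term in q - 1). Maximising S_q under the appropriate constraints replaces the exponential with the "q-exponential" distribution

\[ p(E) \propto \bigl[ 1 - (1 - q)\,\beta E \bigr]^{1/(1-q)} , \]

which for q > 1 decays at large E as the power law E^{-1/(q-1)}. That is the sense in which the power-law pattern "tumbles out of the theory quite naturally", and it is roughly how a value of q gets fitted to data. Whether a given system ends up with this distribution or with the ordinary exponential is, in Tsallis's picture, a matter of how thoroughly its dynamics explore the available configurations.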
In a properly chaotic system, on the other hand, even tiny perturbations will keep sending the system into new configurations, allowing it to explore all its states as required for Boltzmann statistics. Tsallis argues that if physicists can adequately understand the details of this "exploring behaviour", they should be able to predict values of q from first principles . In particular, he proposes, some as yet unknown single parameter - closely akin to q - should describe the degree of chaos in any system. Working out its value by studying a system's basic dynamics would then let physicists predict the value of q that then emerges in its statistics. Other theoretical work seems to support this possibility. For example, in a paper soon to appear in Physical Review E, physicist Alberto Robledo of the National Autonomous University of Mexico in Mexico City has examined several classic models that physicists have used to explore the phenomenon of chaos. What makes these models useful is that they can be tuned to be more or less chaotic - and so used to explore the transition from one kind of behaviour to another. Using these model systems, Robledo has been able to carry out Tsallis's prescription, deriving a value of q just from studying the system's fundamental dynamics. That value of q then reproduces intricate power-law properties for these systems at the threshold of chaos. "This work shows that q can be deduced from first principles," Tsallis says. While Robledo has tackled theoretical issues, other researchers have made the same point with real observations. In a paper just published, Leonard Burlaga and Adolfo Vinas at NASA's Goddard Space Flight Center in Greenbelt, Maryland, study fluctuations in the properties of the solar wind - the stream of charged particles that flows outward from the sun - and show that they conform to Tsallis's ideas. They have found that the dynamics of the solar wind, as seen in changes in its velocity and magnetic field strength, display weak chaos of the type envisioned by Tsallis. Burlaga and Vinas have also found that the fluctuations of the magnetic field follow power laws that fit Tsallis's framework with q set to 1.75 (Physica A, vol 356, p 275). The chance that a more comprehensive formulation of Tsallis's q entropy might eventually be found intrigues physicist Ezequiel Cohen of the Rockefeller University in New York City. "I think a good part of the establishment takes an unfair position towards Tsallis's work," he says. "The critique that all he does is 'curve fitting' is, in my opinion, misplaced." Cohen has also started building his own work on Tsallis's foundations. Two years ago, with Christian Beck of Queen Mary, University of London, he proposed an idea known as "superstatistics" that would incorporate the statistics of both Boltzmann and Tsallis within a larger framework. In this work they revisited the limitation of Boltzmann's statistical mechanics. Boltzmann's models cannot cope with any system in which external forces churn up differences such as variations in temperature. A particle moving through such a system would experience many temperatures for short periods and its fluctuations would reflect an average of the ordinary Boltzmann statistics for all those different temperatures. Cohen and Beck showed that such averaged statistics, emerging out of the messy non-uniformity of real systems, take the very same form as Tsallis statistics, and lead to power laws. 
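A minimal Python sketch of the Beck-Cohen mechanism just described, with the gamma-distributed inverse temperature, the sample count, and the fitting window chosen for illustration rather than taken from their paper: energies that are locally ordinary Boltzmann (exponentially distributed at whatever temperature their patch of the system happens to have) are pooled, and the pooled distribution picks up a power-law tail of the q-exponential type.

import numpy as np

rng = np.random.default_rng(0)
a, scale = 2.0, 1.0                     # shape/scale of the gamma-distributed inverse temperature (assumed)
n = 1_000_000

beta = rng.gamma(a, scale, n)           # each sample sees its own inverse temperature beta
energy = rng.exponential(1.0 / beta)    # locally ordinary Boltzmann statistics: p(E | beta) = beta * exp(-beta * E)

# Pool everything and examine the tail of the energy distribution on log-log axes.
bins = np.logspace(-2, 3, 60)
hist, edges = np.histogram(energy, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
tail = (centers > 5.0) & (hist > 0)
slope = np.polyfit(np.log(centers[tail]), np.log(hist[tail]), 1)[0]

print(f"fitted tail exponent {slope:.2f}; asymptotic value -(a + 1) = {-(a + 1):.2f}")

Integrating the gamma-weighted exponentials by hand gives a pooled density proportional to (E + 1/scale)^{-(a+1)}, so the fitted slope should come out near -3 for these parameters: fluctuating temperature alone, with no exotic dynamics, is enough to turn exponentials into a power law.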
In one striking example, Beck showed how the distribution of the energies of cosmic rays could emerge from random fluctuations in the temperature of the hot matter where they were originally created. Cohen thinks that, if nothing else, Tsallis's powers of probabilities have served to reawaken physicists to fundamental questions they have never quite answered. After all, Boltzmann's idea, though successful, was also based on a guess; Albert Einstein disliked Boltzmann's arbitrary assumption of "equal probabilities" and insisted that a proper theory of matter had to rest on a deep understanding of the real dynamics of particles. That understanding still eludes us, but Tsallis may have taken us closer. It is possible that, in his mysterious q entropy, Tsallis has discovered a kind of entropy just as useful as Boltzmann's and especially suited to the real-world systems in which the traditional theory fails. "Tsallis made the first attempt to go beyond Boltzmann," says Cohen. The door is now open for others to follow. ---------- Howard Bloom Author of The Lucifer Principle: A Scientific Expedition Into the Forces of History and Global Brain: The Evolution of Mass Mind From The Big Bang to the 21st Century Recent Visiting Scholar-Graduate Psychology Department, New York University; Core Faculty Member, The Graduate Institute www.howardbloom.net www.bigbangtango.net Founder: International Paleopsychology Project; founding board member: Epic of Evolution Society; founding board member, The Darwin Project; founder: The Big Bang Tango Media Lab; member: New York Academy of Sciences, American Association for the Advancement of Science, American Psychological Society, Academy of Political Science, Human Behavior and Evolution Society, International Society for Human Ethology; advisory board member: Institute for Accelerating Change; executive editor -- New Paradigm book series.
For information on The International Paleopsychology Project, see: www.paleopsych.org. For two chapters from The Lucifer Principle: A Scientific Expedition Into the Forces of History, see www.howardbloom.net/lucifer. For information on Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, see www.howardbloom.net. _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych From waluk at earthlink.net Sun Sep 4 05:51:37 2005 From: waluk at earthlink.net (Gerry Reinhart-Waller) Date: Sat, 03 Sep 2005 22:51:37 -0700 Subject: [Paleopsych] We the 80% In-Reply-To: <5758315.1125788061959.JavaMail.root@mswamui-chipeau.atl.sa.earthlink.net> References: <5758315.1125788061959.JavaMail.root@mswamui-chipeau.atl.sa.earthlink.net> Message-ID: <431A8B69.2000703@earthlink.net> Trading for resources is a noble idea except when all parties are after the same resource, i.e., oil. I'm clueless how oil could equitably be bartered. There is not enough oil anywhere to meet the needs of all countries. Gerry Reinhart-Waller shovland at mindspring.com wrote: >Subject: We the 80% > >Declare that this is our country as much as it is theirs. > >Declare that it is immoral for a few to have so much when so many have so little. > >Declare that good government is a necessary part of civilization, to serve the common good. > >Declare that we should trade for the resources we need, not fight wars for them. > >Declare that good health care is a good investment, and should be provided for all. > >Declare that we should join the world community rather than trying to dominate it. 
> > >Steve Hovland >San Francisco >September 3, 2005 > > >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > > From checker at panix.com Sun Sep 4 14:50:02 2005 From: checker at panix.com (Premise Checker) Date: Sun, 4 Sep 2005 10:50:02 -0400 (EDT) Subject: [Paleopsych] NYT Mag: Passion and the Prisoner Message-ID: Passion and the Prisoner http://www.nytimes.com/2005/08/28/magazine/28WWLN.html By DAPHNE MERKIN I once wanted to write a novel called ''Bad Taste,'' about a female character who has incomprehensibly rotten instincts when it comes to making romantic choices. While I was toying with this idea, I was also deeply enamored of a man whose upper-middle-class presentation (which included engraved stationery of the thickest stock, on which he penned the most proper of thank-you notes) couldn't fully mask some alarming tendencies, one of which required him to report in monthly to a law-enforcement agent as to his whereabouts. This relationship had followed upon a two-year involvement with another questionable sort who ended up serving a yearlong prison sentence after we stopped going out, convicted of some white-collar malfeasance he may or may not have committed. By that low point in his life he was married to another woman, but all the same, he sent me letters from the clink describing his daily routine (which seemed to consist of working out in the gym and taking computer courses courtesy of the federal government). I always felt a faint outlaw thrill when an envelope bearing a Pennyslvania postmark and an anonymous, ''Stalag 17''-like return address (Bunk #, Unit #, Division #) arrived in the mail, as if I were implicated in the whole sordid drama, a Bonnie Parker by proxy. Although my yen for bad-boy types has always had its limits, at least in real life, I was reminded of these wayward ex-boyfriends by the ''Bonnie and Clyde'' doings that took place outside a Tennessee courthouse on Aug. 9. These starred Jennifer Forsyth Hyatte, a 31-year-old former prison nurse who stands accused of killing a corrections officer in the attempt to spring her 34-year-old convict husband, George Hyatte, who was serving 35 years for aggravated robbery and assault, as he was leaving a hearing in handcuffs and shackles. After his wife, in response to her husband's directive, ''Shoot 'em!'' opened fire on the two officers escorting Hyatte, fatally injuring one, the couple set off on a 300-mile escape route. In a strangely calm conclusion to the manhunt that this murderous outburst set off, the lovebirds were arrested -- ''without incident'' -- a mere day and a half later in a Best Value motel along Interstate 71 in Columbus, Ohio. Jennifer Hyatte, the mother of three from a previous marriage and herself the daughter of a former sheriff's deputy, had no prior criminal record. She was married less than three months to her current husband when she sacrificed her own prospects in a wild plot to free him. Her mother characterized her as a loving wife and mother; her ex-husband seconded these claims. Jennifer had sole custody of her children and had put herself through nursing school. The questions that inevitably arise are How did she go from being a solid citizen to acting like a bandit? And why? Whatever the reasons behind Mrs. 
Hyatte's perplexing behavior -- a rescue fantasy, a need to nurture, the sexual excitement of being with a violent person (also known as hybristophilia), a wish for attention, a sense of low self-esteem, a grandiose us-against-them scheme -- she is far from alone in her seemingly lunatic infatuation with a man behind bars. Indeed, she is part of what has been recognized as a growing phenomemon, one common enough to have spawned Web sites like [3]WriteaPrisoner.com and [4]inmates.com as well as psychological studies with titles like ''Women Who Love Men Who Kill.'' This is the phenomenon of women who are attracted to the scent of demonic males -- fatally dangerous guys like Erik and Lyle Menendez, Robert Chambers and Scott Peterson. (Both Menendez brothers married in prison; Chambers was reportedly so besieged by transfixed females vying to smuggle him contraband that he had to be transferred to another jail; and Peterson has received at least two marriage proposals). The indubitably handsome and unlamented Ted Bundy was perhaps the archetypal demonic male, one who successfully posed as the dreamboat next door time and again, with the charm and verbal facility to knock the socks off any young woman unlucky enough to meet up with him when he was out cruising for prey. But while it would make for a simpler hypothesis if we could attribute the allure of inmates to their brute physical appeal, the truth is that even a one-eyed serial killer like Henry Lee Lucas had women panting after him, while John Wayne Gacy -- no one's idea of attractive and gay to boot (he killed 33 young men during homosexual encounters) -- became involved with a woman in prison. I suppose we who believe in an unconscious life should understand by now that if it's difficult to figure out the rationale for your friends' marriages and love affairs, it's well nigh impossible to figure out why some women fall for miscreants. The apparent emotional illogic of killer cachet may make for a sweet lyric in a Waylon Jennings song -- ''Ladies love outlaws like babies love stray dogs'' -- but it has left cultural observers scrambling for answers. These range from assigning blame to Western culture as a whole for adulating male violence to blaming a particular family background for creating the sort of vulnerable female who is looking to have some power in a world that has granted her none by hooking up with a man who is both dependent on her and has exhibited his dominance over others. It has been more than 25 years since Gary Gilmore was executed after issuing his succinct last words, ''Let's do it.'' I had a crush on him from the moment he appeared on the scene for any number of reasons: his good looks; his soulful letters to his pretty girlfriend, Nicole; the wounded aura of defiance he carried with him. Even after reading everything ever written about him, from Norman Mailer's ''Executioner's Song,'' which glamorized him, to his brother Mikal's ''Shot in the Heart,'' which cut him down to pitiful and thuggish size, I think I'd still pick his photo out of a lineup of eligible men. What's a lady to do? Such is the unreasonable pull of pheromones, such are the crooked ways of love. Daphne Merkin, a critic and novelist, is a frequent contributor to the magazine. References 3. http://WriteaPrisoner.com/ 4. 
http://inmates.com/ From checker at panix.com Sun Sep 4 14:50:07 2005 From: checker at panix.com (Premise Checker) Date: Sun, 4 Sep 2005 10:50:07 -0400 (EDT) Subject: [Paleopsych] UTexas: Species evolve to the brink of evolution Message-ID: Species evolve to the brink of evolution http://www.utexas.edu/opa/news/2005/08/biology26.html August 26, 2005 AUSTIN, Texas--A biologist at The University of Texas at Austin has presented a new theory that sheds light on how organisms, including viruses like HIV, rapidly evolve in the face of vaccines and antibiotics. Dr. Lauren Ancel Meyers says the new model could help identify genes that increase a pathogen's ability to evolve quickly against immune responses. Knowing those genes could help scientists develop new and better vaccines. Meyers' model predicts that populations can evolve "genetic potential"--genes that can create new traits quickly and simply in changing environments. "In fluctuating environments, you may get populations evolving right to the brink of evolution," says Meyers. The organisms are poised to evolve in the face of environmental shifts, because they have genes that can produce a new trait essential to their survival with one or two simple mutations. Meyers' model for rapid evolution appears in the Aug. 26 issue of the journal PLoS Computational Biology. Genetic mutations create the variation that natural selection acts upon. But mutations can be disadvantageous or even deadly, so organisms have evolved so that most simple mutations have little or no biological impact. Mutations are buffered by repair mechanisms and redundancies, like other genes that perform the same function. For organisms constantly facing new challenges in ever-changing environments, however, there's an advantage to creating new traits quickly. Previous explanations of rapid evolution have focused on the rate at which mutations occur in the genome. These theories suggest that populations can evolve new traits faster if they are hypermutable, that is, they have faster rates of mutation. Meyers' idea is significantly different, because it shows populations can adapt quickly without a faster rate of genetic mutation. Instead, the populations evolve genes that can be easily altered to create new traits. "Evolution can accelerate without changing the mutation rate itself--it's the evolution of the ability to evolve--that's the novel insight of this work," says Meyers. Meyers is an assistant professor in the Section of Integrative Biology with a faculty position at the Santa Fe Institute. Co-authors on the paper include Meyers' father, Dr. Fredric Ancel, from the University of Wisconsin-Milwaukee, and Dr. Michael Lachmann, of the Max Planck Institute in Leipzig, Germany. For more information contact: [18]Lee Clippard, College of Natural Sciences, 512-232-0675. Related Sites: * [19]Dr. Lauren Ancel Meyers * [20]Section of Integrative Biology * [21]College of Natural Sciences * [22]Santa Fe Institute Related Stories: * [23]Computer science team designs smart agent, wins international supply chain manufacturing competition - 11 August 2005 * [24]New method for quantum cooling discovered by researchers at The University of Texas at Austin - 3 August 2005 * [25]Chemical engineer and biologist make list of world's top young innovators - 20 September 2004 * [26]Predicting the Path of Infectious Diseases: Mathematical modeling traces the spread of SARS and other illnesses through human contact - 6 October 2003 [27]Office of Public Affairs P.O. 
Box Z Austin, TX 78713 512-471-3151 Fax 512-471-5812 References 15. http://www.utexas.edu/opa/news/2005/08/biology26.html#info 16. http://www.utexas.edu/opa/news.html 17. http://www.utexas.edu/opa/news/archives.html 18. mailto:lclippard at mail.utexas.edu 19. http://www.biosci.utexas.edu/IB/faculty/MEYERS.HTM 20. http://www.biosci.utexas.edu/ib/ 21. http://www.utexas.edu/cons/ 22. http://www.santafe.edu/ 23. http://www.utexas.edu/opa/news/2005/08/comp_sci11.html 24. http://www.utexas.edu/opa/news/2005/08/physics03.html 25. http://www.utexas.edu/opa/news/04newsreleases/nr_200409/nr_eng_nat_sci040920.html 26. http://www.utexas.edu/features/archive/2003/meyers.html 27. http://www.utexas.edu/opa/ 28. mailto:utopa at www.utexas.edu From checker at panix.com Sun Sep 4 14:50:15 2005 From: checker at panix.com (Premise Checker) Date: Sun, 4 Sep 2005 10:50:15 -0400 (EDT) Subject: [Paleopsych] Newsday: Idle brain invites dementia Message-ID: Idle brain invites dementia http://www.newsday.com/mynews/ny-hsalz254396781aug25,0,5041689,print.story Researchers say daydreaming may cause changes that lead to the onset of Alzheimer's disease STAFF WRITER August 25, 2005 Scientists have scanned the brains of young people when they are doing, well, nothing, and they found that a region active during this daydreaming state is the one hard-hit by the scourge of old age: Alzheimer's. "We never expected to see this," said Randy L. Buckner, a Howard Hughes Medical Institute investigator at Washington University in St. Louis. He said he suspects these activity patterns may, over decades of daily use, wear down the brain, sparking a chemical cascade that results in the disease's classic deposits and tangles that damage the brain. The regions identified are active when people daydream or think to themselves, Buckner said. When these regions are damaged, an older person may not be able to access the thoughts to follow through on an action, or even make sense of a string of thoughts. The study appears this week in the Journal of Neuroscience. The scientists used a variety of brain-scanning devices in more than 760 adults of all ages. Usually, scanning is done when volunteers carry out a particular mental task, such as remembering a list of words. This time, they were scanned without anything to do. What emerged on the images was what Buckner and his colleagues call the brain's "default" state. The brain remains in this state when it's not concentrating on a task like reading or talking. It's the place where the mind wanders. This default region lines up perfectly with the regions that are initially damaged in Alzheimer's. "It may be the normal cognitive function of the brain that leads to Alzheimer's later in life," Buckner said. He suspects the brain's metabolic activity slows over time in this region, making it vulnerable to mind-robbing symptoms. The scientists say this finding could prove useful diagnostically - a way to identify the disease early, even before symptoms appear. "You have to get to this pathology before it has its biggest effect," said William Klunk, an associate professor of psychiatry at the University of Pittsburgh and a co-investigator in the current study. Klunk developed an imaging tool that tracks amyloid plaque deposited in the brains of living Alzheimer's patients. The next step will be to see whether the sticky amyloid-filled plaques are dependent on the brain's metabolism. If so, there could be novel ways to attack the disease. 
The latest thinking among Alzheimer's scientists is that the underpinnings of the disease may be decades in the making. About a decade ago, David Snowdon of the University of Kentucky Medical Center published what has become a classic study of health and aging. He followed 678 nuns, ranging in age from 75 to 107, and analyzed journal entries and essays written when they joined the order as young women. He identified an association between the writing and the risk for Alzheimer's far into the future. The richer the detail in the essays, the less likely the writers were to develop Alzheimer's. Others have confirmed these findings, including a study by Case Western Reserve University School of Medicine researchers. They recently published a study using high school records from the 1940s to identify nearly 400 graduates. They tracked their health status through adulthood into old age. A higher IQ in high school reduced the risk of Alzheimer's by about half. From checker at panix.com Sun Sep 4 14:50:25 2005 From: checker at panix.com (Premise Checker) Date: Sun, 4 Sep 2005 10:50:25 -0400 (EDT) Subject: [Paleopsych] NYT: Teaching of Creationism Is Endorsed in New Survey Message-ID: Teaching of Creationism Is Endorsed in New Survey http://www.nytimes.com/2005/08/31/national/31religion.html [There is a similar mass-elite divide over immigration and a much larger on over who shot JFK. Eighty percent of the public doubt the lone-nut hypothesis, while 100% of the media supported at, at least as far as I could tell from watching the coverage for the 40th anniversary of the assassination. My own view is that Kennedy, knowing that he had not long to live and wanted to be remembered, arranged for his own assassination and had lots of contradictory evidence planted in order to keep public fascination with the case going and going and going.] By [3]LAURIE GOODSTEIN In a finding that is likely to intensify the debate over what to teach students about the origins of life, a poll released yesterday found that nearly two-thirds of Americans say that creationism should be taught alongside evolution in public schools. The poll found that 42 percent of respondents held strict creationist views, agreeing that "living things have existed in their present form since the beginning of time." In contrast, 48 percent said they believed that humans had evolved over time. But of those, 18 percent said that evolution was "guided by a supreme being," and 26 percent said that evolution occurred through natural selection. In all, 64 percent said they were open to the idea of teaching creationism in addition to evolution, while 38 percent favored replacing evolution with creationism. The poll was conducted July 7-17 by the Pew Forum on Religion and Public Life and the Pew Research Center for the People and the Press. The questions about evolution were asked of 2,000 people. The margin of error was 2.5 percentage points. John C. Green, a senior fellow at the Pew Forum, said he was surprised to see that teaching both evolution and creationism was favored not only by conservative Christians, but also by majorities of secular respondents, liberal Democrats and those who accept the theory of natural selection. Mr. Green called it a reflection of "American pragmatism." "It's like they're saying, 'Some people see it this way, some see it that way, so just teach it all and let the kids figure it out.' It seems like a nice compromise, but it infuriates both the creationists and the scientists," said Mr. 
Green, who is also a professor at the University of Akron in [4]Ohio. Eugenie C. Scott, the director of the National Center for Science Education and a prominent defender of evolution, said the findings were not surprising because "Americans react very positively to the fairness or equal time kind of argument." "In fact, it's the strongest thing that creationists have got going for them because their science is dismal," Ms. Scott said. "But they do have American culture on their side." This year, the National Center for Science Education has tracked 70 new controversies over evolution in 26 states, some in school districts, others in the state legislatures. President Bush joined the debate on Aug. 2, telling reporters that both evolution and the theory of intelligent design should be taught in schools "so people can understand what the debate is about." Senator Bill Frist of [5]Tennessee, the Republican leader, took the same position a few weeks later. Intelligent design, a descendant of creationism, is the belief that life is so intricate that only a supreme being could have designed it. The poll showed 41 percent of respondents wanted parents to have the primary say over how evolution is taught, compared with 28 percent who said teachers and scientists should decide and 21 percent who said school boards should. Asked whether they believed creationism should be taught instead of evolution, 38 percent were in favor, and 49 percent were opposed. More of those who believe in creationism said they were "very certain" of their views (63 percent), compared with those who believe in evolution (32 percent). The poll also asked about religion and politics, government financing of religious charities, and gay men and lesbians in the military. Most of these questions were asked of a smaller pool of 1,000 respondents, and the margin of error was 2.5 percentage points, Pew researchers said. The public's impression of the Democratic Party has changed in the last year, the survey found. Only 29 percent of respondents said they viewed Democrats as being "friendly toward religion," down from 40 percent in August of 2004. Meanwhile, 55 percent said the Republican Party was friendly toward religion. Luis E. Lugo, the director of the Pew Forum on Religion and Public Life, said: "I think this is a continuation of the Republican Party's very successful use of the values issue in the 2004 election, and the Democrats not being able up until now to answer that successfully. Some of the more visible leaders, such as Howard Dean and others, have reinforced that image of a secular party. Of course, if you look at the Democratic Party, there's a large religious constituency there." Survey respondents agreed in nearly equal numbers that nonreligious liberals had "too much control" over the Democratic Party (44 percent), and that religious conservatives had too much control over the Republican Party (45 percent). On religion-based charities, two-thirds of respondents favored allowing churches and houses of worship to apply for government financing to provide social services. But support for such financing declined from 75 percent in early 2001, when Mr. Bush rolled out his religion-based initiative. On gay men and lesbians in the military, 58 percent of those polled said they should be allowed to serve openly, a modest increase from 1994, when 52 percent agreed. Strong opposition has fallen in that time, to 15 percent from 26 percent in 1994. References 3. 
http://query.nytimes.com/search/query?ppds=bylL&v1=LAURIE%20GOODSTEIN&fdq=19960101&td=sysdate&sort=newest&ac=LAURIE%20GOODSTEIN&inline=nyt-per 4. http://topics.nytimes.com/top/news/national/usstatesterritoriesandpossessions/ohio/index.html?inline=nyt-geo 5. http://topics.nytimes.com/top/news/national/usstatesterritoriesandpossessions/tennessee/index.html?inline=nyt-geo From checker at panix.com Sun Sep 4 14:50:20 2005 From: checker at panix.com (Premise Checker) Date: Sun, 4 Sep 2005 10:50:20 -0400 (EDT) Subject: [Paleopsych] Edge: John Horgan: In Defense of Common Sense Message-ID: John Horgan: In Defense of Common Sense http://www.edge.org/3rd_culture/horgan05/horgan05_index.html All these theories are preposterous, but that's not my problem with them. My problem is that no conceivable experiment can confirm the theories, as most proponents reluctantly acknowledge. The strings (or membranes, or whatever) are too small to be discerned by any buildable instrument, and the parallel universes are too distant. Common sense thus persuades me that these avenues of speculation will turn out to be dead ends. IN DEFENSE OF COMMON SENSE By John Horgan Introduction John Horgan, author of The End of Science, and feisty and provocative as ever, is ready for combat with scientists in the Edge community. "I'd love to get Edgies' reaction to my OpEd piece -- "In Defense of Common Sense" -- in The New York Times", he writes. Physicist Leonard Susskind, writing "In Defense of Uncommon Sense", is the first to take up Horgan's challenge ([10]see below). Susskind notes that in "the utter strangeness of a world that the human intellect was not designed for... physicists have had no choice but to rewire themselves. Where intuition and common sense failed, they had to create new forms of intuition, mainly through the use of abstract mathematics." We've gone "out of the range of experience." Read on. -- [11]JB JOHN HORGAN oversees the science writings program at the Stevens Institute of Technology. His books include The End of Science and Rational Mysticism. [12]John Horgan's Edge bio page [13]THE REALITY CLUB:[14] Verena Huber-Dyson, [15]Robert Provine, [16]Spencer Reiss, [17]Daniel Gilbert, [18]John McCarthy, [19]Leonard Susskind respond to John Horgan. [20]Horgan replies. _________________________________________________________________ IN DEFENSE OF COMMON SENSE As anyone remotely interested in science knows by now, 100 years ago Einstein wrote six papers that laid the groundwork for quantum mechanics and relativity, arguably the two most successful theories in history. To commemorate Einstein's "annus mirabilis," a coalition of physics groups has designated 2005 the World Year of Physics. The coalition's Web site lists more than 400 celebratory events, including conferences, museum exhibits, concerts, Webcasts, plays, poetry readings, a circus, a pie-eating contest and an Einstein look-alike competition. In the midst of all this hoopla, I feel compelled to deplore one aspect of Einstein's legacy: the widespread belief that science and common sense are incompatible. In the pre-Einstein era, T. H. Huxley, aka "Darwin's bulldog," could define science as "nothing but trained and organized common sense." But quantum mechanics and relativity shattered our common-sense notions about how the world works. The theories ask us to believe that an electron can exist in more than one place at the same time, and that space and time -- the I-beams of reality -- are not rigid but rubbery. Impossible! 
And yet these sense-defying propositions have withstood a century's worth of painstaking experimental tests. As a result, many scientists came to see common sense as an impediment to progress not only in physics but also in other fields. "What, after all, have we to show for ... common sense," the behaviorist B. F. Skinner asked, "or the insights gained through personal experience?" Elevating this outlook to the status of dogma, the British biologist Lewis Wolpert declared in his influential 1992 book "The Unnatural Nature of Science," "I would almost contend that if something fits in with common sense it almost certainly isn't science." Dr. Wolpert's view is widely shared. When I invoke common sense to defend or -- more often -- criticize a theory, scientists invariably roll their eyes. Scientists' contempt for common sense has two unfortunate implications. One is that preposterousness, far from being a problem for a theory, is a measure of its profundity; hence the appeal, perhaps, of dubious propositions like multiple-personality disorders and multiple-universe theories. The other, even more insidious implication is that only scientists are really qualified to judge the work of other scientists. Needless to say, I reject that position, and not only because I'm a science journalist (who majored in English). I have also found common sense -- ordinary, nonspecialized knowledge and judgment -- to be indispensable for judging scientists' pronouncements, even, or especially, in the most esoteric fields. For example, Einstein's intellectual heirs have long been obsessed with finding a single "unified" theory that can embrace quantum mechanics, which accounts for electromagnetism and the nuclear forces, and general relativity, which describes gravity. The two theories employ very different mathematical languages and describe very different worlds, one lumpy and random and the other seamless and deterministic. The leading candidate for a unified theory holds that reality stems from tiny strings, or loops, or membranes, or something wriggling in a hyperspace consisting of 10, or 16 or 1,000 dimensions (the number depends on the variant of the theory, or the day of the week, or the theorist's ZIP code). A related set of "quantum gravity" theories postulates the existence of parallel universes -- some perhaps mutant versions of our own, like "Bizarro world" in the old Superman comics -- existing beyond the borders of our little cosmos. "Infinite Earths in Parallel Universes Really Exist," the normally sober Scientific American once hyperventilated on its cover. All these theories are preposterous, but that's not my problem with them. My problem is that no conceivable experiment can confirm the theories, as most proponents reluctantly acknowledge. The strings (or membranes, or whatever) are too small to be discerned by any buildable instrument, and the parallel universes are too distant. Common sense thus persuades me that these avenues of speculation will turn out to be dead ends. Common sense -- and a little historical perspective -- makes me equally skeptical of grand unified theories of the human mind. After a half-century of observing myself and my fellow humans -- not to mention watching lots of TV and movies -- I've concluded that as individuals we're pretty complex, variable, unpredictable creatures, whose personalities can be affected by a vast range of factors. I'm thus leery of hypotheses that trace some important aspect of our behavior to a single cause. 
Two examples: The psychologist Frank Sulloway has claimed that birth order has a profound, permanent impact on personality; first-borns tend to be conformists, whereas later-borns are "rebels." And just last year, the geneticist Dean Hamer argued that human spirituality -- surely one of the most complicated manifestations of our complicated selves -- stems from a specific snippet of DNA. Although common sense biases me against these theories, I am still open to being persuaded on empirical grounds. But the evidence for both Dr. Sulloway's birth-order theory and Dr. Hamer's "God gene" is flimsy. Over the past century, moreover, mind-science has been as faddish as teenage tastes in music, as one theory has yielded to another. Everything we think and do, scientists have assured us, can be explained by the Oedipal complex, or conditioned reflexes, or evolutionary adaptations, or a gene in the X chromosome, or serotonin deficits in the amygdala. Given this rapid turnover in paradigms, it's only sensible to doubt them all until the evidence for one becomes overwhelming. Ironically, while many scientists disparage common sense, artificial-intelligence researchers have discovered just how subtle and powerful an attribute it is. Over the past few decades, researchers have programmed computers to perform certain well-defined tasks extremely well; computers can play championship chess, calculate a collision between two galaxies and juggle a million airline reservations. But computers fail miserably at simulating the ordinary, experience-based intelligence that helps ordinary humans get through ordinary days. In other words, computers lack common sense, and that's why even the smartest ones are so dumb. Yes, common sense alone can lead us astray, and some of science's most profound insights into nature violate it; ultimately, scientific truth must be established on empirical grounds. Einstein himself once denigrated common sense as "the collection of prejudices acquired by age 18," but he retained a few basic prejudices of his own about how reality works. His remark that "God does not play dice with the universe" reflected his stubborn insistence that specific causes yield specific effects; he could never fully accept the bizarre implication of quantum mechanics that at small scales reality dissolves into a cloud of probabilities. So far, Einstein seems to be wrong about God's aversion to games of chance, but he was right not to abandon his common-sense intuitions about reality. In those many instances when the evidence is tentative, we should not be embarrassed to call on common sense for guidance. [Editor's Note:[21] First published as an Op-Ed Page article in The New York Times on August 12th] [22]LEONARD SUSSKIND Felix Bloch Professor of Theoretical Physics, Stanford University IN DEFENSE OF UNCOMMON SENSE Leonard Susskind Responds to John Horgan [susskind100.jpg] John Horgan, the man who famously declared The End of Science shortly before the two greatest cosmological discoveries since the Big Bang, has now come forth to tell us that the world's leading physicists and cognitive scientists are wasting their time. Why? Because they are substituting difficult-to-understand and often shockingly unintuitive concepts for "everyman" common sense. Whose common sense? John Horgan's (admittedly a non-scientist) I presume. The complaint that science -- particularly physics -- has lost contact with common sense is hardly new. 
It was used against Einstein, Bohr, and Heisenberg, and even today is being used against Darwin by the right-wing agents of "intelligent design." Every week I get several angry email messages containing "common sense" (no math) theories of everything from elementary particles to the rings of Saturn. The theories have names like "Rational Theory of the Phenomenons." Modern science is difficult and often counterintuitive. Instead of bombastically ranting against this fact, Horgan should try to understand why it is so. The reasons have nothing to do with the perversity of string theorists, but rather, they have to do with the utter strangeness of a world that the human intellect was not designed for. Let me explain. Up until the beginning of the 20th century, physics dealt with phenomena that took place on a human scale. The typical objects that humans could observe varied in size from a bacterium to something smaller than a galaxy. Similarly, no human had ever traveled faster than a hundred miles an hour, or experienced a gravitational field that accelerates objects more powerfully than the Earth's acceleration, a modest thirty-two feet per second per second. Forces smaller than a thousandth of a pound, or bigger than a thousand pounds, were also out of the range of experience. Evolution wired us with both hardware and software that would allow us to easily "grok" concepts like force, acceleration, and temperature, but only over the limited range that applies to our daily lives -- concepts that are needed for our physical survival. But it simply did not provide us with wiring to intuit the quantum behavior of an electron, or velocities near the speed of light, or the powerful gravitational fields of black holes, or a universe that closes back on itself like the surface of the Earth. A classic example of the limitations of our neural wiring is the inability to picture more than three dimensions. Why, after all, would nature provide us with the capacity to visualize things that no living creature had ever experienced? Physicists have had no choice but to rewire themselves. Where intuition and common sense failed, they had to create new forms of intuition, mainly through the use of abstract mathematics: Einstein's four-dimensional elastic space-time; the infinite-dimensional Hilbert space of quantum mechanics; the difficult mathematics of string theory; and, if necessary, multiple universes. When common sense fails, uncommon sense must be created. Of course we must use uncommon sense sensibly, but we hardly need Horgan to tell us that. In trying to understand the universe at both its smallest and biggest scales, physics and cosmology have embarked on a new age of exploration. In a sense we are attempting to cross a larger uncharted sea than ever before. Indeed, as Horgan tells us, it's a dangerous sea where one can easily lose one's way and go right off the deep end. But great scientists are, by nature, explorers. To tell them to stay within the boundaries of common sense may be like telling Columbus that if he goes more than fifty miles from shore he'll get hopelessly lost. Besides, good old common sense tells us that the Earth is flat. Horgan also complains about the lack of common sense in cognitive science, i.e., the science of the mind. But the more psychologists and neuroscientists learn about the workings of the mind, the more it becomes absolutely clear that human cognition does not operate according to principles of common sense. 
That a man can mistake his wife for a hat is-well-common nonsense. But it happens. Cognitive scientists are also undergoing a rewiring process. Finally I must take exception to Horgan's claim that "no conceivable experiment can confirm the theories [string theory and cosmological eternal inflation] as most proponents reluctantly acknowledge." Here I speak from first hand knowledge. Many, if not all, of the most distinguished theoretical physicists in the world -- Steven Weinberg, Edward Witten, John Schwarz, Joseph Polchinski, Nathan Seiberg, Juan Maldacena, David Gross, Savas Dimopoulos, Andrei Linde, Renata Kallosh, among many others, most certainly acknowledge no such thing. These physicists are full of ideas about how to test modern concepts -- from superstrings in the sky to supersymmetry in the lab. Instead of dyspeptically railing against what he plainly does not understand, Horgan would do better to take a few courses in algebra, calculus, quantum mechanics, and string theory. He might then appreciate, even celebrate, the wonderful and amazing capacity of the human mind to find uncommon ways to comprehend the incomprehensible. _________________________________________________________________ [23]JOHN McCARTHY Computer Scientist; Artificial Intelligence Pioneer, Stanford University [mccarthy100.jpg] John Horgan pontificates: "But computers fail miserably at simulating the ordinary, experience-based intelligence that helps ordinary humans get through ordinary days. In other words, computers lack common sense, and that's why even the smartest ones are so dumb." Horgan regards a lack of common sense as an intrinsic characteristic of computers; I assume he means computer programs. However, much artificial intelligence research has focussed on analyzing commonsense knowledge and reasoning. I refer to my 1959 article "Programs with common sense", my 1990 collection of articles "Formalizing common sense", Erik Mueller's forthcoming book "Commonsense reasoning", and the biennial international conferences on common sense. I fear John Horgan would find this work as distressingly technical as he finds physics. Common sense has proved a difficult scientific topic, and programs with human-level common sense have not yet been achieved. It may be another 100 years. The AI research has identified components of commonsense knowledge and reasoning, has formalized some of them in languages of mathematical logic, and has built some of them into computer programs. Besides the logic based approach, there have been recent attempts to understand common sense as an aspect of the human nervous system. Research on formalizing common sense physics, e.g. that objects fall when pushed off a table, are not in competition with physics as studied by physicists. Rather physics is imbedded in common sense. Thus applying Newton's F = ma requires commonsense reasoning. Physics texts and articles do not consist solely of equations but contain common sense explanations. When Horgan says that string theory is untestable, he is ignoring even the popular science writing about string theory. This literature tells us that the current untestability of string theory is regarded by the string theorists as a blemish they hope to fix. _________________________________________________________________ [24]DANIEL GILBERT Psychologist, Harvard University [gilbert100.jpg] Horgan's Op-Ed piece is such a silly trifle that it doesn't dignify serious response. 
The beauty of science is that it allows us to transcend our intuitions about the world, and it provides us with methods by which we can determine which of our intuitions are right and which are not. Common sense tells us that the earth is flat, that the sun moves around it, and that the people who know the least often speak the loudest. Horgan's essay demonstrates that at least one of our common sense notions is true. _________________________________________________________________ [25]SPENCER REISS Contributing Editor, Wired Magazine Surely Susskind is joking: "Why, after all, would nature provide us with the capacity to visualize things that no living creature had ever experienced?" Art? Music? Heaven? God? The Red Sox win the World Series? Science fiction, for chrissake! Buy the man a drink! This is the kind of stuff that gives scientists a bad name. _________________________________________________________________ [26]ROBERT R. PROVINE Psychologist and Neuroscientist, University of Maryland; Author, Laughter Hunter-Gatherers Make Poor Physicists and Cognitive Neuroscientists: Horgan 0, Susskind 1 Horgan continues to expand his franchise that is based on the technique of assertively posing provocative and often reasonable propositions. The boldness of his assertions earns him an audience that he would not otherwise achieve. But as in The End of Science, he picks a fight that he is not prepared to win and never delivers a telling blow. Susskind effectively exploits a basic weakness in Horgan's thesis, the fallibility of common sense, especially in a scientific context. Researchers working at the frontiers of many sciences use mathematical and theoretical prostheses to expand the range of phenomena that can be studied, escaping some of the limits of their evolutionary history and its neurological endowment. The startling truth is that we live in a neurologically-generated, virtual cosmos that we are programmed to accept as the real thing. The challenge of science is to overcome the constraints of our neurological wetware and understand a physical world that we know only second-hand and incompletely. In fact, we must make an intuitive leap to accept the fact that there is a problem at all. Common sense and the brain that produces it evolved in the service of our hunter-gatherer ancestors, not physicists and cognitive neuroscientists. Unassisted, the brain of Horgan or any other member of our species is not up to the task of engaging certain scientific problems. Sensory science provides the most obvious discrepancies between the physical world and our neurological model of it. We humans evolved the capacity to detect a subset of stimuli available to us on the surface of planet Earth. Different animals with different histories differ in their absolute sensitivity to a given stimulus and in the bandwidth to which they are sensitive. And some species have modes of sensation that we lack, such as electric or magnetic fields. Each species is a theory of the environment in which it evolved, and it can never completely escape the limitations of its unique evolutionary history. But the problem of sensing the physical cosmos is even more complicated, because we do not directly sense physical stimuli, but are aware of only their neurological correlates. There is not, for example, any "blue" in electromagnetic radiation, pitch of B-flat in pressure changes in the air, or sweetness in sucrose. All are neurological derivatives of the physical world, not the thing itself. 
Neurological limits on thinking are probably as common as those on sensing, but they are more illusive -- it's harder to think about what we can't think about than what we can't sense. A good example from physics is our difficulty in understanding the space-time continuum -- our intellect fails us when we move beyond the dimensions of height, width, and depth. Other evidence of our neurological reality-generator is revealed by its malfunction in illusions, hallucinations, and dreams, or in brain damage, where the illusion of reality does not simply degrade, but often splinters and fragments in unanticipated ways. The intellectual prostheses of mathematics, computers, and instrumentation loosen but do not free our species of the constraints of its neurological heritage. We do not build random devices to detect stimuli that we cannot conceive, but build outward from a base of knowledge. A neglected triumph of science is how far we have come with so flawed an instrument as the human brain and its sensoria. Another is in realizing the limits of common sense and its knowledge base of folk wisdom. _________________________________________________________________ [27]VERENA HUBER-DYSON Logician; Emeritus Professor, University of Calgary [huber-dyson100.jpg] IN PRAISE OF EVOLVING COMMON SENSE It seems to me that John Horgan in his Edge piece "In Defense of Common Sense" is confusing "common sense" with "prejudice". The human capacity for common sense reasoning is undergoing an evolutionary process as science and technology are progressing. Just look back over the last two millennia for spectacular illustrations of this pretty obvious observation. Presumably Mr. Horgan watches TV, uses his personal computer and takes airplanes to get places he cannot reach on foot nor by his questionably commonsensical motor car. If he does not know how to fix whatever trouble his car may come up with -- like some people do -- he really should not drive it. To some of my colleagues the telescope serves as the extension of their vision to others the cloud chamber extends the reach of their cognition, just the way his car serves Mr Horgan to get around. In the cloud chamber we witness effects of events too small to see directly. Oh there are so many wonderful illustrations of this evolution of the human cognitive faculties. Ideas, models, conjectures acquiring reality by circumstantial evidence and repeated reasoning become part of our life; as they get entrenched our common sense expands through familiarity. Sometime our notions have to be adjusted, or some, like the idea of the ether, become obsolete. That too is progress. Common sense that refuses to evolve becomes prejudice, or bigotry to use a more bold expression. I have seen quite a bit of scientific evolution in my time. In my childhood the planetary model of the atom was the way we were thinking of matter; now it has become a metaphor or a handy tool, useful under certain conditions. The same is about to happen with strings. We have learned to think more abstractly, we do not really need to think of strings as wiggly worms much too small to see. We have become quite adept at mathematical modeling. I'd love to be around to see the evolution of cognition happening ever so much faster. Even the men in the street are keeping pace. Let us not encourage spoil-sports like Mr Horgan. 
_________________________________________________________________ [28]JOHN HORGAN My modest defense of common sense as a guide for judging theories -- particularly when empirical evidence is flimsy -- has provoked a predictable shriek of outrage from Lenny Susskind. His attempt to lump me together with advocates of intelligent design is more than a little ironic, since in rebuking me he displays the self-righteous arrogance of a religious zealot damning an infidel. Moreover, as a proponent (!!) recently acknowledged in the New York Times, string theory and its offshoots are so devoid of evidence that they represent "a faith-based initiative." Susskind urges me to "take courses in algebra, calculus, quantum mechanics, and string theory" before I mouth off further about strings. In other words, I must become a string theorist to voice an opinion about it. This assertion recalls the insistence of Freudians -- another group notoriously hostile to outside criticism and complaints about testability -- that only those fully indoctrinated into their mind-cult can judge it. Susskind's protestations to the contrary, string theory can be neither falsified nor verified by any empirical test. At best, experiments can provide only necessary but insufficient evidence for components -- such as supersymmetry -- of certain variants of string theory. That is why in 2002 I bet the physicist Michio Kaku $1000 that by 2020 no one will be awarded a Nobel prize for work on string theory or a similar quantum-gravity theory. (I discuss the bet with Kaku, Lee Smolin, Gordon Kane, and other physicists at [29]"Long Bet"). Would Susskind care to make a side bet? As to the other respondents: John McCarthy merely confirms my assertion that computer programmers have failed to simulate common sense -- except that McCarthy expends many more words to make his point than I do. And like Lenny Susskind, Robert Provine and Verena Huber-Dyson merely point out that many scientific theories violate popular, common-sense intuitions about nature and yet prove to be empirically correct. No kidding. I said just that in my essay. The question that I raised -- and that all these respondents have studiously avoided -- is what we should do when presented with theories such as psychoanalysis or string theory, which are not only counterintuitive but also lacking in evidence. Common sense tells me that in these cases common sense can come in handy. References 12. http://www.edge.org/3rd_culture/bios/horgan.html 21. http://www.nytimes.com/2005/08/12/opinion/12horgan.html 22. http://www.edge.org/3rd_culture/bios/susskind.html 23. http://www.edge.org/3rd_culture/bios/mccarthy.html 24. http://www.edge.org/3rd_culture/bios/gilbert.html 25. http://www.edge.org/3rd_culture/bios/reiss.html 26. http://www.edge.org/3rd_culture/bios/provine.html 27. http://www.edge.org/3rd_culture/bios/huber-dyson.html 28. http://www.edge.org/3rd_culture/bios/horgan.html 29. http://www.longbets.org/12 From shovland at mindspring.com Sun Sep 4 16:29:49 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Sun, 4 Sep 2005 09:29:49 -0700 (GMT-07:00) Subject: [Paleopsych] Diode Array, Claim of clean energy Message-ID: <1639258.1125851389401.JavaMail.root@mswamui-blood.atl.sa.earthlink.net> " I have invented, patented [1], and tested [2] a chip containing very many very small diodes that absorbs uniform ambient heat and releases D.C. electrical power.
This is a superior energy source that is very inexpensive and will power small appliances out of the box without a need for power wiring anywhere in the world. This has great potential to improve the prosperity of mankind. The chip will quickly become an open source commodity. Many applications should also be open. It is also a hard science tool for science fiction. Michael Huff [3] at the Stanford MEM network, a network of nanotechnology developers, has given me a quote that $50,000 would pay for developing this chip. He could receive grant funding directly to improve the accountability, as I am an unaffiliated inventor who cannot personally produce the chip. [1] U.S. Patent 3,890,161, DIODE ARRAY. As a 1975 patent, it may be available in image form only, not yet in electronically searchable text form. The original materials specified in the patent have been superseded by C60 carbon buckyballs as anodes on an N-type InSb (semiconductor) substrate. [2] In 1993 I commissioned the preparation and testing of an adapted satellite transponder chip containing 5,600 Au on GaAs diodes fabricated in a patch, as an expedient for assemblers to find one diode, since diodes operating at high frequencies have to be very small. Conductive paste was applied over the face of the chip to connect all the diodes in parallel, in consistent alignment, as required. Next the chip was immersed in a constant-temperature pure vegetable oil bath inside a shielding box in the California desert. The chip produced ~25 kTB watts, where an output of more than 1/2 kTB watts validates the theory that electrical thermal noise (Johnson Noise) can be rectified and aggregated. If replication of this test is desired, the chips may still be available as drawn-down obsolete stock from Virginia Diodes Inc. www.virginiadiodes.com. I have lost contact with the lab that adapted and tested the chip. The C60 / N-type InSb version of the chip should perform much better: ~100 watts/cm^2 @ 20 degrees C @ 50% diode efficiency @ 10^11 buckyballs/cm^2 is estimated. [3] mhuff at mems-exchange.org http://www.memsnet.org/links/foundries Aloha, Charles M. Brown (808) 828-0297 4264 Ala Muku Pl. #C-3 Kilauea, Kauai, Hawaii 96754 abundance at logonhi.net www.diodearray.com http://peswiki.com/index.php/OS:CBC:Main_Page " From shovland at mindspring.com Sun Sep 4 16:38:00 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Sun, 4 Sep 2005 09:38:00 -0700 (GMT-07:00) Subject: [Paleopsych] More on Nano-Diode Arrays for electricity Message-ID: <17538618.1125851881104.JavaMail.root@mswamui-blood.atl.sa.earthlink.net> Charles M. Brown has invented, patented, and demonstrated at low power a chip which absorbs ambient heat and produces electrical power. There is plenty of ambient heat everywhere on Earth. He expects mature versions of the chip to yield thousands of watts per sq. inch inside a heat transfer system using circulating liquid. The device consists of billions of nanometer-scale diodes connected in parallel, in uniform alignment, in small arrays which are in turn connected in series because the operating voltage is low. Heat induces a small random movement of mobile electrical charges inside materials. This effect is known as Johnson noise, after a telephone company engineer who studied the sources of static in radio telescopes. Diodes are two-terminal devices which conduct electricity well in one direction and block it from flowing the other way. Random electrical movements emerge from a diode as a fluctuating forward current half the time.
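(The mechanism description resumes just below. As a quick aside, the arithmetic behind the figures quoted in these two messages -- the 1/2 kTB validation threshold, the ~1 nanowatt per diode, and the ~100 watts/cm^2 estimate -- can be reproduced with the following minimal Python sketch. Every input is a value assumed or claimed in the messages themselves, not a measurement, and reproducing the multiplication says nothing about whether such a device is physically realizable.)

# Back-of-the-envelope check of the figures quoted in the diode-array messages.
# All inputs are the messages' own assumed values, not measurements.
k_B = 1.380649e-23        # Boltzmann's constant, J/K
T = 290.0                 # ~17 degrees C / 65 degrees F, in kelvin (assumed operating temperature)
B = 1.0e12                # claimed ~1 THz thermal bandwidth, Hz
rectification = 0.5       # claimed rectification factor
efficiency = 0.5          # claimed diode efficiency
diodes_per_cm2 = 1.0e11   # claimed buckyball-anode density per cm^2

kTB = k_B * T * B                             # available Johnson-noise power per diode, W
per_diode = kTB * rectification * efficiency  # claimed net electrical yield per diode, W
per_cm2 = per_diode * diodes_per_cm2          # claimed areal power density, W/cm^2

print(f"kTB per diode ~ {kTB:.1e} W (so 1/2 kTB ~ {0.5 * kTB:.1e} W)")
print(f"net per diode ~ {per_diode:.1e} W (~1 nanowatt, as claimed)")
print(f"net per cm^2  ~ {per_cm2:.0f} W (~100 W/cm^2, as claimed)")

(Run as-is, this prints roughly 4e-09 W of kTB per diode, 1e-09 W net, and ~100 W per cm^2, matching the quoted numbers; it only checks the arithmetic, not the thermodynamics.)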
Once the current emerges from its diode of origin, it is blocked from retreating through that diode and all parallel adjacent diodes, and continues to the electrical load of the circuit. As it does this, it imposes a counter voltage, which is the array operating voltage. This counter voltage deenergizes the random electrical movements which still flow in this direction, which is easier than the blocked direction. This deenergization is, simultaneously and in strictly equal measure, local refrigeration and electric circuit power. The fluctuating outputs of all diodes are aggregated without network loss to become useful amounts of coequal electrical power and refrigeration. Each diode is expected to yield a nanowatt @ 65 degrees F / 17 degrees C, a full thermal bandwidth of ~1 THz, a rectification factor of 1/2, and a device efficiency of 50%. The yield is derived by multiplying the above factors by Boltzmann's constant, ~1.38 x 10^-23. Uniformly spaced carbon nanotube anodes define each diode. The rest of the diode is a shared cathode layer of N-conductive, Si-doped InSb abutting the anodes on one end and a shared ohmic contact metal layer on the other side. The ability to fabricate this device is emerging gradually. Most engineers and physicists are not yet aware that this type of device can exist. In 1993, a feasibility investigation prototype yielded ~50 nanowatts, which is above the scientific criterion of ~2 nanowatts. Unfortunately, Mr. Brown has lost contact with that test lab. One protocol is for other researchers to independently investigate Diode Arrays and report their results. Another possibility is for a development team to fabricate prototypes that will lead to commercial products. The promise is that the Diode Array will produce abundant, autonomous, cheap, clean, compact, reliable, safe, and quiet energy. Furthermore, refrigeration will become an electrical power asset instead of an expense. From waluk at earthlink.net Mon Sep 5 02:43:50 2005 From: waluk at earthlink.net (Gerry Reinhart-Waller) Date: Sun, 04 Sep 2005 19:43:50 -0700 Subject: [Paleopsych] We the 80% In-Reply-To: <21965696.1125846726299.JavaMail.root@mswamui-valley.atl.sa.earthlink.net> References: <21965696.1125846726299.JavaMail.root@mswamui-valley.atl.sa.earthlink.net> Message-ID: <431BB0E6.3040102@earthlink.net> Spot-on. Bush, in several addresses he made to the nation, has stated in definitive terms that the United States will aid the refugees from Hurricane Katrina AND maintain our presence in Iraq. When one splits manpower and resources, what gets accomplished is less than 60% of the job. Gerry Reinhart-Waller shovland at mindspring.com wrote: >True. But if you are going to fight wars for >resources you have to have the financial >and physical resources to do it, and I'm >not sure we are in that position any more. > >We have a huge challenge ahead of us, >but there are a lot of possibilities if we >only choose to pursue them. > > > >-----Original Message----- >From: Gerry Reinhart-Waller >Sent: Sep 3, 2005 10:51 PM >To: shovland at mindspring.com, > The new improved paleopsych list >Subject: Re: [Paleopsych] We the 80% > >Trading for resources is a noble idea except when all parties are after >the same resource, i.e. oil. I'm clueless how oil can equably be >bartered. There is not enough oil anywhere to meet the needs of all global >countries. > >Gerry Reinhart-Waller > >shovland at mindspring.com wrote: > > > >>Subject: We the 80% >> >>Declare that this is our country as much as it is theirs.
>> >>Declare that it is immoral for a few to have so much when so many have so little. >> >>Declare that good government is a necessary part of civilization, to serve the common good. >> >>Declare that we should trade for the resources we need, not fight wars for them. >> >>Declare that good health care is a good investment, and should be provided for all. >> >>Declare that we should join the world community rather than trying to dominate it. >> >> >>Steve Hovland >>San Francisco >>September 3, 2005 >> >> >>_______________________________________________ >>paleopsych mailing list >>paleopsych at paleopsych.org >>http://lists.paleopsych.org/mailman/listinfo/paleopsych >> >> >> >> >> > > > > > From shovland at mindspring.com Sun Sep 4 20:11:34 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Sun, 4 Sep 2005 13:11:34 -0700 (GMT-07:00) Subject: [Paleopsych] Articles on Peasant Revolts in Wikipedia Message-ID: <12621952.1125864695108.JavaMail.root@mswamui-chipeau.atl.sa.earthlink.net> Unfortunately, there is nothing about the peasant revolts in America circa 2006... http://en.wikipedia.org/wiki/Category:Peasant_revolts From shovland at mindspring.com Sun Sep 4 22:16:45 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Sun, 4 Sep 2005 18:16:45 -0400 (EDT) Subject: [Paleopsych] The Unarmed American Message-ID: <10450424.1125872205572.JavaMail.root@mswamui-chipeau.atl.sa.earthlink.net> A lesson learned from New Orleans. -------------- next part -------------- A non-text attachment was scrubbed... Name: TheUnarmedAmerican.jpg Type: image/pjpeg Size: 153725 bytes Desc: not available URL: From checker at panix.com Mon Sep 5 20:57:02 2005 From: checker at panix.com (Premise Checker) Date: Mon, 5 Sep 2005 16:57:02 -0400 (EDT) Subject: [Paleopsych] SW: On Anthropic Reasoning Message-ID: Cosmology: On Anthropic Reasoning http://scienceweek.com/2005/sw050909-1.htm The following points are made by M. Livio and M.J. Rees (Science 2005 309:1022): 1) Does extraterrestrial intelligent life exist? The fact that we can even ask this question relies on an important truth: The properties of our Universe have allowed complexity (of the type that characterizes humans) to emerge. Obviously, the biological details of humans and their emergence depend on contingent features of Earth and its history. However, some requirements would seem generic for any form of life: galaxies, stars, and (probably) planets had to form; nucleosynthesis in stars had to give rise to atoms such as carbon, oxygen, and iron; and these atoms had to be in a stable environment where they could combine to form the molecules of life. 2) We can imagine universes where the constants of physics and cosmology have different values. Many such "counterfactual" universes would not have allowed the chain of processes that could have led to any kind of advanced life. For instance, even a universe with the same physical laws and the same values of all physical constants but one -- a cosmological constant Lambda (the "pressure" of the physical vacuum) higher by more than an order of magnitude -- would have expanded so fast that no galaxies could have formed. 
Other properties that appear to have been crucial for the emergence of complexity are (i) the presence of baryons (particles such as protons and neutrons); (ii) the fact that the Universe is not infinitely smooth, allowing for the formation of structure (quantified as the amplitude of the fluctuations in the cosmic microwave background, Q); and (iii) a gravitational force that is weaker by a factor of nearly 10^(40) than the microphysical forces that act within atoms and molecules -- were gravity not so weak, there would not be such a large difference between the atomic and the cosmic scales of mass, length, and time. 3) A key challenge confronting 21st-century physics is to decide which of these dimensionless parameters such as Q and Lambda are truly fundamental -- in the sense of being explicable within the framework of an ultimate, unified theory -- and which are merely accidental. The possibility that some are accidental has certainly become viable in the context of the "eternal inflation" scenario [1-3], where there are an infinity of separate "big bangs" within an exponentially expanding substratum. Some versions of string theory allow a huge variety of vacua, each characterized by different values of (or even different dimensionality) [4]. Both these concepts entail the existence of a vast ensemble of pocket universes -- a "multiverse." If some physical constants are not fundamental, then they may take different values in different members of the ensemble. Consequently, some pocket universes may not allow complexity or intelligent life to evolve within them. Humans would clearly have to find themselves in a pocket universe that is "biophilic." Some otherwise puzzling features of our Universe may then simply be the result of the epoch in which we exist and can observe. In other words, the values of the accidental constants would have to be within the ranges that would have allowed intelligent life to develop. The process of delineating and investigating the consequences of these biophilic domains is what has become known as "anthropic reasoning".[5] References (abridged): 1. P. J. Steinhardt, in The Very Early Universe, G. W. Gibbons, S. Hawking, S. T. C. Siklos, Eds. (Cambridge Univ. Press, Cambridge, 1983), p. 251 2. A. Vilenkin, Phys. Rev. D 27, 2848 (1983) 3. A. D. Linde, Mod. Phys. Lett. A 1, 81 (1986) 4. S. Kachru, R. Kallosh, A. Linde, S. P. Trivedi, Phys. Rev. D 68, 046005 (2003) 5. A. G. Riess et al., Astron. J. 116, 1009 (1998) Science http://www.sciencemag.org -------------------------------- Related Material: COSMOLOGY: ON THE ANTHROPIC PRINCIPLE The following points are made by Lawrence M. Krauss (Nature 2003 423:230): 1) The recognition, in the light of observational data, that Einstein's infamous cosmological constant might not be zero has changed almost everything about the way we think about the Universe, from reconsidering its origin to re-evaluating its ultimate future. But perhaps the most significant change in cosmological thinking involves a new willingness to discuss what used to be an idea that was not normally mentioned in polite company: the "anthropic principle". 2) This idea suggests that the precise values of various fundamental parameters describing our Universe might be understood only as a consequence of the fact that we exist to measure them. To paraphrase the cosmologist Andrei Linde, "If the Universe were populated everywhere by intelligent fish, they might wonder why it was full of water. 
Well, if it weren't, they wouldn't be around to observe it!". 3) The reason that physicists have been so reluctant to consider the anthropic principle seriously is that it goes against the grain of current attitudes. Most physicists have hoped that an ultimate physical explanation of reality would explain why the Universe must look precisely the way it does, rather than why it more often than not would not. Into the fray has entered James Bjorken. In a paper (Phys. Rev. D 2003 67:043508) entitled "Cosmology and the Standard Model", Bjorken proposes a new "scaling" approach, based on well-established notions in particle theory, for exploring how anthropically viable a small cosmological constant might be. 4) The realization that an extremely small, but non-zero, cosmological constant might exist has changed the interest of physicists in anthropic explanations of nature precisely because the value it seems to take is otherwise so inexplicable. In 1996, physicist Steven Weinberg and his colleagues Hugo Martel and Paul Shapiro argued that if the laws of physics allow different universes to exist with a cosmological constant chosen from an underlying probability distribution, then galaxies, stars and presumably astronomers might not ultimately evolve unless the cosmological constant were not much larger than the one we apparently observe today. Nature http://www.nature.com/nature -------------------------------- Notes by ScienceWeek: The "cosmological constant" is a mathematical term introduced by Einstein into the equations of general relativity, the purpose to obtain a solution of the equations corresponding to a "static universe". The term describes a pressure (if positive) or a tension (if negative) which can cause the Universe to expand or contract even in the absence of any matter ("vacuum energy"). When the expansion of the Universe was discovered, Einstein apparently began to regard the introduction of this term as a mistake, and he described the cosmological constant as the "greatest mistake of my life". But the term has reappeared as the proposed source of apparent accelerated cosmic expansion. -------------------------------- Related Material: ON QUINTESSENCE AND THE EVOLUTION OF THE COSMOLOGICAL CONSTANT The following points are made by P.J.E. Peebles (Nature 1999 398:25): 1) Contrary to expectations, the evidence is that the Universe is expanding at approximately twice the velocity required to overcome the gravitational pull of all the matter the Universe contains. The implication of this is that in the past the greater density of mass in the Universe gravitationally slowed the expansion, while in the future the expansion rate will be close to constant or perhaps increasing under the influence of a new type of matter that some call "quintessence". 2) Quintessence began as Einstein's cosmological constant, Lambda. It has negative gravitational mass: its gravity pushes things apart. 3) Particle physicists later adopted Einstein's Lambda as a good model for the gravitational effect of the active vacuum of quantum physics, although the idea is at odds with the small value of Lambda indicated by cosmology. 4) Theoretical cosmologists have noted that as the Universe expands and cools, Lambda tends to decrease. As the Universe cools, symmetries among forces are broken, particles acquire masses, and these processes tend to release an analogue of latent heat. The vacuum energy density accordingly decreases, and with it the value of Lambda. 
Perhaps an enormous Lambda drove an early rapid expansion that smoothed the primeval chaos to make the near uniform Universe we see today, with a decrease in Lambda over time to its current value. This is the cosmological inflation concept. 5) The author suggests that the recent great advances in detectors, telescopes, and observatories on the ground and in space have given us a rough picture of what happened as our Universe evolved from a dense, hot, and perhaps quite simple early state to its present complexity. Observations in progress are filling in the details, and that in turn is driving intense debate on how the behavior of our Universe can be understood within fundamental physics. Nature http://www.nature.com/nature -------------------------------- Notes by ScienceWeek: Active vacuum of quantum physics: This refers to the idea that the vacuum state in quantum mechanics has a zero-point energy (minimum energy) which gives rise to vacuum fluctuations, so the vacuum state does not mean a state of nothing, but is instead an active state. If a theory or process does not change when certain operations are performed on it, the theory or process is said to possess a symmetry with respect to those operations. For example, a circle remains unchanged under rotation or reflection, and a circle therefore has rotational and reflection symmetry. The term "symmetry breaking" refers to the deviation from exact symmetry exhibited by many physical systems, and in general, symmetry breaking encompasses both "explicit" symmetry breaking and "spontaneous" symmetry breaking. Explicit symmetry breaking is a phenomenon in which a system is not quite, but almost, the same for two configurations related by exact symmetry. Spontaneous symmetry breaking refers to a situation in which the solution of a set of physical equations fails to exhibit a symmetry possessed by the equations themselves. In general, the term "latent heat" refers to the quantity of heat absorbed or released when a substance changes its physical phase (e.g., solid to liquid) at constant temperature. The inflationary model, first proposed by Alan Guth in 1980, proposes that quantum fluctuations in the time period 10^(-35) to 10^(-32) seconds after time zero were quickly amplified into large density variations during the "inflationary" 10^(50) expansion of the Universe in that time frame.
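(The Livio and Rees summary above quotes a gravitational force "weaker by a factor of nearly 10^(40) than the microphysical forces that act within atoms and molecules." As an illustration of where a number of that size comes from, here is a minimal Python sketch comparing the electrostatic and gravitational attraction between a proton and an electron; the choice of that particular pair is an assumption made purely for illustration, and the constants are standard values rounded to four figures.)

# Ratio of electrostatic to gravitational attraction for a proton-electron pair.
# Both forces fall off as 1/r^2, so the separation cancels out of the ratio.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9      # Coulomb constant, N m^2 C^-2
e = 1.602e-19      # elementary charge, C
m_p = 1.673e-27    # proton mass, kg
m_e = 9.109e-31    # electron mass, kg

electric = k_e * e**2          # F_electric times r^2
gravitational = G * m_p * m_e  # F_gravity times r^2

print(f"F_electric / F_gravity ~ {electric / gravitational:.1e}")
# prints ~2.3e+39, i.e. within an order of magnitude of the 10^(40) figure quoted above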
From checker at panix.com Mon Sep 5 20:57:58 2005 From: checker at panix.com (Premise Checker) Date: Mon, 5 Sep 2005 16:57:58 -0400 (EDT) Subject: [Paleopsych] U.S. Dept. of State: How to Identify Misinformation Message-ID: Greetings from the UVa library, folks! Sarah and I are here on our annual trip to Charlottesville. How to Identify Misinformation http://usinfo.state.gov/media/Archive/2005/Jul/27-595713.html How can a journalist or a news consumer tell if a story is true or false? There are no exact rules, but the following clues can help indicate if a story or allegation is true.
* Does the story fit the pattern of a conspiracy theory? * Does the story fit the pattern of an "urban legend?" * Does the story contain a shocking revelation about a highly controversial issue? * Is the source trustworthy? * What does further research tell you? Does the story fit the pattern of a conspiracy theory? Does the story claim that vast, powerful, evil forces are secretly manipulating events? If so, this fits the profile of a conspiracy theory. Conspiracy theories are rarely true, even though they have great appeal and are often widely believed. In reality, events usually have much less exciting explanations. The U.S. military or intelligence community is a favorite villain in many conspiracy theories. For example, the Soviet disinformation apparatus regularly blamed the U.S. military or intelligence community for a variety of natural disasters as well as political events. In March 1992, then-Russian foreign intelligence chief Yevgeni Primakov admitted that the disinformation service of the Soviet KGB intelligence service had concocted the false story that the AIDS virus had been created in a US military laboratory as a biological weapon. When AIDS was first discovered, no one knew how this horrifying new disease had arisen, although scientists have now used DNA analysis to determine that "all HIV-1 strains known to infect man" are closely related to a simian immunodeficiency virus found in a western equatorial African chimpanzee, Pan troglodytes troglodytes. But the Soviets used widespread suspicions about the U.S. military to blame it for AIDS. ([1]More details on this.) In his book 9/11: The Big Lie, French author Thierry Meyssan falsely claimed that no plane hit the Pentagon on September 11, 2001. Instead, he claimed that the building had been struck by a cruise missile fired by elements within the U.S. government. No such vast conspiracy existed and many eyewitness accounts and evidence gathered on the scene confirmed that the hijacked airliner had struck the building. But, nevertheless, the book was a best-seller in France and has been translated into 19 languages, demonstrating the power that even the most groundless conspiracy theories can have. ([2]More details on 9/11: The Big Lie.) Does the story fit the pattern of an "urban legend?" Is the story startlingly good, bad, amazing, horrifying, or otherwise seemingly "too good" or "too terrible" to be true? If so, it may be an "urban legend." Urban legends, which often circulate by word of mouth, e-mail, or the Internet, are false claims that are widely believed because they put a common fear, hope, suspicion, or other powerful emotion into story form. For example, after the September 11 attacks, a story arose that someone had survived the World Trade Center collapse by "surfing" a piece of building debris from the 82^nd floor to the ground. Of course, no one could survive such a fall, but many initially believed this story, out of desperate hope that some people trapped in the towers miraculously survived their collapse. ([3]More details on this.) Another September 11 urban legend is that an undamaged Bible was found in the midst of the crash site at the Pentagon. In reality, it was a dictionary. But, if a Bible had survived unscathed, that would have seemed much more significant, and been seen by many as a sign of divine intervention. ([4]More details on this.) Since 1987, the false story that Americans or others are kidnapping or adopting children in order to use them in organ transplants has been widely believed. 
There is absolutely no evidence that any such event has ever occurred, but such allegations have won the most prestigious journalism prizes in France in 1995 and Spain in 1996. ([5]More details on this.) This urban legend is based on fears about both organ transplantation and international adoptions, both of which were relatively new practices in the 1980s. As advances in medical science made organ transplantation more widespread, unfounded fears began to spread that people would be murdered for their organs. At the same time, there were also unfounded fears about the fate of infants adopted by foreigners and taken far from their home countries. The so-called "baby parts" rumor combined both these fears in story form, which gave it great credibility even though there was absolutely no evidence for the allegation. In late 2004, a reporter for Saudi Arabia's Al Watan newspaper repeated a version of the organ trafficking urban legend, falsely claiming that U.S. forces in Iraq were harvesting organs from dead or wounded Iraqis for sale in the United States. This shows how the details of urban legends can change to fit different circumstances. (More details in [6]English and [7]Arabic.) Highly controversial issues AIDS, organ transplantation, international adoption, and the September 11 attacks are all new, frightening or, in some ways, discomforting topics. Such highly controversial issues are natural candidates for the rise of false rumors, unwarranted fears and suspicions. Another example of a highly controversial issue is depleted uranium, a relatively new armor-piercing substance that was used by the U.S. military for the first time during the 1991 Gulf War. There are many exaggerated fears about depleted uranium because people associate it with weapons-grade uranium or fuel-grade uranium, which are much more dangerous substances. When most people hear the word uranium, a number of strongly held associations spring to mind, including the atomic bomb, Hiroshima, nuclear reactors, radiation illness, cancer, and birth defects. Depleted uranium is what is left over when natural uranium is enriched to make weapons-grade or fuel-grade uranium. In the process, the uranium loses, or is depleted of, almost half its radioactivity, which is how depleted uranium gets its name. But facts like this are less important in people's minds than the deeply ingrained associations they have with the word "uranium." For this reason, most people believe that depleted uranium is much more dangerous than it actually is. (More details on depleted uranium in [8]English and [9]Arabic.) Another highly controversial issue is that of forbidden weapons, such as chemical or biological weapons. The United States is regularly, and falsely, accused of using these weapons. (More details on this in [10]English and [11]Arabic.) In the same way, many other highly controversial issues are naturally prone to misunderstanding and false rumors. Any highly controversial issue or taboo behavior is ripe material for false rumors and urban legends.
Consider the source Certain websites, publications, and individuals are known for spreading false stories, including: * [12]Aljazeera.com, a deceptive, look-alike website that has sought to fool people into thinking it is run by the Qatari satellite television station Al Jazeera * [13]Jihad Unspun, a website run by a Canadian woman who converted to Islam after the September 11 attacks when she became convinced that Osama bin Laden was right * [14]Islam Memo (Mafkarat-al-Islam), which spreads a great deal of disinformation about Iraq. (More details on Islam Memo and Jihad Unspun in [15]English and [16]Arabic.) There are many conspiracy theory websites, which contain a great deal of unreliable information. Examples include: * [17]Rense.com * Australian "private investigator" [18]Joe Vialls, who died in 2005 * [19]Conspiracy Planet Extremist groups, such as splinter communist parties, often publish disinformation. This can be especially difficult to identify if the false allegations are published by front groups. Front groups purport to be independent, non-partisan organizations but actually controlled by political parties or groups. Some examples of front groups are: * The [20]International Action Center, which is a front group for a splinter communist party called the [21]Workers World Party * The [22]Free Arab Voice, a website that serves as a front for Arab communist Muhammad Abu Nasr and his colleagues. (More details on Muhammad Abu Nasr in [23]English or [24]Arabic.) Research the allegations The only way to determine whether an allegation is true or false is to research it as thoroughly as possible. Of course, this may not always be possible given publication deadlines and time pressures, but there is no substitute for thorough research, going back to the original sources. Using the Internet, many allegations can be fairly thoroughly researched in a matter of hours. For example, in July 2005, the counter-misinformation team researched the allegation that U.S. soldiers in Iraq had killed innocent Iraqi boys playing football and then "planted" rocket-propelled grenades (RPGs) next to them, to make it appear that they were insurgents. Using a variety of search terms in "Google," a researcher was able to find the [25]article and photographs upon which the allegations were based. Because weapons did not appear in the initial photographs, but did appear in later photographs, some observers believed this was evidence that the weapons had been planted and that the boys who had been killed were not armed insurgents. The researcher was also able to find [26]weblog entries (numbered 100 and 333, on June 26 and July 15, 2005) from the commanding officer of the platoon that was involved in the incident and another member of his platoon. The weblog entries made it clear that: * the teenaged Iraqi boys were armed insurgents; * after the firefight between U.S. troops and the insurgents was over, the dead, wounded and captured insurgents were initially photographed separated from their weapons because the first priority was to make sure that it was impossible for any of the surviving insurgents to fire them again; * following medical treatment for the wounded insurgents, they were photographed with the captured weapons displayed, in line with Iraqi government requirements; * the insurgents were hiding in a dense palm grove, where visibility was limited to 20 meters, not a likely place for a football game, and they were seen carrying the RPGs on their shoulders. 
Thus, an hour or two of research on the Internet was sufficient to establish that the suspicions of the bloggers that the weapons had been planted on innocent Iraqi boys playing football were unfounded. Finally, if the counter-misinformation team can be of help, ask us. We can't respond to all requests for information, but if a request is reasonable and we have the time, we will do our best to provide accurate, authoritative information. Created: 27 Jul 2005 Updated: 27 Jul 2005 References 1. http://usinfo.state.gov/media/Archive/2005/Jan/14-777030.html 2. http://usinfo.state.gov/media/Archive/2005/Jun/28-581634.html 3. http://www.snopes.com/rumors/survivor.htm 4. http://www.snopes.com/rumors/bible.htm 5. http://usinfo.state.gov/media/Archive_Index/The_Baby_Parts_Myth.html 6. http://usinfo.state.gov/media/Archive/2005/Jan/14-475342.html 7. http://usinfo.state.gov/ar/Archive/2005/May/13-191292.html 8. http://usinfo.state.gov/media/Archive/2005/Jan/24-107572.html 9. http://usinfo.state.gov/ar/Archive/2005/May/13-329204.html 10. http://usinfo.state.gov/media/Archive/2005/Mar/11-723838.html 11. http://usinfo.state.gov/ar/Archive/2005/May/13-315186.html 12. http://aljazeera.com/ 13. http://www.jihadunspun.net/ 14. http://www.islammemo.cc/ 15. http://usinfo.state.gov/media/Archive/2005/Apr/08-205989.html 16. http://usinfo.state.gov/ar/Archive/2005/May/13-401696.html 17. http://www.rense.com/ 18. http://www.vialls.com/ 19. http://www.conspiracyplanet.com/ 20. http://www.iacenter.org/ 21. http://www.workersworld.net/wwp 22. http://www.freearabvoice.org/ 23. http://usinfo.state.gov/media/Archive/2005/Apr/08-205989.html 24. http://usinfo.state.gov/ar/Archive/2005/May/13-401696.html 25. http://www.nogw.com/download/2005_plant_weapons.pdf 26. http://www.roadstoiraq.com/index.php?p=361 From christian.rauh at uconn.edu Thu Sep 8 15:38:35 2005 From: christian.rauh at uconn.edu (Christian Rauh) Date: Thu, 08 Sep 2005 15:38:35 +0000 Subject: [Paleopsych] BBC E-mail: Yes to cloning with two mothers Message-ID: <20050908_153835_001470.christian.rauh@uconn.edu> Christian Rauh saw this story on BBC News Online and thought you should see it. ** Yes to cloning with two mothers ** UK scientists have been granted permission to clone a human embryo that will have genetic material from two mothers. < http://news.bbc.co.uk/go/em/fr/-/1/hi/health/4225564.stm > ** BBC Daily E-mail ** Choose the news and sport headlines you want - when you want them, all in one daily e-mail < http://www.bbc.co.uk/dailyemail/ > ** Disclaimer ** The BBC is not responsible for the content of this e-mail, and anything written in this e-mail does not necessarily reflect the BBC's views or opinions. Please note that neither the e-mail address nor name of the sender have?been verified. If you do not wish to receive such e-mails in the future or want to know more about the BBC's Email a Friend service, please read our frequently asked questions. http://news.bbc.co.uk/1/hi/help/4162471.stm From checker at panix.com Thu Sep 8 22:01:23 2005 From: checker at panix.com (Premise Checker) Date: Thu, 8 Sep 2005 18:01:23 -0400 (EDT) Subject: [Paleopsych] CHE: A Physicist Flows Between Fields Message-ID: A Physicist Flows Between Fields The Chronicle of Higher Education, 5.9.2 http://chronicle.com/weekly/v52/i02/02a01101.htm By SCOTT SMALLWOOD Most graduate students only dream of choosing between multiple job offers. Todd M. Squires not only had that choice but then had the trickier task of choosing between departments in different disciplines. 
When the postdoctoral researcher at the California Institute of Technology went on the job market this year, he had 10 interviews and received five offers. The physics department at New York University wanted him. So did the chemical-engineering department at the Massachusetts Institute of Technology. At the University of Illinois at Urbana-Champaign, two different engineering departments -- chemical and mechanical -- made him job offers. In the end, he decided to stay in California, taking a job as an assistant professor in the chemical-engineering department at the University of California at Santa Barbara. The weird thing is, he isn't really a chemical engineer. His Ph.D. from Harvard University is in physics. And for the past three years, he has been splitting two different postdoctoral fellowships at Cal Tech -- one in the physics department and one in applied mathematics. When pressed, he calls himself an "in-betweener" or, jokingly, a "fluid mechanic." He studies microfluidics -- basically, the way a tiny bit of fluid moves. How does, say, water behave when you put it in a channel the width of a human hair? Or how might tiny crystals floating in the fluid in the semicircular canals in your ear make you dizzy when you look up? At Cal Tech, Mr. Squires, 32, held an independent postdoc position, meaning that he was not tied to another professor's research project and was able to range widely. He worked with a professor there on a major review article about microfluidic devices. He and an MIT professor explored ideas that might one day lead to tiny battery-powered microfluidic chips. He also got fascinated by the design of the semicircular canals that help vertebrates balance. Now he's kicking around a small project involving sharks. It may seem disjointed, but for Mr. Squires, who describes himself as very gregarious, being at the intersection of a number of fields feels just right. Fluid mechanics, he says, "has the perfect mixture of things that are intellectually interesting but also things that I can talk to my parents about." Happenstance After an early childhood in Wisconsin, Mr. Squires grew up in Southern California. His mother taught elementary school; his father worked in marketing for food companies. He stayed close to home for college, graduating from the University of California at Los Angeles with bachelor's degrees in both physics and Russian. Happenstance, he admits, got him into both fields. In high school, he had to choose between taking physics and physiology. "I didn't know the difference and just picked one at random," he says. At college, he tried to pass out of his foreign-language requirement by taking the Spanish exam, but he didn't score high enough. So he enrolled in Russian and ended up loving it. In addition to Russian, Mr. Squires speaks fluent French and passable Arabic. He loved traveling the world, but does less of it now that he is married and the father of two children under 22 months. He is adept at explaining his research in simple terms. He sounds a bit like an excited kid when he starts talking about how microfluidics devices could be created using the tools that have been developed for making microchips. Imagine, he says, tiny chemistry labs where a slew of reactions could be done with a single chip. Or imagine taking a tiny drop of blood and doing a full set of lab work. Imagine an implantable device that monitors the level of a certain drug in your bloodstream. Then he pauses, worried that he's spinning too many science-fiction tales. 
"I don't want to sound like a wild-eyed pitchman," he says, "but there's a whole lot of possibilities." Two generations ago, he says, "when you had the first computers that filled a room, who would have thought that now we would use computers for all the things we do?" He has a good sense of the overall possibility of the field because he worked with Stephen Quake, a professor of bioengineering at Stanford University, on a 50-page review article that will appear in the journal Reviews of Modern Physics. He has not stopped dreaming about putting microfluidic devices into the human body. He has also spent time studying one that's already there. That's essentially what the canals in our ears are. In graduate school, he collaborated on mathematical models to examine the cause of one kind of vertigo. That then prompted him to examine how the structures work. After studying the physics of the canals, Mr. Squires says, he speculated that the canals need to be the size they are to work properly. Essentially, he says, evolution has created a sense of balance that is as good as it is going to get. Sunny Days At Santa Barbara, the search committee was attracted by Mr. Squires's "maturity and breadth," according to Matthew Tirrell, dean of the College of Engineering. For instance, as a postdoc, Mr. Squires had organized sessions at scientific meetings -- a task generally reserved for more seasoned scholars, says Mr. Tirrell. "He has the capacity to summarize the whole field and he's also produced some interesting research on fluid motion," the dean says. But why pick Santa Barbara over MIT's chemical-engineering department, which is generally regarded as tops in the field? It was a tough call, Mr. Squires says. "If decisions are that hard," he says, "I figure that either all the options are great or all the options are terrible." In this case, having his family in California made staying out West attractive. And the sunshine didn't hurt. "I wouldn't boil it all down to the weather, but lifestyle is part of it," he says. "Having lunch with my kids, being able to live near the beach, being able to bike to work." Mr. Tirrell cringes when location is mentioned. "We're continually fighting the idea that's the only thing we have to offer," he says. Regardless, he is excited that his chemical-engineering program, which is considered a top-10 department, won out over some higher-ranked programs. "Part of my pleasure in attracting him is the fact that it shows it's not a no-brainer that you're going to go to MIT." Mr. Squires says that ultimately the interdisciplinary focus of Santa Barbara made the difference. At Santa Barbara he plans to keep working with other researchers -- no matter what department they're in. "Not quite fitting anywhere has its advantages," he says. "It means you can kind of fit everywhere." From checker at panix.com Thu Sep 8 22:01:39 2005 From: checker at panix.com (Premise Checker) Date: Thu, 8 Sep 2005 18:01:39 -0400 (EDT) Subject: [Paleopsych] Runner's World: How Many Calories Are You Really Burning? Message-ID: How Many Calories Are You Really Burning? http://www.runnersworld.com/article/printer_friendly/0,5046,s6-197-0-0-8402,00.html If you think running and walking both torch the same number of calories per mile, you better put down that cookie by: Amby Burfoot A few months ago I got into an argument with someone who's far smarter than I am. I should have known better, but you know how these things go. Needless to say, I lost the argument. 
Still, I learned something important in the process. David Swain is a bicyclist who likes to ride across the country every couple of years. Since I spend most of my time on my feet, I figured I could teach him something about walking and running. Perhaps I should have paid more attention to Swain's Ph.D. in exercise physiology, his position as director of the Wellness Institute and Research Center at Old Dominion University, and his work on the "Metabolic Calculations" appendix to the American College of Sports Medicine's Guidelines for Exercise Testing and Prescription. Both Swain and I are interested in the fitness-health connection, which makes walking and running great subjects for discussion. To put it simply, they are far and away the leading forms of human movement. Every able-bodied human learns how to walk and run without any particular instruction. The same cannot be said of activities such as swimming, bicycling, skateboarding, and hitting a 3-iron. This is why walking and running are the best ways to get in shape, burn extra calories, and improve your health. Our argument began when I told Swain that both walking and running burn the same number of calories per mile. I was absolutely certain of this fact for two unassailable reasons: (1) I had read it a billion times; and (2) I had repeated it a billion times. Most runners have heard that running burns about 100 calories a mile. And since walking a mile requires you to move the same body weight over the same distance, walking should also burn about 100 calories a mile. Sir Isaac Newton said so. Swain was unimpressed by my junior-high physics. "When you perform a continuous exercise, you burn five calories for every liter of oxygen you consume," he said. "And running in general consumes a lot more oxygen than walking." What the Numbers Show I was still gathering my resources for a retort when a new article crossed my desk, and changed my cosmos. In "Energy Expenditure of Walking and Running," published last December in Medicine & Science in Sports & Exercise, a group of Syracuse University researchers measured the actual calorie burn of 12 men and 12 women while running and walking 1,600 meters (roughly a mile) on a treadmill. Result: The men burned an average of 124 calories while running, and just 88 while walking; the women burned 105 and 74. (The men burned more than the women because they weighed more.) Swain was right! The investigators at Syracuse didn't explain why their results differed from a simplistic interpretation of Newton's Laws of Motion, but I figured it out with help from Swain and Ray Moss, Ph.D., of Furman University. Running and walking aren't as comparable as I had imagined. When you walk, you keep your legs mostly straight, and your center of gravity rides along fairly smoothly on top of your legs. In running, we actually jump from one foot to the other. Each jump raises our center of gravity when we take off, and lowers it when we land, since we bend the knee to absorb the shock. This continual rise and fall of our weight requires a tremendous amount of Newtonian force (fighting gravity) on both takeoff and landing. Now that you understand why running burns 50 percent more calories per mile than walking, I hate to tell you that it's a mostly useless number. Sorry. We mislead ourselves when we talk about the total calorie burn (TCB) of exercise rather than the net calorie burn (NCB). 
To figure the NCB of any activity, you must subtract the resting metabolic calories your body would have burned, during the time of the workout, even if you had never gotten off the sofa. You rarely hear anyone talk about the NCB of workouts, because this is America, dammit, and we like our numbers big and bold. Subtraction is not a popular activity. Certainly not among the infomercial hucksters and weight-loss gurus who want to promote exercise schemes. "It's bizarre that you hear so much about the gross calorie burn instead of the net," says Swain. "It could keep people from realizing why they're having such a hard time losing weight." Thanks to the Syracuse researchers, we now know the relative NCB of running a mile in 9:30 versus walking the same mile in 19:00. Their male subjects burned 105 calories running, 52 walking; the women, 91 and 43. That is, running burns twice as many net calories per mile as walking. And since you can run two miles in the time it takes to walk one mile, running burns four times as many net calories per hour as walking. Run Slow or Walk Fast? I didn't come here to bash walking, however. Walking is an excellent form of exercise that builds aerobic fitness, strengthens bones, and burns lots of calories. A study released in early 2004 showed that the Amish take about six times as many steps per day as adults in most American communities, and have about 87-percent lower rates of obesity. In fact, I had read years ago that fast walking burns more calories than running at the same speed. Now was the time to test this hypothesis. Wearing a heart-rate monitor, I ran on a treadmill for two minutes at 3.0 mph (20 minutes per mile), and at 3.5, 4.0, 4.5, 5.0, and 5.5 mph (10:55 per mile). After a 10-minute rest to allow my heart rate to return to normal, I repeated the same thing walking. Here's my running vs. walking heart rate at the end of each two-minute stint: 3.0 (99/81), 3.5 (104/85), 4.0 (109/94), 4.5 (114/107), 5.0 (120/126), 5.5 (122/145). My conclusion: Running is harder than walking at paces slower than 12-minutes-per-mile. At faster paces, walking is harder than running. How to explain this? It's not easy, except to say that walking at very fast speeds forces your body to move in ways it wasn't designed to move. This creates a great deal of internal "friction" and inefficiency, which boosts heart rate, oxygen consumption, and calorie burn. So, as Jon Stewart might say, "Walking fast...good. Walking slow...uh, not so much." The bottom line: Running is a phenomenal calorie-burning exercise. In public-health terms--that is, in the fight against obesity--it's even more important that running is a low-cost, easy-to-do, year-round activity. Walking doesn't burn as many calories, but it remains a terrific exercise. As David Swain says, "The new research doesn't mean that walking burns any fewer calories than it used to. It just means that walkers might have to walk a little more, or eat a little less, to hit their weight goal." What's the Burn? A Calorie Calculator You can use the formulas below to determine your calorie-burn while running and walking. The "Net Calorie Burn" measures calories burned, minus basal metabolism. Scientists consider this the best way to evaluate the actual calorie-burn of any exercise. The walking formulas apply to speeds of 3 to 4 mph. At 5 mph and faster, walking burns more calories than running. Your Total Calorie Burn/Mile Your Net Calorie Burn/Mile Running .75 x your weight (in lbs.) 
.63 x your weight Walking .53 x your weight .30 x your weight Adapted from "Energy Expenditure of Walking and Running," Medicine & Science in Sport & Exercise, Cameron et al, Dec. 2004. From checker at panix.com Thu Sep 8 22:01:29 2005 From: checker at panix.com (Premise Checker) Date: Thu, 8 Sep 2005 18:01:29 -0400 (EDT) Subject: [Paleopsych] SW: On Human-Non-Human Primate Neural Grafting Message-ID: Science Policy: On Human-Non-Human Primate Neural Grafting http://scienceweek.com/2005/sw050909-6.htm The following points are made by M. Greene et al (Science 2005 309:385): 1) If human neural stem cells were implanted into the brains of other primates what might this do to the mind of the recipient? Could such grafting teach us anything of value for treatment of neurological injury and disease? Could we change the capacities of the engrafted animal in a way that leads us to reexamine its moral status? These questions have gained significance since publication of research involving grafting human neural stem cells into the brains of fetal monkeys [1]. In 2004, the authors formed a multidisciplinary working group; two plenary meetings over 12 months provide the basis for this report. 2) There is considerable controversy (reflected within the discussion group) over the likely value of interspecies stem cell work for progress toward therapies [2]. We cannot graft human neural stem cells into human beings solely for experimental purposes, even if they will lead to human therapies. Group members arguing for the value of research on human cells in non-human primates (NHPs) pointed out that because the aim is to learn about human neural stem cells it makes most sense to use human lines. The fact that available NHP lines are few and poorly characterized [3] is an additional reason to use human lines. Another consideration is the need to assess candidate human cell lines for viability, potential to differentiate, and safety with regard to such possibilities as tumor formation. NHPs may be appropriate for in vivo screening. 3) Skeptics argued that differences between humans and NHPs could render results uninterpretable and that the preferred path for many questions is to study NHP neural stem cells in NHPs. Assessments of the scientific merit of the research must form and develop along with the field itself. 4) The authors unanimously rejected ethical objections grounded on unnaturalness or crossing species boundaries [4]. Whether it is possible to draw a meaningful distinction between the natural and the unnatural is a matter of dispute. However, stipulating that research is "unnatural" says nothing about its ethics. Much of modern medical practice involves tools, materials, and behaviors that cannot be found in nature but are not unethical as a consequence 5) Another concern is that human to non-human primate (H-NHP) neural grafting is wrong because it transgresses species boundaries [5]. However, the notion that there are fixed species boundaries is not well supported in science or philosophy. Moreover, human-nonhuman chimerism has already occurred through xenografting. For example, the safety and efficacy of engrafting fetal pig cells has been studied in people with Parkinson's disease and Huntington's disease without moral objection. Indeed, some have suggested that porcine sources may be less morally contentious than the use of human fetal tissue. Merely because something has been done does not prove it right. 
However, the authors see no new ethical or regulatory issues regarding chimeras themselves. 6) The central challenge is whether introducing human cells into NHP brains raises questions about moral status. A variety of reasons have been given for according different moral standing to humans and NHPs. In the Abrahamic traditions, humans are set apart by God as morally special and are given stewardship over other forms of life (Genesis 1:26-28). For Kantians, human capacities for rationality and autonomy demand that we be treated as ends in ourselves. Mill finds, in the richness of human mental life, an especially fecund source of utility. Singer, although strongly defending equal consideration of nonhuman interests, argues that self-awareness affects the ethically allowable treatment of a creature by changing the kinds of interests it can have. 7) In conclusion: The authors support the National Academy's recommendation that H-NHP neural grafting experiments be subject to special review. The authors agree that such review should complement, not replace, current review by animal-use panels and institutional review boards. The authors further recommend that experiments involving H-NHP neural grafting be required, wherever possible, to look for and report changes in cognitive function. Explicit data collection on cognition and behavior will help to ensure that ethical guidelines can be developed appropriately as the field advances. References (abridged): 1. V. Ourednik et al., Science 293, 1820 (2001) 2. J. S. Robert, Bioessays 26, 1005 (2004) 3. K.-Y. F. Pau, D. Wolf, Reprod. Biol. Endocrinol. 2, 41 (2004) 4. P. Karpowicz, C. B. Cohen, D. van der Kooy, Nat.Med. 10, 331 (2004) 5. F. Fukuyama, Washington Post, 15 February 2004, p. B04 Science http://www.sciencemag.org -------------------------------- Related Material: NEUROBIOLOGY: ON NEURAL STEM CELL INTERACTIONS The following points are made by A.E. Wurmser et al (Science 2004 304:1253): 1) The ability of stem cells to both self-renew and differentiate into many different cell types enables these versatile cells to generate and repair tissues and organs. Yet studies of the fruit fly Drosophila and of mammalian skin, intestine, bone marrow, and brain reveal that these inherent stem cell features are tightly regulated by the cells and proteins that constitute the extracellular environment (or "niche") that stem cells inhabit (1). For example, Shen et al. (2) have demonstrated that endothelial cells (ECs) that are enriched in the niche occupied by neural stem cells (NSCs) regulate NSC proliferation and induce these stem cells to become neurons in vitro. 2) It is well established that NSCs are not randomly distributed throughout the brain, but rather are concentrated around blood vessels (3-5). This location places NSCs in close proximity to the ECs that line blood vessels, facilitating communication between these two cell types (3-5). To test the degree of intercellular communication between NSCs and ECs, Shen et al (1) cultured NSCs and monitored changes in their behavior when ECs were brought into close proximity (2). These investigators maintained cultures of mouse embryonic NSCs (derived from the cerebral cortex of 10- to 11-day-old mouse embryos) by adding fibroblast growth factor-2. Under these conditions, NSCs proliferated slowly and many of them exited the cell cycle, choosing to differentiate instead (2). 
However, when NSCs were cocultured with ECs their proliferation rate doubled, resulting in the formation of large interconnected sheets of undifferentiated cells. 3) One aspect of the Shen et al strategy was to introduce ECs into NSC cultures by means of transwell inserts. The pores of the transwells were too small to allow cell-cell contact between NSCs and ECs, but were large enough to enable signaling factors secreted by ECs to diffuse into the NSC cultures. Remarkably, the removal of transwells containing ECs triggered the coordinated differentiation of proliferating NSCs into neurons. Only 9% of NSCs unexposed to ECs expressed mature neuronal markers, compared with 31 to 64% of NSCs exposed to the EC transwells. This trend also was observed with cultured NSCs derived from the subventricular zone of adult mouse brain (2). Thus, signaling molecules secreted by ECs induced a shift in the mixed population of proliferating and differentiating NSCs, pushing them toward self-renewal while simultaneously priming them for the production of neurons. References (abridged): 1. E. Fuchs et al., Cell 116, 769 (2004) 2. Q. Shen et al., Science 304, 1338 (2004) 3. T. D. Palmer et al., J. Comp. Neurol. 425, 479 (2000) 4. A. Capela, S. Temple, Neuron 35, 865 (2002) 5. A. Louissaint et al., Neuron 34, 945 ( 2002) Science http://www.sciencemag.org -------------------------------- Related Material: NEUROBIOLOGY: ON HUMAN NEURAL STEM CELLS The following points are made by Pasko Rakic (Nature 2004 427:685): 1) Neural stem cells are a focus of strong interest because of the possibility that they could be used to replace neurons that have been damaged or lost -- perhaps as a result of injury such as trauma or stroke, or through neurodegenerative disorders such as Parkinson's disease. These stem cells can give rise to neurons and their supporting cells (glia) and it is hoped that something akin to neural stem cells in the adult human brain could be stimulated to generate replacement neurons. 2) Non-mammalian vertebrates, such as the salamander, can regenerate large portions of their brain and spinal cord, but humans have evidently lost this capacity during evolution. Therefore, most research on neural stem cells is carried out on mammals such as rodents, which are genetically closer to humans. However, although mammalian genomes may be similar, this similarity masks vast species differences in the way the brain is organized and in its capacity for regeneration and susceptibility to environmental insults. The failure of brain repair in clinical trials based on the promising results seen after the use of similar procedures in rodents is sobering testimony to the importance of such species-specific distinctions. 3) Human neural stem cells behave differently from their rodent equivalents in culture(1), but direct study of human brain tissue by Sanai et al(2) demonstrates additional significant and clinically relevant species-specific differences. A large number of postmortem and biopsy samples reveal two basic findings. First, neural stem cells that can potentially give rise to neurons, as well as to two types of glial cell (astrocytes and oligodendrocytes), are situated in a region of the forebrain known as the subventricular zone. Second, a pathway known as the rostral migratory stream -- which in adult rodents contains neurons that migrate from the subventricular zone to the brain region concerned with sensing smell -- is absent in humans. 
4) In adult mammals, including humans, the subventricular zone (more commonly known as the subependymal zone[3-5]) contains cells that have the characteristics of glial cells and that can generate neuronal cells in culture. Sanai et al(2) show that in adult humans these "glial progenitor cells" form a prominent layer, or ribbon, that is restricted to a specific region in the brain that lines the lateral cerebral ventricle. This region is also present in non-human primates, but it is thinner and less well delineated than in humans(4). References (abridged): 1. Ginis, I. & Rao, M. S. Exp. Neurol. 184, 61-77 (2003) 2. Sanai, N. et al. Nature 427, 740-744 (2004) 3. Lewis, P. D. Nature 217, 974-975 (1968) 4. McDermott, K. W. & Lantos, P. L. Brain Res. Dev. Brain Res. 57, 269-277 (1990) 5. Weickert, C. S. et al. J. Comp. Neurol. 423, 359-372 (2000) Nature http://www.nature.com/nature From checker at panix.com Sat Sep 10 02:02:11 2005 From: checker at panix.com (Premise Checker) Date: Fri, 9 Sep 2005 22:02:11 -0400 (EDT) Subject: [Paleopsych] NYT: The History of Chromosomes May Shape the Future of Diseases Message-ID: The History of Chromosomes May Shape the Future of Diseases http://www.nytimes.com/2005/08/30/science/30gene.html By [3]CARL ZIMMER The common ancestor of humans and the rhesus macaque monkey lived about 25 million years ago. But despite that vast gulf of time, our chromosomes still retain plenty of evidence of our shared heritage. A team of scientists at the National Cancer Institute recently documented this evidence by constructing a map of the rhesus macaque's DNA, noting the location of 802 genetic markers in its genome. Then they compared the macaque map to a corresponding map of the human genome. The order of thousands of genes was the same. "About half of the chromosomes are pretty much intact," said William Murphy, a member of the team, now at Texas A&M University. The other chromosomes had become rearranged over the past 25 million years, but Dr. Murphy and his colleagues were able to reconstruct their evolution. Periodically, a chunk of chromosome was accidentally sliced out of the genome, flipped around and inserted backward. In other cases, the chunk was ferried to a different part of the chromosome. All told, 23 of these transformations took place, and within these blocks of DNA, the order of the genes remained intact. "It's fairly easy to see how you can convert the chromosomes from the macaque to the human," Dr. Murphy said. This new macaque study, which is set to appear in a future issue of the journal Genomics, is just one of many new papers charting the history of chromosomes - in humans and other species. While scientists have been studying chromosomes for nearly a century, only in the last few years have large genome databases, powerful computers and new mathematical methods allowed scientists to trace these evolutionary steps. Scientists hope that uncovering the history of chromosomes will have practical applications to diseases like cancer, in which rearranged chromosomes play a major part. Scientists have known for over 70 years that chromosomes can be rearranged. With a microscope, it is possible to make out the banded patterns on chromosomes and to compare the pattern in different species. Scientists discovered that different populations of fruit fly species could be distinguished by inverted segments in their chromosomes. 
Later, molecular biologists discovered how cells accidentally rearranged large chunks of genetic material as they made new copies of their chromosomes. By the 1980's, scientists were able to identify some major events in chromosome evolution. Humans have 23 pairs of chromosomes, for example, while chimpanzees and other apes have 24. Scientists determined that two ancestral chromosomes fused together after the ancestors of humans split off from other apes some six million years ago. But a more detailed understanding of how chromosomes had changed would have to wait until scientists had amassed more information. The mystery could not be solved with data alone. Deciphering the history of chromosomes is like a fiendishly difficult puzzle. One well-studied version of it is known as the pancake problem. You have a stack of pancakes of different sizes, and you want to sort them into a neat pile from small to big. You can only do so by using a spatula to flip over some of the pancakes. Even a dozen pancakes make this a viciously hard problem to solve. (A toy version of the flipping puzzle is sketched in code below.) "Flipping chromosomes is a lot like flipping pancakes," said Pavel Pevzner of the University of California, San Diego. In the mid-1990's, Dr. Pevzner and Sridhar Hannenhalli of the University of Pennsylvania invented a fast method for comparing chromosomes from two different species and determining the smallest number of rearrangements - the equivalent of pancake flips - that separate them. They introduced the method with a series of talks with titles like "Transforming Cabbage Into Turnips" and "Transforming Mice Into Men." "That opened the floodgates," said Bernard Moret of the University of New Mexico. Scientists have used methods like Dr. Pevzner's to study different groups of species. Dr. Pevzner himself joined with Dr. Murphy and 23 other scientists to analyze the last 100 million years of mammal evolution. They compared the genomes of humans to cats, dogs, mice, rats, pigs, cows and horses, using a program developed by Harris A. Lewin and his colleagues at the University of Illinois, called the Evolution Highway. The program allowed them to trace how each lineage's chromosomes had become rearranged over time. They published their results in the July 22 issue of Science. The scientists found some chromosomes barely altered and others heavily reworked. They also discovered that the rate of rearrangements was far from steady. After the end of the Cretaceous Period, when large dinosaurs became extinct, the chromosomes of mammals began rearranging two to five times as fast as before. That may reflect the evolutionary explosion of mammals that followed the dinosaur extinctions, as mammals rapidly occupied new ecological niches as predators and grazers, fliers and swimmers. More puzzling is the fact that different lineages became rearranged faster than others. "The dog's chromosomes have been evolving at least two to three times faster than cats' or humans'," Dr. Murphy said. "And the mice and rats have been going even faster than the dogs." (Rodents are by no means the record holder. A 2004 study found that sunflower chromosomes have been rearranging about three times as fast as rodents'.) The new results raise questions about how evolution makes chromosome rearrangements part of a species' genome. In many cases, these mutations cause diseases, so natural selection should make them disappear quickly from a population. But scientists have also documented some rearrangements that are not hazardous or that are even beneficial.
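To make the pancake analogy concrete, here is a minimal Python sketch of the puzzle in its toy form: sort a stack using only prefix flips and count how many were needed. This is a simple greedy strategy for illustration only; it is not the Hannenhalli-Pevzner method, which handles signed reversals of arbitrary chromosome segments, and the name pancake_sort is just a label chosen here.

    def pancake_sort(stack):
        # Sort using only "spatula" flips (prefix reversals) and count them.
        # Greedy strategy: flip the largest unplaced pancake to the top,
        # then flip it down into its final position. Assumes distinct sizes.
        stack = list(stack)
        flips = 0
        for size in range(len(stack), 1, -1):
            biggest = stack.index(max(stack[:size]))
            if biggest == size - 1:
                continue                              # already where it belongs
            if biggest != 0:
                stack[:biggest + 1] = reversed(stack[:biggest + 1])
                flips += 1
            stack[:size] = reversed(stack[:size])
            flips += 1
        return stack, flips

    print(pancake_sort([3, 1, 4, 2]))  # ([1, 2, 3, 4], 4)

The flip count is the analogue of what Pevzner's method estimates between real genomes; whether any particular rearrangement then persists in a population depends on whether it is harmful, neutral, or beneficial.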
This year, for example, scientists discovered that some Northern Europeans carry a large inverted segment on one of their chromosomes. This inversion boosts the fertility of women who carry it. Chromosome rearrangements may also play a role in the origin of new species. Scientists often find that closely related species living in overlapping ranges have rearranged chromosomes. The mismatch of chromosomes may make it impossible for the two species to hybridize. As a result, the rearrangements may then spread through the entire new species. But Dr. Murphy isn't willing to speculate whether rodents have a faster rate of chromosome rearrangements because of the way they form new species. "There really isn't enough genome sequence to be sure," he said. The Science study and the newer study on macaques suggest that chromosomes tend to break in certain places, a hypothesis first offered by Dr. Pevzner in 2003. "Genomes do not play dice," Dr. Pevzner said. "Certain regions of the genome are being broken over and over again." It is too early to say why these regions have become break points, said Evan Eichler of the University of Washington, who was not involved in the mammal study. "There's something about these regions that makes them hot, and we have to figure out what that hot factor is," he said. Dr. Eichler argues that it is important to figure out what that is because a number of human congenital diseases are associated with chromosome rearrangements at these same break points. "Here you have a beautiful connection," he said. "The same thing that causes big-scale rearrangement between a human and chimp or a gorilla, these same sites are often the site of deletion associated with diseases." Some of these diseases involve chromosome rearrangements in a fertilized egg, leading to congenital disorders. Cancer cells also undergo large-scale chromosome rearrangements, often at the same break points identified in the recent evolution study. "We could have inherited some weaknesses in our genome that we have to understand and deal with medically," said David Haussler of the University of California, Santa Cruz. "And that has to do with the history of how our genome is built." From checker at panix.com Sat Sep 10 02:02:24 2005 From: checker at panix.com (Premise Checker) Date: Fri, 9 Sep 2005 22:02:24 -0400 (EDT) Subject: [Paleopsych] NYT: Scientific Savvy? In U.S., Not Much Message-ID: Scientific Savvy? In U.S., Not Much http://www.nytimes.com/2005/08/30/science/30profile.html [International comparisons, please! The U.S., I think, will fall close to the regression line on I.Q. It's way above it in per capita income. It would be fun to regress per capita income against both I.Q. and scientific literacy. My bet it that the addition to the coefficient of correlation will be very slight.] By [3]CORNELIA DEAN CHICAGO - When Jon D. Miller looks out across America, which he can almost do from his 18th-floor office at Northwestern University Medical School in Chicago, he sees a landscape of haves and have-nots - in terms not of money, but of knowledge. Dr. Miller, 63, a political scientist who directs the Center for Biomedical Communications at the medical school, studies how much Americans know about science and what they think about it. His findings are not encouraging. While scientific literacy has doubled over the past two decades, only 20 to 25 percent of Americans are "scientifically savvy and alert," he said in an interview. Most of the rest "don't have a clue." 
At a time when science permeates debates on everything from global warming to stem cell research, he said, people's inability to understand basic scientific concepts undermines their ability to take part in the democratic process. Over the last three decades, Dr. Miller has regularly surveyed his fellow citizens for clients as diverse as the National Science Foundation, European government agencies and the Lance Armstrong Foundation. People who track Americans' attitudes toward science routinely cite his deep knowledge and long track record. "I think we should pay attention to him," said Eugenie Scott, director of the National Center for Science Education, who cites Dr. Miller's work in her efforts to advance the cause of evolution in the classroom. "We ignore public understanding of science at our peril." Rolf F. Lehming, who directs the science foundation's surveys on understanding of science, calls him "absolutely authoritative." Dr. Miller's data reveal some yawning gaps in basic knowledge. American adults in general do not understand what molecules are (other than that they are really small). Fewer than a third can identify DNA as a key to heredity. Only about 10 percent know what radiation is. One adult American in five thinks the Sun revolves around the Earth, an idea science had abandoned by the 17th century. At one time, this kind of ignorance may not have meant much for the nation's public life. Dr. Miller, who has delved into 18th-century records of New England town meetings, said that back then, it was enough "if you knew where the bridge should be built, if you knew where the fence should be built." "Even if you could not read and write, and most New England residents could not read or write," he went on, "you could still be a pretty effective citizen." No more. "Acid rain, nuclear power, infectious diseases - the world is a little different," he said. It was the nuclear power issue that first got him interested in public knowledge of science, when he was a graduate student in the 1960's. "The issue then was nuclear power," he said. "I used to play tennis with some engineers who were very pro-nuclear, and I was dating a person who was very anti-nuclear. I started doing some reading and discovered that if you don't know a little science it was hard to follow these debates. A lot of journalism would not make sense to you." Devising good tests to measure scientific knowledge is not simple. Questions about values and attitudes can be asked again and again over the years because they will be understood the same way by everyone who hears them; for example, Dr. Miller's surveys regularly ask people whether they agree that science and technology make life change too fast (for years, about half of Americans have answered yes) or whether Americans depend too much on science and not enough on faith (ditto). But assessing actual knowledge, over time, "is something of an art," he said. He varies his questions, as topics come and go in the news, but devises the surveys so overall results can be compared from survey to survey, just as SAT scores can be compared even though questions on the test change. For example, he said, in the era of nuclear tests he asked people whether they knew about strontium 90, a component of fallout. Today, he asks about topics like the workings of DNA in the cell because "if you don't know what a cell is, you can't make sense of stem cell research." Dr. 
Miller, who was raised in Portsmouth, Ohio, when it was a dying steel town, attributes much of the nation's collective scientific ignorance to poor education, particularly in high schools. Many colleges require every student to take some science, but most Americans do not graduate from college. And science education in high school can be spotty, he said. "Our best university graduates are world-class by any definition," he said. "But the second half of our high school population - it's an embarrassment. We have left behind a lot of people." He had firsthand experience with local school issues in the 1980's, when he was a young father living in DeKalb, Ill., and teaching at Northern Illinois University. The local school board was considering closing his children's school, and he attended some board meetings to get an idea of members' reasoning. It turned out they were spending far more time on issues like the cost of football tickets than they were on the budget and other classroom matters. "It was shocking," he said. So he and some like-minded people ran successfully for the board and, once in office, tried to raise taxes to provide more money for the classroom. They initiated three referendums; all failed. Eventually, he gave up, and his family moved away. "This country cannot finance good school systems on property taxes," he said. "We don't get the best people for teaching because we pay so little. For people in the sciences particularly, if you have some skill, the job market is so good that teaching is not competitive." Dr. Miller was recruited to Northwestern Medical School in 1999 by administrators who knew of his work and wanted him to study attitudes and knowledge of science in light of the huge changes expected from the genomic revolution. He also has financing - and wears a yellow plastic bracelet - from the Lance Armstrong Foundation, for a project to research people's knowledge of clinical trials. Many research organizations want to know what encourages people to participate in a trial and what discourages them. But Dr. Miller said, "It's more interesting to ask if they know what a clinical trial is, do they know what a placebo is." The National Science Foundation is recasting its survey operations, so Dr. Miller is continuing surveys for other clients. One involves following people over time, tracing their knowledge and beliefs about science from childhood to adulthood, to track the way advantages and disadvantages in education are compounded over time and to test his theory that people don't wait until they are adults to start forming opinions about the world. Lately, people who advocate the teaching of evolution have been citing Dr. Miller's ideas on what factors are correlated with adherence to creationism and rejection of Darwinian theories. In general, he says, these fundamentalist views are most common among people who are not well educated and who "work in jobs that are evaporating fast with competition around the world." But not everyone is happy when he says things like that. Every time he goes on the radio to talk about his findings, he said, "I get people sending me cards saying they will pray for me a lot." From checker at panix.com Sat Sep 10 02:02:36 2005 From: checker at panix.com (Premise Checker) Date: Fri, 9 Sep 2005 22:02:36 -0400 (EDT) Subject: [Paleopsych] Runners World: Should You Run Naked? Message-ID: Should You Run Naked? 
http://www.runnersworld.com/article/printer_friendly/0,5046,s6-187-0-0-6844,00.html Nothing came between ancient Olympians and their performance. Were they onto something? by: Amby Burfoot If you ask me, the ancient Olympians were a lot smarter than we are. They had the good sense to run, jump, and throw in the nude. When you put anything between your skin and the environment--like shorts and a singlet, for example--you only decrease your body's cooling efficiency (even if you're more...comfortable in certain areas). The so-called "modern" Olympians of 1896 were smarter than us, too. They did their running, jumping, and throwing in April. Some athletes complained about the chilly, damp weather, but Spiridon Louis gave thanks to Zeus all the way to his (clothed) marathon victory in 2:58:50. Unfortunately, Olympic Marathons have been getting hotter ever since. The 1900 Olympic Marathon started at 2:36 p.m. under a 95-degree Parisian sun. Twelve years later, in Stockholm, a Portuguese runner died in the sweltering Olympic Marathon. Many of us remember Gabriele Andersen Schiess staggering across the finish line in the 1984 Women's Olympic Marathon in Los Angeles. In Athens this month, both the men's and women's marathons will start at 6 p.m., when average temperatures are in the mid-80s, though the city has a record August high of 109. And the marathoners will be running on black asphalt that has been simmering for 12 hours. "It's a terrible disservice that the marathoners will be forced to compete in conditions where they can't perform their best, and could actually hurt themselves," says Dr. William Roberts, medical director of the Twin Cities Marathon and president of the American College of Sports Medicine. To help athletes deal with the Athens weather, the U.S. Olympic Committee has been holding educational meetings since last September, when it organized a conference called "Heat, Humidity and Air Pollution: What to Expect in Athens 2004." In May, the top U.S. marathoners gathered in Colorado Springs for the latest update. "We believe the heat actually opens the window of possibilities for our marathoners," says U.S. men's Olympic distance coach Bob Larsen. "We'll leave no stone unturned in our search for scientific approaches to running in the heat." The lessons learned by the marathon team will also work for you. Here are some of the highlights. Heat Acclimation Many years of heat acclimation research have convinced most experts that you can do a good job of adjusting to the heat in eight days, a better job in 14, and perhaps better still in 21. The last physiological variable to adapt is your sweat rate, which takes eight to 14 days to reach maximum efficiency. Other, faster responders include increased plasma volume, decreased sodium concentration in the blood, decreased heart rate while running, decreased perceived exertion, and increased running economy. U.S. track athletes will be given the chance to attend a pre-Olympic training camp on Crete about two weeks before they move to Athens. The runners will follow a heat-training protocol outlined by Randy Wilber, Ph.D., of the USOC sports sciences department, who suggests the following: First run in the morning or evening cool; then move to warmer times of the day; finally, increase the length and intensity of your midday workouts. Perhaps no runner has thought more about heat training and racing than Alberto Salazar. Before the 1984 Olympic Marathon he traveled to the U.S. 
Army Labs in Natick, Massachusetts, to get tested in a heat chamber (where sweat production is measured) and learned to chug two quarts of fluid before every workout. But then he crashed. He now believes he did too many hard 20-milers in the heat. "I was exhausted from the first step of the marathon," he says. He finished 15th in 2:14:19. Today Salazar is coaching Dan Browne, one of the 2004 U.S. marathon qualifiers. He plans to have Browne do occasional workouts in a Nike heat chamber and to cut back on the intensity of his speedwork. "No one's going to run 2:06 in Athens, so we don't have to worry about training for that pace," Salazar says. Hydration Everyone knows drinking fluids is supposed to help you run faster. But you have to slow down to grab your drinks. America's Steve Spence worked on this dilemma when he was training for the hot, humid World Championships Marathon in Tokyo in 1991. Spence set up a water table on his local track, and then practiced drinking while running intervals at faster-than-marathon pace. "I figured if I got good at taking my drinks at this pace, it would come easy in the marathon," he says. Spence claimed the bronze medal. A couple of months ago, Alan Culpepper, another 2004 marathon qualifier, visited the Gatorade Sports Science Institute in Illinois to get a better idea of his sweat production and hydration needs. When he ran for an hour in a heat chamber cranked up to 85 degrees, he sweat 1.4 liters. He also learned that he is a salty sweater. "I'm much more aware now of my drinking and sodium needs," Culpepper says. "I feel more prepared to handle the heat challenges in Athens." Superhydration Storing extra water would be nice, but runners aren't camels. Still, two simple substances seem capable of promoting superhydration: common salt and glycerol, a liquid supplement. A New Zealand study presented at this year's American College of Sports Medicine meeting showed that well-trained runners who prehydrated with a heavily salted drink were able to exercise 20 percent longer in 90-degree weather than when they prehydrated with a minimally salty beverage. Not all glycerol studies have shown an improvement in hydration status or endurance performance, but a two-year-old study with Olympic distance triathletes produced convincing results. In a randomized, double blind, crossover study in 87-degree conditions, the triathletes slowed down much less with glycerol than without it. "Glycerol lets you increase the amount of standing water on board," says U.S. marathon guru David Martin, Ph.D. "It's nice to have that extra amount during a long, hot race." Spence readily admits he used glycerol in Tokyo, Keith Brantly says he used it in his best marathons, and Salazar says Browne will probably test glycerol to see how it works for him. Cool Vests In January, a team from the University of Georgia studied college distance runners covering 5-K in a 90-degree heat chamber with and without ice vests to cool their core before their efforts. The "precooled" runners finished 13 seconds faster, which is more than the gap that will separate many gold-medalists and fourth-place finishers in Athens. Recently, the folks at Nike Sport Research have been working to design an improved cooling vest that places more of the body's surfaces closer to larger volumes of ice. Only field hockey players have tested it (successfully, Nike says), but Lance Armstrong and Paula Radcliffe were both trying the vest in early summer. 
Cool Clothing You already know that a white shirt will absorb less heat than a black one. And for the past decade you've read about the amazing advances of breathable microfibers. But wait, those shirts are designed to keep you warm and dry in the winter. Do you really want that on a hot summer's day? Nope. So four years ago Nike produced a shirt that several U.S. runners wore in the Sydney Olympics. This white shirt sat off the skin on small bumps (allowing air to circulate), was constructed of a large fishnet weave (more air circulation), didn't absorb sweat (leaving it on the skin to cool you via evaporation), and was made of recycled plastic bottles. Home run! Too bad Nike called the shirt the Stand-Off Distance Singlet (because of the way it stood off your skin), which sounded too much like a shirt with a body-odor problem. This year Nike has produced something called the Nike Sphere Cool Marathon Singlet, with aerodynamic seam placement, mesh construction, and patent-pending "Zoned Venting" technology. But give me a Stand-Off Distance Singlet, and I'll show you a really great hot-weather running shirt. Here's my advice to the U.S. marathoners: Bring your scissors to Athens and cut your racing singlet as short as you can. Research by exercise physiologist Timothy Gavins, Ph.D., has shown that "the chimney effect" can improve body cooling. This refers to air moving up the bottom of your untucked shirt and out the top. Or just run naked. You'll be reconnecting with your Olympic forebears, increasing your chances of a medal, and giving a big boost to NBC's Olympic TV ratings. From checker at panix.com Sat Sep 10 02:02:42 2005 From: checker at panix.com (Premise Checker) Date: Fri, 9 Sep 2005 22:02:42 -0400 (EDT) Subject: [Paleopsych] Re: Runners World: Should You Run Naked? Message-ID: ---------- Forwarded message ---------- Date: Fri, 2 Sep 2005 11:52:51 +0200 From: Amara Graps Reply-To: World Transhumanist Association Discussion List To: wta-talk at transhumanism.org Subject: [wta-talk] Runners World: Should You Run Naked? Beyond sports - There are many many reason for why I think peeling off the clothes is a good idea. Here I repost something I sent to the extropians three years ago. Amara Date: Sat, 19 Jan 2002 03:42:48 +0100 To: extropians at extropy.org From: Amara Graps Subject: Beingness Sender: owner-extropians at extropy.org Reply-To: extropians at extropy.org Spike wrote: > Regarding public nudity, there is a prominent nude beach near where > I grew up, just north of Kennedy Space Center, Playalinda beach. [...] > on the most perfect beach weather days. But the last thing one > will see after starting that hike and the first thing one will see at > the other end are countless naked people. That would be fine, > except for the fact that the kinds of people who generally go nude > on the beach are exactly those who you would really prefer not to > see naked. Ever. Not even in ones worst nightmare. Now now Spike. Naturist beaches and resorts are freedom, in an ultimate sense. What better way to see the marvelous variety of shapes and sizes in which the the human body manifests itself? Social roles, economic classes, sex roles reduced or removed, and we can be who we are, simply. "If it were perfectly natural to go nude, we'd all be born that way." 
General Naturist/Nudist Information http://www.mbay.net/~cgd/naturism/nlink01.htm Being and Nakedness http://www.naturistplace.com/ REC.NUDE FAQ: Naturist Site Reports: http://www.faqs.org/faqs/nude-faq/beaches/ The following pieces are from: Humorous Introduction to Naturism http://www.netnude.com/main/intro.html#intro {begin quote} Nobody knows for certain exactly how many naturists there are in the world, but the numbers of those enjoying a clothes optional lifestyle appear to be increasing. Unfortunately, naturism still carries a stigma, born largely of ignorance of the truth. To some naturists are well meaning but slightly dotty individuals, who meander naked through wooded glades, pausing in catalogue poses behind strategically placed leaves. To others, they are immoral hedonists, congregating in mixed groups to enjoy pleasures of the flesh in orgy situations not seen since Caesar hung up his laurels. Or they are perverts trying to corrupt the 'normal'way of life. As with the majority of prejudices based on lies, misunderstandings and half-truths, the reality of life for the average naturist is very different indeed. I COULD NEVER BE A NUDIST ANYWAY - JUST LOOK AT MY BODY! That's the whole point though. Naturism isn't about looking at bodies - naturists are not exhibitionists. It's just about enjoying the freedom that a clothes optional atmosphere brings. Naturism is about accepting the human body for what it is - nothing to be ashamed of. So the men don't need to hit the gym for six months, buffing their muscles to within an inch of their lives in order to gain entry. And the women don't have to look like Baywatch babes. The media is largely responsible for promoting this idea of body perfection, but the truth is that the vast majority of people do not now, nor are they likely to ever resemble this false ideal. So for naturists there is no such thing as too fat, too thin, too short, too tall, too hairy. Nobody's going to comment on the size or shape of your breasts or critically evaluate your genitalia. And if you have any surgical scars or other distinguishing marks you needn't worry - ignore them just like everyone else will. For most people, their initial discomfort disappears very quickly, once they realize they are not being judged on their appearance. BUT WHAT DO PEOPLE GET OUT OF IT? IF IT'S NOT ABOUT SEX, JUST WHAT IS IT ABOUT? It's about relaxation, freedom from restriction - and to a very large degree, it's about honesty. Naturists are judged on their personalities alone. They take away the trappings that most of us have around us every day. They have less to 'hide behind'. This is very healthy, because it means that friendships are built on truth - as people get used to being open with each other, there is less temptation to embellish![...] Being nude can also be incredibly relaxing. The feeling of air, sun and water on the skin is a terrific stress reliever. [...] IT CAN'T BE HEALTHY FOR CHILDREN THOUGH, SURELY. On the contrary, children who grow up in a naturist environment usually have far fewer hang-ups than other kids. Once again, they are not being subjected to premature sexual situations - they grow up around other children and adults, understanding that the body is not something to be hidden and ashamed of. They know anatomy of the human body, and it is less of a 'taboo' to be explored at the earliest opportunity. 
There are fewer incidences of teenage pregnancy, sexually transmitted diseases and criminal behaviour amongst nudist children than amongst other children. WHERE WOULD I PUT MY SUNGLASSES!? As for the sunglasses, well friends have found that nipple rings are the perfect holders for their Ray Bans. If you don't fancy body piercing, though, a small bag slung around your neck or carried with you is the perfect repository for your small change and other necessities. {end quote} From checker at panix.com Sat Sep 10 02:06:19 2005 From: checker at panix.com (Premise Checker) Date: Fri, 9 Sep 2005 22:06:19 -0400 (EDT) Subject: [Paleopsych] CHE: A Chilly Climate on the Campuses (several articles) Message-ID: A Chilly Climate on the Campuses (several articles) The Chronicle of Higher Education, 5.9.9 http://chronicle.com/weekly/v52/i03/03b00701.htm FORUM A Chilly Climate on the Campuses Rarely has the climate on college campuses seemed such a cause for concern. Over the summer, two broad coalitions issued major new statements reaffirming academic freedom and autonomy. The American Council on Education and 27 other higher-education organizations said they were responding to university presidents, who need guidance in the face of increasing challenges to those principles. Almost concurrently, the presidents of 16 major universities around the world, also citing constraints on intellectual discussion, brought out their own statement reaffirming academic rights and responsibilities. The American Federation of Teachers weighed in too, calling for stronger opposition to recent attempts to involve the government in university business. Scientists warn that antiterrorism measures and public controversy over such issues as stem-cell research and evolution are making it more difficult to conduct and share research. Historians worry about the rapidly increasing level of classification of government documents. Foreign students and scholars say they face new obstacles in the wake of September 11 to studying, teaching, and publishing in the United States. At the same time, conservative students and scholars are calling attention to the hostile climate they say they have long felt on campuses. The campaign by David Horowitz, a leading conservative activist, for an "academic bill of rights" that he says would promote intellectual diversity has garnered nationwide attention, and variations of the measure are making their way through several state legislatures. An organization of concerned parents, NoIndoctrination.org, has begun to post allegations of bias against students in the classroom on the World Wide Web. What is notable is not that so many people are talking about a big chill, but that so many different people -- representing very different perspectives -- are doing so. Scientists see efforts to promote the teaching of intelligent design as a threat to their intellectual integrity; religious believers say they and their beliefs are still unwelcome in the academic marketplace of ideas. The president of Harvard touches off a firestorm with remarks about women's aptitude for science -- a sign to many of his critics of the chilly reception women find on campus, to his defenders that some topics are off limits today. Jewish students complain of anti-Israeli bias among professors; scholars in Middle East studies say they are being harassed by pro-Israel groups. Affirmative action, immigration, ethnic studies, gay studies -- the topics spark more and more public controversy. 
Some professors suggest they are censoring their comments on them. Is today's intellectual climate chillier than it once was? If so, for whom? Why? The Chronicle asked a number of commentators for their views. ------------------ http://chronicle.com/weekly/v52/i03/03b00702.htm Freer for Some, More Inhibited for Others By ROBERT M. O'NEIL Close observers of American higher education are regularly asked to assess the campus climate for free inquiry and expression. Their responses tend to be disappointingly eclectic -- disappointing, that is, to those outside the academy who seek a simple answer. Any honest appraisal of the current condition of campus speech is mixed, an amalgam of good news and bad news. Some sectors of American campuses seem freer to speak than ever before, while others may be inhibited to a degree not seen in quite some years. The resulting paradox largely accounts for the news media's fascination with the issue. Three groups have benefited from increased tolerance and attention. Gay, lesbian, and bisexual students and faculty members, who until recently revealed their sexual orientation and related views at grave risk, may not yet be free of artificial constraints, and are surely not fully accepted at all institutions. Yet, even at the highest levels of administration, they are more widely accepted than in the past. Much the same can be said of politically conservative students, whose views were not always welcome, especially during times of political turmoil like the Vietnam War era. Through the efforts of concerned national groups like NoIndoctrination.org, and of advocates like David Horowitz and his allies -- who include legislators, alumni, and other policy makers -- the far-right end of the campus political spectrum now seems better protected in speaking freely on national policy, course offerings, fee-allocation rules, and most other topics. Finally, and ironically, among the beneficiaries are those who utter sexist, racist, or homophobic remarks. Mercifully, the speech-code mania of the late 80s and early 90s has abated. Offensive slurs and the like may be no more acceptable these days than ever, but they are less likely to be proscribed by campus rules. Informal constraints do and should exist, including condemnation by senior administrators, but such uncongenial people need not be punished or banished. There are also some prominent casualties in the current mixed climate. Some outspoken liberals who have been Horowitz's targets have clearly felt constrained by legislative interest in his academic bill of rights, and by direct inquiries in several states into allegedly biased classroom dialogue. Notably inhibited on many campuses are those who hold deeply emotional views on either side of Middle East tensions. Whether pro-Israeli or pro-Palestinian, strident critics of American policy risk being labeled "an enemy of ____" or "anti-____" simply for making a strong public statement about current events in this profoundly troubled part of the world. Least definite among dimensions of the current intellectual climate is speech related to September 11, the Iraq war, and other national-security issues. Outspoken antiwar professors have, for example, fared better than one might have anticipated four years ago. Plaintiffs in suits challenging the USA Patriot Act and other measures seem to have suffered no reprisals. 
Yet foreign scholars, even from relatively nonsensitive parts of the world, have found this country less hospitable than it was before September 11, and even their ability to collaborate on research projects with American scientists is now subject to growing constraints. On this one point, the jury is very much still out. Robert M. O'Neil is founding director of the Thomas Jefferson Center for the Protection of Free Expression and a professor of law at the University of Virginia. _________________________________________________________ http://chronicle.com/weekly/v52/i03/03b00703.htm The New McCarthyism By JONATHAN R. COLE A rising tide of anti-intellectualism and intolerance of university research and teaching that offends ideologues and today's ruling prince is putting academic freedom -- one of the core values of the university -- under more sustained and subtle attack than at any time since the dark days of McCarthyism in the 1950s. As professors are publicly savaged for their ideas, often by outside groups, colleges are coming under pressure to fire them or control what they say in the classroom. Witness the furor last year over a purported "documentary" by the Boston-based David Project, Columbia Unbecoming, that charged professors with anti-Israel bias, or the Orwellian efforts by the national group Students for Academic Freedom that -- in the name of ending the alleged politicization of the academy -- attempt to limit legitimate scholarly discourse. As political ideology trumps scholarly consensus, the government is undermining the peer-review system and the norms of scholarship. Conservative ideologues in Congress, for example, are trying to place political appointees on committees to monitor area-studies programs; the Bush Administration and its followers on Capitol Hill and in statehouses are trying to intimidate professors whose work on topics like global warming or the transmission of HIV calls into question administration priorities. Such arbiters of truth are selectively bullying professors by investigating their work or threatening to withdraw federal grant support for projects whose content they find substantively offensive. In resisting stem-cell research and supporting teaching intelligent design along with evolution, they have cast doubt on scientific expertise and legitimated the latest form of anti-intellectualism in America. The USA Patriot Act allows the government to secretly monitor what students and faculty members read or transmit over the Internet; and the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 places such extraordinary constraints on laboratory scientists that some of our most distinguished immunologists are abandoning important research -- for example, on vaccines to prevent smallpox, anthrax, and West Nile virus -- that could help deter terrorism. Foreign students and researchers from scores of nations are finding it increasingly difficult to obtain visas to study or work in the United States, disrupting the flow of the best talent to American universities. The current attacks on academic freedom are not the only threats to free discussion in the university: Too many subjects, like those related to identity politics and challenges to reigning academic dogma, are also considered off limits. The result is that it has become increasingly difficult within the academy itself to have an open, civil debate about many topics. 
Scholars and scientists are often exercising their right to remain silent rather than face the potential scorn, ridicule, sanctions, and ostracism that challenging shoddy evidence and poor reasoning on politically sensitive topics can invite. Why does that matter? Universities remain perhaps the last sanctuary for the relatively unbridled and unfettered search for truth and new important ideas. Without a climate of free inquiry, creativity and discovery will suffer. Today American research universities are the single most important source for major new discoveries that improve the health and social and economic welfare of people around the world. Tie a tourniquet around that free flow of intellectual energy, and we will halt the production of knowledge that is necessary for conquering disease and poverty and for improving the quality of everyday life. The sad fact, however, is that few academic leaders and prominent members of their faculties are rising to the defense of academic freedom. Where is the Robert Hutchins of today, who protected the idea of the university against ideological foes during the 1940s and 1950s? As Hutchins said, it is "not how many professors would be fired for their beliefs, but how many think they might be." It is time to recognize the seriousness of the current attacks, analyze carefully the bases for them, scrutinize evidence on their incidence and consequences, and organize a defense of the university against those intent on undermining its values and quality. Jonathan R. Cole is a university professor and former provost and dean of faculties at Columbia University. ______________________________ http://chronicle.com/weekly/v52/i03/03b00801.htm The Ideological Corruption of Scholarly Principles By MARK BAUERLEIN To listen to the professoriate and the scholarly organizations, one would think that a purge was being readied. The sullen alarm, the feverish visions of witch hunts, the "chill" -- it's the Red Scare all over again. The most protected labor group in our time, tenured faculty members, regards conservative sallies as nothing less than the harbinger of "New McCarthyism." But however much they raise the specter of persecution, there is a difference. People lost jobs and reputations 50 years ago. Today the attack on left-of-center bias doesn't jeopardize anybody's job, and to be criticized by the right is a distinction. Has anyone been materially damaged by NoIndoctrination.org or the American Council of Trustees and Alumni? So why are professors upset? Because something is, indeed, threatened: the ideological grip of liberals and leftists on campuses. Under the public eye, they can no longer propagate their viewpoints as if they possessed the only rational and moral approach to cultural or political matters. That is a reasonable limitation, but it strikes professors as a predatory incursion -- which just shows how insulated professors are. They've lived too long without challenge, and the dissenting voice comes off as evil-spirited or stupid. So much conformity has an institutional effect: Liberal and leftist beliefs are so abiding that they have sunk deep into academic practices and acquired a disciplinary sheen. That conversion of ideological belief into scholarly principle was exemplified by Roger Bowen, general secretary of the American Association of University Professors, in a defense of liberal bias at an American Enterprise Institute symposium last February. 
In anthropology, he explained, the focus lies "on questioning religious and cultural myth, particularly myth that celebrates national, cultural, or racial superiorities." He continued, "Sociologists tend to inquire on the origins of inequality as a source of alienation," while political scientists "focus on questions of legitimacy" and historians "look at progress frequently in terms of overcoming inequalities of the past." Each of those tendencies, Bowen acknowledged, matches progressivist or liberal premises, with scholars playing adversarial roles. How easily they slide into measures of competence! What if a graduate student in history argued in a job talk that what has been termed "progress" has, in fact, introduced new and pernicious forms of inequality? His work wouldn't be recognized as authentic history. Hence an ideological judgment may be expressed as a disciplinary one. Disciplines should be based on subject matters and rules of evidence, not on agendas such as questioning social hierarchies. Once professors began to make certain political aims essential to the disciplinarity of humanities and social-science disciplines, they institutionalized their politics and impoverished their campuses. Here is where conservatives, libertarians, and traditionalists have felt a chill for decades. When the professoriate worries as much about bias at the root of disciplines as they do about conservative proposals for balance in the classroom, then we can take their reactions seriously. Mark Bauerlein is a professor of English at Emory University. _____________________________________________________________ http://chronicle.com/weekly/v52/i03/03b00802.htm We Are Part of the Problem By MARC H. BRODSKY Recent responses by scholarly societies, professional organizations, publishers, and professors to what they perceive as government restrictions or public disapproval illustrate the adage that the most effective censorship is one that is self-imposed. That's a real danger for scientific discourse today. Consider reactions to the threat to freedom of speech posed by regulations that the U.S. Department of Treasury issued through its Office of Foreign Assets Control (known as OFAC). The regulations, which have a long history, authorize the president to impose trade embargoes against nations deemed to be enemies. In 1988 Congress explicitly exempted "informational materials" from such regulations, subsequently making it clear that it did not intend to stop publishers, directly or indirectly, from importing or exporting information protected under the First Amendment to the U.S. Constitution. Despite that, in 1999 the Treasury Department's office ruled that editing and other steps normally taken by publishers with imported information from embargoed countries were not exempt from regulation. Alerted by the government in 2003, some publishers sought interpretation of how much processing, like editing and peer review, they might do with manuscripts from embargoed countries. The Treasury Department reiterated its unfounded regulations on editing imported materials. But instead of challenging the regulations, several publishers -- academic, professional-society, and commercial -- censored themselves and stopped handling articles and books from Cuba and Iran. Even I, a firm advocate of free speech unfettered by government regulation, at first found myself parsing OFAC statements and rulings. 
But upon reflection, I said to myself and my colleagues that such an exercise was a waste of time, as the rulings clearly contradicted well-established freedoms and supporting legislation, as well as my mission to disseminate scientific information. We needed to continue publishing and to challenge OFAC in court. Nevertheless, many respectable organizations persisted in self-censorship, even after OFAC, subsequent to our court challenge, backed down by issuing a general license (but still insisting it had the authority to issue regulations). Even now, some publishers, out of fear, still occasionally censor themselves. Similarly, while some scientists have stood up to religious groups that present "intelligent design" as an alternative to evolution, too many have not. Today some religious conservatives, in the guise of a nonexistent scientific controversy, advocate presenting intelligent design in school science courses as just as valid as evolution -- which they claim is only an unproven theory. Evolutionary theory is well-documented science. Intelligent-design advocates have many fallacies and hypocrisies in their arguments, but nevertheless there is an insidious result: self-censorship by some academics and schoolteachers. Rather than explain to their students that evolution is one of science's most valuable and well-established bases for scientific progress, they withdraw from the fray, fearing pressure from religious groups -- and even the White House -- that seek a role for religious alternatives in the science classroom. I fear that such chilling pressure is winning in a struggle over scientific discourse today. What this means is that academics must reflect on their basic mission. They need to avoid letting pressures from government and others lead them to self-imposed steps that would undermine the dissemination of correct and valued science. Marc H. Brodsky is executive director and CEO of the American Institute of Physics and chair of the Professional and Scholarly Publishing Division of the Association of American Publishers. ___________________________________________ Rights for Some People, Not Others By CHON A. NORIEGA For those of us who study minority issues, today's intellectual climate is -- as it has always been -- contentious. But it's becoming chilly in new, and frightening, ways. Within an hour of releasing a policy brief on noncitizens and voting in California, our Chicano-studies center here in Los Angeles received a deluge of fax and e-mail messages and telephone calls. Most expressed unbridled hatred and disgust for the report, its author, and our center, vowing to fight our alleged campaign "to turn the United States into Mexico." And oh yes, they promised to ask the state to cut off our funds. Both talk radio and anti-immigration groups continued the effort for several months before turning to other incidents. What had we done? The report -- written by Joaquin Avila, an expert on minority voting rights who had twice argued successfully before the U.S. Supreme Court and been awarded one of the MacArthur Fellows Program's "genius grants" -- drew upon recent census data that showed an increasing number of California cities with large noncitizen adult populations. Avila acknowledged the limited prospects for political integration in the state, but nonetheless recommended further debate and research on noncitizen participation in local government. 
That included neighborhood councils, but also voting for local office, something that already occurs in Chicago, New York City, and Maryland. While some of those who protested were open to reasoned debate, most contented themselves with comments like, "Mexicans need to learn birth control." Perhaps more troubling, everyone (including the news media) conflated noncitizens with "illegals," conveniently ignoring the status of legal resident. In the process, they quickly slid into assuming that only U.S. citizens were entitled to civic participation or protection of the law. What was most surprising was that those who contacted us -- even those who made explicit threats -- signed their names and, in some cases, provided their phone numbers and addresses. No one wanted "balance" on this issue; they simply wanted us to go away and were confident they spoke for all Americans and taxpayers (assuming the two were the same). If there is a chill taking place on the campuses, it stems from those kinds of presumptions: Some people have rights -- including freedom of expression -- and others do not. One of the consequences of this turn is that the public university is now seen as the advocate, if not the author, of the research its faculty members produce, rather than as a site for presenting, examining, and challenging ideas. The CNN host Lou Dobbs exemplified that attitude when he criticized our report: "And it has the imprimatur of UCLA, one of the nation's most respected universities, calling for voting rights for illegal aliens?" Of course, protests against immigrant and minority rights are nothing new. Several colleagues have spoken to me matter-of-factly about the filing-cabinet drawers where they keep angry letters they have received over the decades. One still receives hate mail for a study noting that more than half of all births in California are now to Latinos, as if he were personally responsible! What is new is that recent critics are more emboldened: An organized sector of the electorate, with significant access to the mass media, prefers to silence public efforts to study the profound demographic changes and social disparities in our society. If those efforts succeed, our society will fly backward into the future, like Walter Benjamin's angel of history, our gaze fixated on a past that will seem like "one single catastrophe which keeps piling wreckage upon wreckage." Oddly enough, that effort to restrict knowledge and rights is touted as a vision of progress. Chon A. Noriega is a professor of film, television, and digital media and director of the Chicano Studies Research Center at the University of California at Los Angeles, and editor of Aztlán: A Journal of Chicano Studies. ______________________________________________ The Chill Is Nothing New By GREG LUKIANOFF There is a chill on campus, but that's nothing new. For decades, campus speech has been chilled by speech codes and other attempts to prevent expression that might offend. Some would like to imagine that the excesses of "political correctness" are ancient history, but repression in the name of tolerance hasn't gone anywhere. Oppressive speech codes are not only still around -- they have actually multiplied, even after numerous court decisions declared them unconstitutional. Within the past year, college students have been punished for such things as expressing a religious objection to homosexuality and arguing that corporal punishment may be acceptable.
Students in Illinois were told they could not hold a protest mocking affirmative action. Christian students in Florida were banned from showing The Passion of the Christ. A student in New Hampshire was expelled from the dorms after posting a flier that joked that female students could lose weight by taking the stairs. Those are just a few examples. The riskiest speech on campus is still religious or conservative expression or satire of the university's values. Another longstanding source of the campus chill is as old as college itself: the desire of administrators to avoid public criticism. Instances from the past few years are, again, easy to find. Several institutions, including Harvard Business School, have reprimanded student journalists for being critical of the administration. A University of Oklahoma faculty member was marginalized and relegated to a basement office, apparently for creating an "uncollegial environment" that happened to include blowing the whistle on university impropriety. At Shaw University, a professor was summarily fired for criticizing the administration. The growing bureaucratization of colleges also contributes to the chill. To avoid liability, campus policies banish speech to tiny "free-speech zones" and regulate pamphleteering, romantic relationships, and countless other aspects of academic life. Unfortunately, recent legal decisions in Massachusetts, California, and Illinois have confused what were once clearly distinct student rights and administrative duties, threatening to make matters worse. What is relatively new, however, is the public backlash against the academy. That has been provoked by comments like those of a University of New Mexico scholar who quipped on September 11, 2001, that "anyone who can bomb the Pentagon has my vote"; of a Saint Xavier University faculty member who condemned an Air Force cadet as a "disgrace"; and of a professor at Columbia University who called for "a million Mogadishus" in Iraq. And who hasn't heard of Ward Churchill, of the University of Colorado, who likened the victims of September 11 to Adolf Eichmann? The University of Colorado was absolutely correct, however, when it concluded that speech like Churchill's is fully protected. As student-rights advocates have argued for decades, free speech means nothing if it does not include the provocative, unpopular, or even offensive. Unlike other threats to campus candor, those cases have truly caught the academy's attention -- perhaps because faculty members now see their free-speech rights in question. While decrying increased public scrutiny, higher education has been hesitant to accept that it might share the blame for the erosion in public confidence. Those inside the academy may see their institutions as paragons of enlightenment, but the outside world increasingly views them as bloated corporations with frightening power over their children's future. Now that the cost of top colleges has skyrocketed to more than $40,000 a year -- close to what the median American household makes annually -- the very least students should be able to expect is that their basic rights be respected. There are certainly new and potentially serious threats to free speech presented by the Patriot Act, intellectual-property law, and dangerously vague legislative proposals. But colleges could do much to restore their credibility and prevent greater "outside interference" by confronting the free-speech problems that have plagued them for years.
The academy would do well to remember: The first step to recovery is admitting that you have a problem. Greg Lukianoff is the director of legal and public advocacy for the Foundation for Individual Rights in Education. __________________________________ http://chronicle.com/weekly/v52/i03/03b01002.htm The Uncertain Consequences of Political Pressure By MICHAEL BÉRUBÉ Is it becoming more difficult to speak openly on campus or to share information? I think so, and I fear that untenured and adjunct faculty members are the most vulnerable. In the past two years, we've seen a national campaign on the part of conservative activists to get state legislatures directly involved in academic oversight. That campaign is being conducted under the banner of "intellectual diversity," and one of its goals is to investigate instances of liberal "bias" in classrooms. Conservatives, who have become increasingly outraged at the fact that most college faculty members tend to be liberal, have promoted a couple of recent studies purporting to show that liberals actively discriminate against conservative scholars in hiring and promotion, just as we allegedly discriminate against conservative students in the classroom. How conservatives intend to combat the liberal tilt of some fields -- especially in the arts and humanities -- remains unclear, since they do not seem to be encouraging promising young conservatives to undertake graduate study in such fields. Of course, conservatives have been complaining about liberal campuses at least since the publication of William F. Buckley's God and Man at Yale, in 1951. But since September 11, 2001, liberalism has been on the defensive throughout the country, and some right-wing pundits have gone so far as to speak of liberals as traitors and enemies. Indeed, it seems to me that many of the "movement conservatives" who make up the Republican base are animated less by opposition to specific liberal beliefs -- like support for stem-cell research or affirmative action -- than by a more general opposition to pockets of political independence. Independent journalists, independent judges, independent filmmakers, independent professors -- all are anathema. So the ravings of a Ward Churchill, who compared the victims of September 11 to Nazis, are seen as emblematic of the professoriate as a whole (whereas I consider them simply the ravings of Ward Churchill). In my own state, David Horowitz has succeeded in getting the Pennsylvania House of Representatives to approve, along party lines, HR 177, a bill that creates a select subcommittee to determine, among other things, whether "students are graded based on academic merit, without regard for ideological views, and that academic freedom and the right to explore and express independent thought is available to and practiced freely by faculty and students." The subcommittee will hold hearings and conduct investigations until June 30 of next year (and possibly until November 30). An amendment to the bill provides that faculty members be given at least 48 hours' notice of any allegation against them before a hearing, and that they be allowed to testify. It's clear from the political rhetoric, however, that although the bill emphasizes providing students with an academic environment conducive to learning, the people who wrote and passed it don't seem too worried about whether African-American or gay students enjoy such an academic environment for learning.
No, they're thinking about conservative students bringing allegations against liberal professors, and they've kindly offered those liberal professors 48 hours' notice and the chance to face their accusers. The truly curious thing about the bill is that it may not wind up pitting libertarian students and fans of the free-market economist Friedrich von Hayek against leftist professors who allegedly want the state to run our lives, and it may not target professors working on race, gender, or sexuality. Instead, according to reports I've seen, the constituency that seems most pleased by HR 177 is the local religious right, some of whom see it as their best chance to get intelligent design taught in biology classes. They draw strength from Horowitz-inspired initiatives like the one in Pennsylvania, just as they are inspired by President Bush's recent endorsement of intelligent design, and they view it as a way to combat the Darwinist "bias" of the natural sciences. So it's not clear, just yet, how this attempt at legislative "oversight" will play itself out. Michael Bérubé is a professor of English at Pennsylvania State University at University Park. __________________________________ http://chronicle.com/weekly/v52/i03/03b01101.htm The Pernicious Concept of 'Balance' By ELLEN WILLIS Demands for more political "balance" on the campuses are one of the scarier developments in today's intellectual climate. David Horowitz's campaign for a misnamed academic bill of rights and the related legislative initiatives it has inspired aim not to enhance academic freedom but to discredit the university as an independent institution. "Balance" is a pernicious concept, implying as it does both that all ideas are equally valid and that they can be unproblematically defined in academe as liberal or conservative -- especially by outside observers who have only passing knowledge of what is being said or taught. Some conservatives have expressed outrage that the views of professors are at odds with the views of students, as if ideas were entitled to be represented in proportion to their popularity and students were entitled to professors who share their political or social values. One of the more important functions of college -- that it exposes young people to ideas and arguments they have not encountered at home -- is redefined as a problem. To a radical right that feels entitled to dominate not only government but all social institutions, the academy is a particular irritant: It not only allows liberals and leftists to express their views, but provides them with the opportunity to make a living, get tenure, publish books, and influence students. Indeed, the academy is inherently a liberal institution, in the sense that it is grounded in the credo of the Enlightenment: the free pursuit and dissemination of knowledge for its own sake. But the right's charge that the professoriate is dominated by liberals requires some, pardon the expression, deconstruction. For the right, "liberal" has become an epithet -- roughly equivalent to the "Godless Communist" of an earlier era -- that applies to anyone who is not a conservative Republican or a Christian fundamentalist. Most people who are attracted to academic life fit that definition for fairly obvious reasons: We prefer reading, writing, and research to business; care more about job security than the chance to get rich; and are comfortable with (secular) Enlightenment values.
The balance-mongers make much of polls purporting to reveal that most professors vote Democratic, but that says less about the liberalism of professors than about the fact that what used to be the right-wing lunatic fringe is now the Republican mainstream. As a practical matter -- no matter how much proponents of balance protest that they are merely trying to raise awareness of this issue -- redressing the "underrepresentation" of the far right in academe requires coercion: the intimidation of offending liberal professors by students or infiltrators who monitor their classes, and pressure on legislative officials, donors, and trustees to influence faculty hiring decisions and the curriculum. That said, it is equally important to acknowledge serious internal obstacles to intellectual freedom and diversity on the contemporary campus. The real political debates in academe have mainly to do not with voting behavior but with the social implications of scholarly and pedagogical methods and disciplinary paradigms. And those debates are too often settled, or stifled, by the ubiquitous tendency of academic departments to exclude or marginalize scholars whose approach diverges from prevailing orthodoxy. While conservatives talk as if that practice is confined to the academic left, in fact the disciplinary police are often profoundly conservative. Economists' exclusion of dissenters from free-market libertarian orthodoxy; psychologists' ostracism of psychoanalysts; philosophers' marginalizing of those who emphasize social and political rather than linguistic problems -- all contribute to a pervasive positivism that silences critical thinking about the present socioeconomic system. Nor is the phenomenon absent from the hard sciences: It may be harder for a camel to pass through the eye of a needle than for a biologist working on something other than the genome to get a job or a grant these days. All these pressures for conformity come at a time when the mainstream public conversation has diminishing space for serious social criticism. Trade publishers by and large refuse to publish it; leading review media tend to ignore it; fewer and fewer periodicals feature it. There is increasing disdain for the essay, the traditional vehicle for much social critique. The need to make a living has pushed more writers into the academy (whether they are really suited for it or not). Now good academic jobs are drying up as universities hire fewer tenure-track faculty members. That, too, is chilling. Ellen Willis is a professor of journalism and director of the Cultural Reporting and Criticism program at New York University. ________________________________________ The Death of John Stuart Mill By STANLEY KURTZ For conservatives it's been tough to speak openly on campus for decades. Knowing the politics of my field (anthropology), and mindful of Stanley Fish's 1990 call to bar some members of the National Association of Scholars from curriculum and tenure committees at Duke University, for years I avoided joining the association. When I finally threw caution to the wind (still carefully directing mailings to my home, not college), I discovered that my clever association chapter in Boston used envelopes with no external identification. That reminded me of how, during the McCarthy era, my father had to get his subscription to the leftist I.F. Stone's Weekly delivered to an empty apartment. Consider postcolonial studies, one of the most influential paradigms in today's academy. 
Begun by the late Edward Said, ostensibly as a theory-inflected political analysis of colonialism, postcolonial studies in effect has introduced a new form of blacklist. Said attacked numerous prominent scholars and intellectuals as anti-Muslim bigots in league with "the Zionist lobby." Chastising them for imposing the colonial stereotypes of "Orientalism," he found them guilty of daring to note connections between some strains of contemporary Islam and terrorism. His followers have used the label to tar their opponents, thus enabling a takeover of substantial parts of the academy, particularly in the humanities and social sciences. Nowadays, in Middle East studies, postcolonialists are everywhere, while the generation of successors to scholars whom Said attacked, like Bernard Lewis, has been lost. Further, entire subfields are defined by their politics. In anthropology, it is typical to see job listings calling for a specialist in "critical-race theory," "medical justice," "critical-medical anthropology," "gender and social justice," "postcolonial studies," or just plain "critical theory." All those are open code for someone on the left. Leftist professors treat mere calls for balance as suppression of speech, usually saying they are defending liberal principles. Yet many of those same professors junked John Stuart Mill's classic defense of the marketplace of ideas -- the need for multiple and clashing intellectual perspectives -- long ago. Like Said, they follow Michel Foucault in dismissing the call for intellectual balance as a ruse of power. Such scholars rationalize their near-total dominance of the academy by picturing it as the last beleaguered redoubt of an embattled left. After all, Republicans control the White House, Congress, and soon perhaps the courts, they reason. So why can't we control the academy? With only narrow Republican majorities in our political system, each party is compelled to debate and compromise with the other. How is that a justification for an academy where you can sooner find a military recruiter on campus than real debate? The erstwhile campus marketplace of ideas has been bought out by a monopoly. Mill is dead, and the professors killed him. And now that students and the public have complained -- now that the problem has been named by continuing complaints in the blogosphere, empirical studies of faculty bias, and student protests at Columbia and other universities -- the academy cries foul. Those who for years have trashed liberalism appeal to it -- as if their hiring practices and intellectual manners embodied a sort of Millian paradise. Too late. Liberalism now lies with their critics. Stanley Kurtz is a fellow at the Hoover Institution and a contributing editor at the National Review Online. ________________________________________ http://chronicle.com/weekly/v52/i03/03b01202.htm Religious, Philosophical, and Socioeconomic Diversity By CAROL M. SWAIN I have found myself an outsider in a place that values conformity. What makes me an outsider are my roots in the lower class, my strong Christian faith, and my race. The chill I feel on campus is that of an accomplished woman who, more often than not, finds herself devalued. Navigating a campus is difficult if your path, like mine, is nontraditional. When I first entered college as a high-school dropout with a GED, I encountered professors who warned me that I would not perform as well as other students. I defied their expectations.
Now, as a professor who has five degrees and several prizes under my belt, I find myself an outsider for new reasons. As a born-again Christian since 1999, I have encountered overt and subtle forms of intimidation. Often this takes the form of openly disparaging remarks made by colleagues about the intelligence of believers. There is hostility directed against anyone who refuses to conform to the prevailing ethos of his or her institution and to the secularized liberal elites who decide who and what has value. I have watched helplessly as bright, conservative students are victimized again and again by faculty members who use the power of grading to push them toward conformity. Those students who fight back usually end up with reduced grade-point averages and fewer opportunities to matriculate at elite professional institutions. I believe institutions of higher learning can, and should, do better. Many operate in ways that reveal no real desire for diversity or inclusion beyond the visible differences of gender and race. They have little interest in diversifying their faculties in terms of political philosophy or religious beliefs. Instead, the elite institutions, with which I am most familiar, have seemingly decided that they are in sole possession of the intellectual knowledge, values, and insights needed to train future leaders -- and that such knowledge is secular and material. Never mind that the great universities of our nation, from Harvard on down, were in most cases founded by God-fearing men and women with different perspectives from today's. Institutional leaders should urge faculty and staff members to reject ideological conformism, and they should honor forms of diversity now neglected, including religious, philosophical, and socioeconomic diversity. If universities are to be true to their educational missions, they must cease and desist from their tendencies to exclude. Alas, the recent high-profile focus by activists such as David Horowitz on this longstanding issue is long overdue. Carol M. Swain is a professor of political science and law at Vanderbilt University and founding director of the Veritas Institute for Racial Justice and Reconciliation. _________________________________________________ http://chronicle.com/weekly/v52/i03/03b01301.htm Academic Freedom or Government Intrusion By AMY GUTMANN In preparing students for lifelong learning and democratic citizenship, today's great universities are more open than ever to intellectual diversity. Students learn to cross traditional disciplinary boundaries when they examine issues like AIDS and global terrorism from the multiple perspectives and with the methodologies that faculty members bring to the classroom. Why, then, the perception of a chill among people who complain that certain views are not allowed full expression on our campuses? Perhaps it derives from the fact that universities are considering some of the most controversial issues of our time, like the ethics of stem-cell research and the future of the Middle East. Moreover, we are living at a time when the right and left are quick to seize upon flash points -- a single course, a controversial article, an isolated incident -- to take the full measure of a faculty member or a university campus. More broadly speaking, it is easy to forget that American colleges and universities derive their greatness not by echoing the conventional views of society, carrying the partisan banner of governments, or giving aid and comfort to purveyors of prejudices. 
Rather, they do so by protecting the freedom of professors and students to read widely and explore topics in all their complexity, to think critically and debate issues where there are grounds for reasonable disagreement, and to imagine and express new ideas and new worlds without fear of reprisal or retribution. Many of the most powerful critiques of society, along with compelling solutions to the world's seemingly intractable problems, have issued from university scholars and students. Is there, then, a problem? If so, how should we rectify it? Not by outside regulation, as some critics urge. Guided by established procedures of self-governance, universities must be steadfast in their commitment to the principles of academic freedom -- which is not a license to suppress student dissent or engage in partisan proselytizing in the classroom. Upholding academic freedom does require universities to furnish a safe haven for free inquiry and discussion. And it recommends that we provide a respectful hearing to all debatable opinions and to external criticisms of the academy, rather than dismiss those who question us as "barbarians at the gates," against whom we must close ranks. We must also make a better effort to describe the nature of faculty-student interactions. We should begin by explaining that we teach young people both to think critically and to support their arguments with reasons, regardless of which way the political winds are blowing on the campus or off. Students in any class may not feel comfortable being challenged by a viewpoint with which they strongly disagree. But neither should they ever feel inhibited or afraid to disagree with their professors. For two decades, I taught a course on ethics and public policy that dealt with the controversial topics of our time, such as terrorism, abortion, affirmative action, and bioethics. My students knew that agreeing with me on a given issue would have no bearing on how I treated or graded them. Those who brought solid evidence and original thinking to bear on their arguments, and who responded effectively to the strongest counterarguments, earned the highest grades. For their part, instead of making their case through reasoned arguments in academic forums, some critics of higher education are promoting legislation to regulate professors. In doing so, they are violating the spirit of academic freedom and threatening to poison the collegial atmosphere of robust and respectful debate that has enabled American universities to contribute so much to our democracy. By demonstrating our steadfast commitment to protecting the freedom of faculty members and students to engage in vigorous discourse across the political spectrum without government interference, we can prevent the threat of a chill from becoming a devastating frost. Amy Gutmann is president of the University of Pennsylvania. From checker at panix.com Sat Sep 10 02:06:35 2005 From: checker at panix.com (Premise Checker) Date: Fri, 9 Sep 2005 22:06:35 -0400 (EDT) Subject: [Paleopsych] Sigma Xi: Brain-Based Values Message-ID: Brain-Based Values http://www.americanscientist.org/template/BookReviewTypeDetail/assetid/44445 [26]Patricia S. Churchland The Ethical Brain. Michael S. Gazzaniga. xx + 201 pp. Dana Press, 2005. Envision this scene: Socrates sits in prison, calmly awaiting execution, passing the time in philosophical discussions with students and friends, taking the occasion to inquire into the fundamentals of ethics: Where do moral laws come from? 
What is the root of moral motivation? What is the relation between power and morality? What is good? What is just? Ever modest, Socrates confesses ignorance of the answers. The pattern of questioning strongly hints, however, that whatever it is that makes something good or just is rooted in the nature of humans and the society we make, not in the nature of the gods we invent. This does not make moral rules mere conventions, like using a fork or covering one's breasts. There is something about the facts concerning human needs that entails that some laws are better than others. From the time of Socrates to the present, people have sought to give a natural basis for morals--that is, to understand how a moral statement about what ought to be done can rest on hard facts, albeit facts about conditions for civility and peace in social groups. How can ethical claims be more than mere conventions? How can such claims be rooted in facts about human nature but have the logical force of a command? Developments in evolutionary biology have helped to explain the appearance of moral motivation in humans and in other eusocial animals--animals that display behavior involving cooperation, sharing, division of labor, reciprocation and deception. In these species, various forms of punishment (shunning, biting, banishing, scolding) are visited on those who threaten the social norms. Ethological studies help us appreciate that, at a basic level, human social behavior has much in common with that of other species. Developments in neuroscience hold out the promise of extending the naturalistic perspective to aid in the understanding of how the brain and its circuitry underlie the capacity to learn social norms and to behave in accordance with them. Many of us ponder the possibility that discoveries about brain function and organization will challenge the conventional wisdom on which our system of justice relies and will allow us to see more deeply into the biology of social behavior, including moral behavior. In his new book, The Ethical Brain, Michael S. Gazzaniga takes an unflinching look at the interface between neuroscience and ethics, and offers his own thoughtful perspective on some of the tough questions. As a graduate student at Caltech, Gazzaniga studied under one of the towering figures of neuroscience, Roger Sperry, whose lab pioneered research into the cognitive effects of cutting the fibers connecting the two cerebral hemispheres (a procedure used to treat intractable epilepsy). Ingenious testing of these so-called "split brain" patients revealed that their two brain hemispheres operated independently, each hemisphere acting almost like a distinct person. These were profoundly important results, both for philosophy and for neuroscience. Gazzaniga went on to explore the neurobiology of higher mental functions--attention, memory, choice, consciousness--more generally, always with a philosophical question biting his heels. He currently serves on the President's Council on Bioethics. Thus it is especially fitting that he should now pen his thoughts on neuroethics. The most fundamental neuroethical issue concerns free will and responsibility. The mind is what the brain does, and the brain is a causal machine. Consequently, deliberations, beliefs, decisions and ensuing behavior are the outcome of causal processes. Typically, the causal processes leading to awareness of a decision are nonconscious. 
The "user illusion," nevertheless, is that a decision is created independently of neuronal causes, by one's very own "act of will." Some philosophers--usually called libertarians--resolutely believe that voluntary decisions actually are created by the will, free of causal antecedents. Like flat-earthers and creationists, libertarians glorify their scientific naivet? by labeling it transcendental insight. Gazzaniga, like many a philosopher, realizes that it would make a mockery of the criminal justice system if the accused could escape punishment simply by pleading that the brain is a causal machine and hence he or she lacked free will. So when and how ought we to hold people responsible for their behavior? Gazzaniga's answer has two components: First, he claims that we hold a person responsible, causality notwithstanding, so long as his or her behavior was unconstrained--so long as the person could have done otherwise. Second, Gazzaniga identifies responsibility as a social, not a neurobiological, property. His point is that our institutions for assigning responsibility derive from the need to maintain and protect civil society, which must figure out suitable criteria for when and how to punish those who violate the rules. Gazzaniga sums up his solution to the problem of free will by saying that "the brain is determined, but the person is free." The logic of this brain/person duality is not particularly compelling, or even coherent, yet as Gazzaniga's writing implies, it may be in our collective interest to live by this dualistic legal fiction. The obvious test of the "let's pretend" solution is to see whether it can specify relevant criteria for distinguishing between those who could have done otherwise and those who could not have, and between those cases in which mens rea (literally, a guilty mind) obtains and those in which it does not. (Mens rea is a criminal law concept requiring proof that the mental state of the accused was such that he or she committed the crime purposely, knowingly, recklessly or negligently; strict liability, in which state of mind has no relevance, is fairly rare in criminal law.) Here, however, the wheels fall off Gazzaniga's solution. Worried that ever-cunning defense attorneys will try to extract more exculpatory mileage out of neuroscience than the facts can support, Gazzaniga magnifies the incompatibility of responsibility as applied to persons and the causality that governs functions of a person's brain. He says, "The issue of responsibility . . . is a social choice. In neuroscientific terms, no person is more or less responsible than any other for actions." This implies that there are no relevant factual differences between someone with, say, obsessive-compulsive disorder and someone who can resist impulses. Can this conclusion be right? As the British neuroscientist Steve Rose has pointed out, badness, just as much as madness, involves the brain. The flaw in Gazzaniga's argument is that although responsibility is assessed in a social context, the capacity to learn social norms and the capacity to act in accordance with them are matters of individual brain function. It is precisely because an important difference exists between a normal brain and the brain of someone who is seriously demented or unreachably deluded that such people are not considered responsible for crimes they might commit. Moreover, judicial institutions rely on threat of punishment to deter. 
The late maturation of the prefrontal cortex (with reference to neuronal density, synaptic density, dendritic length and myelination) means that the brains of mature adults are critically different from those of young children--which almost certainly accounts for the child's more modest ability to appreciate the consequences of his or her choices and to resist temptation. Satisfied that the brain/person duality is workable, Gazzaniga pushes the hypothesis further. He says that because assignment of responsibility is a social matter, not a matter of fact about the brain, neuroscience cannot possibly "settle" whether a person is responsible. Granted, determining legal responsibility is complicated, and neuroscientific knowledge cannot be substituted for knowledge of the law and of community standards. What kicks up sand, however, is the unfortunate choice of the word settle. Neuroscientific evidence can surely be relevant, even if the disposition of the case is settled by members of a jury whose brains follow some form of constraint-satisfaction algorithm. Yet Gazzaniga resolutely insists upon the stronger point: Neuroscientific data are not even relevant. Why not? His reasoning goes like this: As a group, schizophrenics, for example, are no more prone to violence than individuals in the general population. Ditto, he says, for people with prefrontal lesions. If a given schizophrenic, Mr. Jones, kills someone, it is mere theater to display his brain scans in court, picking out some abnormality or other as "the cause" of his homicidal behavior. There are no relevant differences that neuroscience knows about that can explain why Jones killed, but Smith (also schizophrenic) did not. Not everyone with low glucose levels engages in violence; not all citizens raised in an inner-city hell become drug dealers; not all premenstrual women beat their children. We can assume there are differences in the brain, but whatever these differences happen to be, they are not, he believes, relevant to determination of responsibility. Why? Because there is no "responsibility" area whose functionality can be examined through a scanner or with electrodes--not now, not ever. Responsibility is a social construct, not a brain function. This point, he believes, holds generally--for schizophrenics, for patients with prefrontal cortex lesions, and so forth. And for good measure, he suggests that the insanity defense itself is too imprecise and problematic to be of practical value. It is widely expected that neuroscience has, or soon will have, something to say about competence to stand trial, about whether the mens rea condition has been met and about appropriate sentencing. Thus Gazzaniga's bold thesis raises important concerns. I share his worry that defense attorneys and hired experts from neuroscience may get out in front of what current science can honestly say--it's bad enough that venal psychiatrists have sown wholesale distrust of their discipline by selling their "expertise" to the highest bidder. On the other hand, perhaps Gazzaniga overstates the case. Consider the Virginia man who at around age 40 became obsessed with child pornography and eventually molested his eight-year-old stepdaughter. He had no previous history of pedophilic inclinations, and his interest in child pornography completely disappeared with the surgical removal of a tumor of the frontolimbic system, which had invaded the hypothalamic area of his brain. Along with other appetites, sexual drive is regulated in the hypothalamus. 
Some months later, when the tumor grew back, his preoccupation with pornography returned, only to vanish again with repeat surgery. Because the waxing and waning of his sexual compulsions corresponded to the waxing and waning of the tumor, his was not a standard molestation case. So long as his limbic structures are tumor-free, it seems rather pointless to punish him for a pornographic pursuit that was alien to his character. Punishment would not make sense either as deterrence or as retribution. Consider a more complicated discovery. In a landmark longitudinal study in New Zealand that followed the lives of about 500 men from infancy to about age 26, a significant subpopulation showed a strong and unmodifiable disposition to engage in antisocial behavior, including irrational and self-destructive violence. Genetic analysis revealed that most of the men in that subpopulation carried a mutation for a particular enzyme, monoamine oxidase A (MAOA). The enzyme metabolizes three neuromodulators (serotonin, norepinephrine and dopamine, all of which are relatively concentrated in prefrontal areas of cortex), thereby inactivating them. Environment was also a factor: In the group with the MAOA mutation, the criteria for adolescent conduct disorder (a measure of antisocial behavior) were met in about 85 percent of those who had been severely maltreated as children, in about 38 percent of those who had probably been maltreated and in only about 22 percent of those who had not been maltreated. Among those who did not carry the MAOA mutation but had been severely maltreated, only about 42 percent had the conduct disorder. These findings are preliminary, and further research is needed on the exact nature of the effect of early maltreatment on the circuitry affected by low MAOA levels. Still, on the face of it, the capacity of maltreated children with the MAOA mutation to acquire and act on social norms appears to be diminished. If Gazzaniga is right, however, these data are irrelevant to determining responsibility. The fact that the men are irrationally violent means that society needs protection from them--fair enough. Even so, it is important to distinguish between custody and punishment. Why? For the sake of the integrity of the institution of justice, because as a social institution, the criminal sanction depends on broad social support to keep functioning properly. When the criminal sanction is applied to cases that violate common beliefs about fairness--to young children, for example--support is replaced by resistance and reform. In order to be broadly accepted, the legal fiction that the brain is determined but the person is free will have to make peace with the widespread conviction that because of brain abnormalities, we are not all equally masters of our fate. On other bioethical issues, Gazzaniga is just as forthright. The book begins with a discussion of the medical use of embryonic tissue and the debate over whether a blastocyst (which is a ball of a few hundred cells) is a person. This section is thoughtful, clearheaded and informed by developmental neuroscience. One fallacy Gazzaniga exposes depends on the common idea that graded differences block principled legal distinctions. In the version referred to as the fallacy of the beard, the logic goes like this: If we cannot say how long a man's whiskers must be to qualify as a beard, we cannot distinguish between a bearded man and a clean-shaven one.
Although this form of argument fools nobody on the topic of beards, it has been seductively employed elsewhere, especially regarding embryos. Criticizing the blastocyst-as-baby argument, Gazzaniga sensibly points out that we can draw a reasonable, if imperfect, line. When a distinction is needed, we devise laws that draw one, typically erring on the side of caution, given prevailing community attitudes. There is no precise moment at which a child becomes an adult, or a blastocyst becomes a sentient person, but reasonable humans unencumbered by superstition can nonetheless come together to "draw a line," and we can redraw the line when the facts merit a revision. Eighteen as the age of majority is not the perfect line for all adolescents, but on the whole it works well enough. Gazzaniga also presents an eloquent defense of personal choice in end-of-life matters, while recognizing that there are bound to be fundamental differences across people regarding euthanasia. Most people understand the concept of brain death and see the wisdom in equating death with brain death. In large part, this acceptability may be owed to personal experiences concerning the remarkable benefits conferred by organ harvesting. Other topics covered, if not fully, then sufficiently well to provoke thought, concern the neurobiological and evolutionary explanations of religious beliefs, in all their amazing variety and conflicting manifestations. Gazzaniga discusses also the remarkable nature of autobiographical memory, and the susceptibility of memory to suggestions, reconstruction, invention and wholesale confabulation. Because it is brief, compelling and free of technical jargon, the whole book can be easily read during a transcontinental flight. At a time when intellectuals may feel cowed by the heavy hand of the fervently religious, it is a relief to see that Gazzaniga neither shies away from controversial opinions nor waters them down so as to offend nobody. At the same time, he is respectful of moral convictions that do not line up with his own. His opinions are delivered not as dogma but as part of an ongoing reflection and conversation, in which seeing all sides of a moral problem is itself regarded as a moral achievement. Reviewer Information Patricia Smith Churchland is University of California President's Professor of Philosophy and chair of the Department of Philosophy at the University of California, San Diego. She is the author most recently of Brain-Wise: Studies in Neurophilosophy (2002). [32]A letter on Patricia Churchland's use of the term "libertarian" in a review of The Ethical Brain, and a reply from Churchland References 26. http://www.americanscientist.org/template/AuthorDetail/authorid/1501;jsessionid=aaadH7i1yL4II9 32. 
http://www.americanscientist.org/template/BookshelfLetterTypeDetail/assetid/46005;jsessionid=aaadH7i1yL4II9 From checker at panix.com Sat Sep 10 02:06:51 2005 From: checker at panix.com (Premise Checker) Date: Fri, 9 Sep 2005 22:06:51 -0400 (EDT) Subject: [Paleopsych] Sigma Xi: A letter on Patricia Churchland's use of the term "libertarian" in a review of The Ethical Brain , and a reply from Churchland Message-ID: A letter on Patricia Churchland's use of the term "libertarian" in a review of The Ethical Brain , and a reply from Churchland http://www.americanscientist.org/template/BookshelfLetterTypeDetail/assetid/46005;jsessionid=aaadH7i1yL4II9 Letter to the Bookshelf A letter on Patricia Churchland's use of the term "libertarian" in a review of The Ethical Brain, and a reply from Churchland July 26, 2005 To the editor: In reading through the book review "Brain-Based Values," by Patricia Churchland (July-August 2005), I was astonished to see that she has named a subset of philosophers, who believe that voluntary decisions are created only by the will, "libertarians." She then likened these so-called libertarians to naive flat-earthers and creationists. I have no idea to whom she is referring, but as a lifelong libertarian and a 30-year member of the Libertarian Party, I would like to set the record straight. Libertarianism is a social, political and economic philosophy grounded in the liberal political ideas that emerged in the 18th and 19th centuries and upon which the founders of our nation based much of the design of our system of government. In essence, we believe that the use of government force should be strictly limited to its legitimate role in the justice system and national defense and that individuals should be given maximum choice in their actions as well as responsibility for those actions. In the economic sphere, our beliefs are best expressed in the works of Friedrich August von Hayek, Ludwig von Mises and Milton Friedman. And, while he denies the label libertarian, Robert Nozick in his Anarchy, State, and Utopia expresses the libertarian philosophy quite thoroughly. This is not exactly a naive group. I highly recommend them to your readers, and particularly to Ms. Churchland. It has been my experience that, in general, libertarians are more sophisticated than average, both politically and scientifically. Whoever this group of philosophers are, they most assuredly are not libertarians. Malcolm Johnson Lapeer, Michigan Patricia Churchland replies: Thank you for your note. I do apologize for the confusion. In the context of free will discussions, philosophers use the term libertarian as I specified. I do think this is a most unfortunate use, however, and for exactly the reasons you point out. By using it that way, philosophers have created an ambiguity, to no good purpose. I should have inserted a comment to clarify that the philosopher's sense of libertarian as used in free will discussions has NO relation to that term as used in a political context. As it happens, I share many of the views of libertarians (political, not free will) and have been a great admirer of John Stuart Mill for as many years as I have been studying philosophy. 
Best wishes, Patricia Smith Churchland, Chair UC President's Professor of Philosophy University of California San Diego La Jolla CA 92093 From checker at panix.com Sun Sep 11 22:15:54 2005 From: checker at panix.com (Premise Checker) Date: Sun, 11 Sep 2005 18:15:54 -0400 (EDT) Subject: [Paleopsych] NYT: One Last Recipe From Mother, for the Good Death Message-ID: One Last Recipe From Mother, for the Good Death http://www.nytimes.com/2005/08/30/health/30case.html By LARRY ZAROFF, M.D. My mother was liberated when she was 80. She and my father had been married some 60 years when he died. Up until then - Eastern European, patriarchal - he controlled her life and everyone else's that he could. Her space was small, limited. She, a good wife, tolerated, had a capacity for hard work and adversity. Now she was free at last, and she knew it. As if she had made up a list of what she really wanted to do all her married years, she leapt into a new life. She took over the real estate business, better at the work than he was. The company prospered and so did she. In her 80's, she passed her driver's test. She picked up the phone, called everyone, soon becoming the mainframe, a communication catalyst and the database for her extensive family and friends of all ages. Everyone who knew my mother wanted to stay in touch. She had an endless supply of common sense, and sound advice balanced by love. And she traveled from her home near Washington to visit her grandchildren and her great-grandchildren throughout the country. At 92, still living alone in her house, she flew to California to see her newest great-grandchildren and to teach my son the secrets of her famous Jewish recipes. I remember the seminar: she at the head of the stove, turning the potato latkes, while Jonathan took notes and videotaped the lesson. She then masterfully went on to explain the nuances of her kreplach, sweet and sour cabbage and, Jon's favorite, chopped liver. The class lasted two hours during which she did not tire, perhaps knowing that the documentation was important, a legacy. She returned to Washington and two weeks later survived a devastating heart attack, destroying enough of her heart muscle so that she was restricted to bed. My sister had her transferred to a rehabilitation center near her home in Baltimore. She did not do well there. When I called to suggest that she be readmitted to the hospital for intensive [3]therapy, she refused. When I encouraged her to eat, she remarked, "I have eaten enough." I said that my wife and I were coming to see her. "Wait a few days, don't make two trips" was her response. I disagreed, "We are coming now." Arriving in Baltimore late that night, we immediately went to the rehabilitation center. My mother moved in and out of a coma, but without a doubt she spotted me. Shortly afterward, she died. Quietly without pain. Now, knowing her well - not as well as she knew me - I am certain she did not want us to make a second trip. For her to cause her family any inconvenience was out of the question. Her plan was to die as soon as she saw us, goodbye and funeral in one package. We doctors are taught to cure, to heal, when possible to restore patients to a full and active life in society. We are also taught, if we cannot establish health, to allow patients a good death. But we pay little attention to what dying patients owe their loved ones. Leo Tolstoy understood this. 
In his novella "The Death of Ivan Ilyich," the protagonist, dying slowly, makes life miserable - complaining, criticizing, screaming - for his family until the last day, when he realizes that they love him. He then understands what he owes his wife and children: a good death. In the end he dies quietly, blissfully, a good death for him and his family. From checker at panix.com Sun Sep 11 22:16:01 2005 From: checker at panix.com (Premise Checker) Date: Sun, 11 Sep 2005 18:16:01 -0400 (EDT) Subject: [Paleopsych] NYT: The Senses: Do You Hear What I Hear? Well, Maybe Not Message-ID: The Senses: Do You Hear What I Hear? Well, Maybe Not http://www.nytimes.com/2005/08/30/health/30head.html By ERIC NAGOURNEY People who are tone [3]deaf - not your run-of-the-mill bad shower singers but those who truly cannot hear or produce musical tones - are actually processing the sounds differently in their brains, researchers reported yesterday. The researchers, writing online in Annals of Neurology, said they found the problem in the right side of the brain. The study was led by Isabelle Peretz of the University of Montreal. The researchers, using an EEG to measure brain activity, said they could instantly detect an abnormal response when a tone-deaf person heard a note. Tone-deafness, formally known as amusia, may occur in as much of 4 percent of the population, the study said. A person can be born tone deaf or develop the problem as a result of injury or illness. Amusia is related to speech and reading disorders like [4]dyslexia and dysphasia. A better understanding of it may help doctors devise treatments for people with the other problems, the researchers said. For the study, 8 tone-deaf adults and 10 others were connected to an EEG and asked to listen to a series of musical tones. Half the time, one of the notes was pitched up or down. The volunteers were asked to say when they heard a change. The study found that the brains of amusic volunteers did not respond to small changes in pitch that caused changes among the other volunteers. When the pitch changes were bigger, the study found, the amusic brains "overreacted." The researchers said more study was needed to narrow down where in the brain the problem was taking place. References 3. http://topics.nytimes.com/top/news/health/diseasesconditionsandhealthtopics/deafness/index.html?inline=nyt-classifier 4. http://topics.nytimes.com/top/news/health/diseasesconditionsandhealthtopics/dyslexia/index.html?inline=nyt-classifier From checker at panix.com Sun Sep 11 22:16:17 2005 From: checker at panix.com (Premise Checker) Date: Sun, 11 Sep 2005 18:16:17 -0400 (EDT) Subject: [Paleopsych] THES: IQ claim will fuel gender row Message-ID: IQ claim will fuel gender row http://www.thes.co.uk/current_edition/story.aspx?story_id=2024132&window_type=print Thes Higher Educational Supplement Phil Baty Published: 26 August 2005 Men are more likely to win Nobel prizes and other academic distinctions because they are more intelligent than women, according to a study to be published in a leading research journal. In a paper that will reignite the academic row over genetic differences between men and women, two prominent psychologists will argue in the British Journal of Psychology that men have larger brains and higher IQs than women, which makes them better suited to "tasks of high complexity". The paper's lead researcher is Paul Irwing, senior lecturer in psychology at Manchester University. 
The co-author is Richard Lynn, emeritus professor of psychology at Ulster University, who has caused outrage in the past with claims that white people are more intelligent than black people. Dr Irwing said: "My politics are rather different from Richard's and I would prefer it if we were wrong." But he said that he had resolved to put "scientific truth" above his personal political conflicts and potentially even his reputation. The researchers concluded that there was a very strong case that men not only have larger brains but have a higher IQ than women, by about five points. The paper will argue that while many academics have denied any IQ difference, those who acknowledge a difference argue that it is too small to be significant, or "not worth speaking of". Dr Irwing said: "We do not think that a five-IQ-point difference can be so easily dismissed." He said that the difference meant that there were a much higher proportion of men with higher IQs. There are 3 men to each woman with an IQ above 130 and 5.5 men for each woman with an IQ above 145, according to Dr Irwing. "These different proportions of men and women with high IQ scores may go some way to explaining the greater numbers of men achieving distinctions of various kinds, such as chess grandmasters, Fields medallists for mathematics, Nobel prizewinners and the like," he said. The researchers acknowledge that women outnumber men at nearly every level of educational achievement, PhD level being the sole exception. The paper will argue that there is evidence that at the same level of IQ, women are able to "achieve more" than men, "possibly because they are more conscientious and better adapted to sustained periods of hard work". The paper also cites a previous study that concluded that IQs in the region of 125 are adequate to "ascend to all levels in the labour market". "The small male advantage (in IQ) is therefore, likely to be of most significance for tasks of high complexity, such as complex problem solving in mathematics, engineering and physics," Dr Irwing said. The research is bound to cause a furious backlash. Professor Lynn had already caused controversy after claiming in a letter to this month's The Psychologist that Cambridge University psychopathologist Simon Baron-Cohen had "reached the same conclusion" as he had on gender. This week, Professor Baron-Cohen said: "I wish Professor Lynn had read the relevant section of my book, The Essential Difference, which concludes that overall intelligence is not better in one sex or the other." phil.baty at thes.co.uk From checker at panix.com Sun Sep 11 22:16:30 2005 From: checker at panix.com (Premise Checker) Date: Sun, 11 Sep 2005 18:16:30 -0400 (EDT) Subject: [Paleopsych] BBC: 'Men cleverer than women' claim Message-ID: 'Men cleverer than women' claim http://news.bbc.co.uk/go/pr/fr/-/1/hi/education/4183166.stm Published: 2005/08/25 09:57:24 GMT [e-mails to the BBC included.] Academics in the UK claim their research shows that men are more intelligent than women. A study to be published later this year in the British Journal of Psychology says that men are on average five points ahead on IQ tests. Paul Irwing and Professor Richard Lynn claim the difference grows when the highest IQ levels are considered. Their research was based on IQ tests given to 80,000 people and a further study of 20,000 students. 
'Widening gap' Dr Irwing, a senior lecturer in organisational psychology at Manchester University, told the Today programme on BBC Radio Four the study showed that, up to the age of 14, there was no difference between the IQs of boys and girls. "But beyond that age and into adulthood there is a difference of five points, which is small but it can have important implications," he said. "This is against a background of women dramatically overtaking men in educational attainment and making very rapid advances in terms of occupational achievement." The academics used a test which is said to measure "general cognitive ability" - spatial and verbal ability. As intelligence scores among the study group rose, the academics say they found a widening gap between the sexes. There were twice as many men with IQ scores of 125, for example, a level said to correspond with people getting first-class degrees. At scores of 155, associated with genius, there were 5.5 men for every woman. Nobel prize-winners Dr Irwing told The Times the differences "may go some way to explaining the greater numbers of men achieving distinctions of various kinds, such as chess grandmasters, Fields medallists for mathematics, Nobel prize-winners and the like". The paper will argue that there is evidence that at the same level of IQ, women are able to achieve more than men "possibly because they are more conscientious and better adapted to sustained periods of hard work". Earlier this year, the president of Harvard University, Lawrence Summers, sparked controversy when he suggested at a seminar that one reason men outperformed women in maths and science was genetics. Several guests walked out of the conference after hearing the comments. Dr Summers, who has apologised repeatedly for his remarks, said later that the shortage of senior female academics was partly caused by child-minding duties, which restricted working hours. What is your reaction to this research? Are men more intelligent than women? Send us your comments using the form below. My reaction, coming from a family with a tradition of women who achieve very highly in maths and sciences, is weary disgust. Yet again, what is intelligence? Who is defining it? Have these researchers looked at IQ levels below the average, at gender differentials among prison inmates? Let's have these included for balance, please. Julia Blincoe, Southampton, England All this discussion is fairly irrelevant. Men and women have different and also some similar skills but we are all genetically programmed for survival, together. Basically we need teamwork and to be able to work to each other's strengths and minimise our collective weaknesses in order to make any progress in future. Divisive talk about who is better than who is pointless and smacks of political correctness. Richard, Worksop I think that this study is probably true in a lot of cases, but this is because young girls change their ideals from learning. They start to have maternal thoughts of children and emotional attachment to partners. Therefore they neglect high learning for their natural development of nurturing. In general though I think women are equal to men, but in different roles. Darrell Beck, Jacksonville, Florida Modern IQ tests are no longer biased at all. They have been re-designed to be taken by anyone in the world, with any kind of education (or no education). Before the tests are rubbished, maybe we can establish if they are of the modern variety? 
I for one am getting tired of the media continually men-bashing and portraying men as incapable. It's nice to have some evidence to the contrary once in a while. Nigel, UK The only thing IQ tests prove is how good you are at doing IQ tests. Matthew, Cheshire, UK Let's not ignore the fact that researchers believe about 20-25 IQ points are influenced by environmental factors. And the fact that test scores are adjusted for gender anyway as males tend to score higher on some factors and females on others. This is not a pure measure of intelligence, but a human-devised Western (and usually male and white) instrument. Flo, Malvern, England I do not believe, on average, that men are more intelligent than women. I'm convinced we often find more men at the extremes like in academia or indeed in the work place simply because we still live in a male-driven society. Women think differently from men, that I do agree with, but more intelligent? From my 'empirical analysis' I find this unlikely. Jason Robinson, Cambridge To throw in another possible factor, remember also the competitive aspect of IQ tests: the average man is possibly more likely to treat a measurement of his mental capacity as a chance to prove himself; the average woman may not push herself as hard as she does not consider the result quite so important. Anne, London, UK I scored relatively high in an IQ test when I was a child. Since then I have done many many many very very very stupid things in my life. I still wonder what that test has to do with intelligence or understanding at all. Alex, Wien, Austria I'm surprised that an academic journal is even considering this publication. A 'scientific' study that only takes into account one measure of intelligence that is well known to be biased towards white European males really shouldn't be taken seriously. I suspect the editor of the journal is male. Maria, Sheffield It really does amuse me that some men need to keep creating these tests to prove to themselves that they are more capable than women. I don't read about a rush of women psychologists doing the same thing. Maybe the women know the truth anyway or maybe they just don't care. Hazel, Sheffield I hope this taken for exactly what it is. A scientific study. Most of these things have little or no bearing on everyday life for most of us. However, as a man, it is nice to hear something positive about us for once. Nick Spiers, London I can easily see this as being true. However, it would be interesting to also look at the bottom IQ levels and see which sex has more at that level before making any judgements. Given that that sections of the media are so keen on denigrating men, and the advertising industry is so addicted to portraying men as buffoons and women as intelligent, perhaps this might re-adjust the balance a little. I find that although many of the women I've known are more socially intelligent, their general knowledge has always been abysmal, hence this being no surprise. Huw Morgan, Cardiff, UK I suspect the tests were formulated to play to men's strengths. Perhaps the tests were even set by men. IQ tests have long been recognised as skewed towards white men of European origin, why do we continue to pay attention to them? IQ tests still don't measure the different ways that intelligence can manifest itself, and until they do, they will continue to provide fodder to those who seek to re-establish man's 'superiority' over women. Roanne, Derby, UK I don't think men are more intelligent than women on average. 
However, from personal experience I would say that the distribution of intelligence in men is more extreme, that is to say, there are more exceptionally clever men than women, but there are also more exceptionally stupid men than women. Robin, Oxford, IKL It has long been accepted that IQ tests are gender-biased: they are designed by men to test 'male intelligence', such as spatial awareness. They simply do not cover all aspects of intelligence. Therefore it is no surprise that a test designed by men, and a study carried out by men, has found that men are 'more intelligent' than women. Jenny, London If your report is accurate, what this study actually shows is that men are better at IQ tests than women. This is not (necessarily) the same as saying men are cleverer than women. That would require rather more criteria than just an IQ test. Phil Evans, Keele, UK I have the impression that society allows men to develop skills in a focussed way, with less time reserved for repetitive care tasks. IQ can be improved in this way. It is not set and fixed at birth. If men hone skills at the expense of good housekeeping or social responsibilities, perhaps they are granted the time to develop the extra five points where women spend more time looking after house/kids/husband/parents/friends. Marjoline, The Hague, Holland From checker at panix.com Sun Sep 11 22:16:41 2005 From: checker at panix.com (Premise Checker) Date: Sun, 11 Sep 2005 18:16:41 -0400 (EDT) Subject: [Paleopsych] BBC: Women cleverer than men, says MP Message-ID: Women cleverer than men, says MP http://news.bbc.co.uk/1/hi/education/4079653.stm [Note the date. Mr. Mencken certainly thought so, at least that women were far more intelligent than men in what mattered. No woman, he said, would be so dumb to want to be a lawyer or a stock broker.] Last Updated: Wednesday, 8 December, 2004, 16:13 GMT GCSE students after receiving their results Girls are getting more top grades than boys at GCSE and A-level Women are brighter than men, according to the Labour chairman of the Commons education committee. Huddersfield MP Barry Sheerman said there was a "danger" of being obsessed about how boys were doing at school. His comments followed a committee discussion about whether girls or boys found it easier to learn to read. "My own personal view is that women are brighter than men," the MP said, adding that women now earned on average more than men as middle managers. First class? He said: "We should celebrate this, shouldn't we? The brightest kids are coming through and they happen to be women." In recent years girls have consistently outperformed boys at all levels of the education system. The "gender gap" at GCSE level in England, Wales and Northern Ireland this year was 5.3 percentage points at grades A* and A and by 8.4 points at grades C and above in girls' favour. Boys' performance had improved more than girls', however. This was even more noticeable at A-level. Even so, 23.7% of girls' entries achieved A grades, compared to 21% of boys'. Ninety-five per cent of boys' entries were passes, against 96.8% of girls'. More young women than men go to university. Schools define many more boys than girls as having special educational problems - which some researchers argue means the schools are failing to meet boys' needs. 'Wrong schooling' In the latest major international study of the performance of 15-year-olds in maths, reading and science tests, boys out-performed girls in almost all of the 40 countries involved in maths. 
In reading, girls had "significantly higher average performance" in all countries except Liechtenstein. The biggest gap was in Iceland. Science showed the smallest average gender gap, with boys doing a little better. American educational researchers William Draves and Julie Coates have argued that it is not boys who are the problem but schools. While boys are developing the skills they will need in the "knowledge jobs" of the future, schools are still preparing students for a past industrial age, they have said. SEE ALSO: [47]Boys 'fighting back' in A-levels 18 Aug 04 | Education [48]Top grades rising again for GCSEs 26 Aug 04 | Education [49]Finland tops global school table 07 Dec 04 | Education [50]Will boys always be boys? 28 Feb 04 | Education [51]GCSE 'gender gap' sparks concern 22 Aug 02 | Education [52]Addressing the gender gap 22 Aug 02 | Education RELATED INTERNET LINKS: [53]Education committee References 47. http://news.bbc.co.uk/1/hi/education/3577868.stm 48. http://news.bbc.co.uk/1/hi/education/3597490.stm 49. http://news.bbc.co.uk/1/hi/education/4073753.stm 50. http://news.bbc.co.uk/1/hi/education/3494490.stm 51. http://news.bbc.co.uk/1/hi/education/2208547.stm 52. http://news.bbc.co.uk/1/hi/education/2208596.stm 53. http://www.parliament.uk/commons/selcom/edemhome.htm From checker at panix.com Sun Sep 11 22:20:23 2005 From: checker at panix.com (Premise Checker) Date: Sun, 11 Sep 2005 18:20:23 -0400 (EDT) Subject: [Paleopsych] Yeditoth Internet: Rabbi: Hurricane punishment for pullout Message-ID: Rabbi: Hurricane punishment for pullout http://www.ynetnews.com/articles/0,7340,L-3138779,00.html 5.9.7 [What has Pat Robertson blamed for the hurricane? I haven't heard.] Shas spiritual leader Ovadia Yosef: Hurricane Katrina result of Bush's support for disengagement, failure of New Orleans' black residents to study Torah. `This is the punishment for what Bush did to Gush Katif,' rabbi says Zvi Alush Hurricane Katrina is a punishment meted out by God as a result of U.S. President George W. Bush's support for the Gaza and northern West Bank disengagement, Shas spiritual leader and former Chief Sephardic Rabbi Ovadia Yosef said Tuesday. Notably, the rabbi chose to openly declare what many ultra-Orthodox believers have said for a while now, namely that recent natural disasters in the U.S. are a direct result of American support for the pullout. [Photo: New Orleans (AP); caption: "Punishment from God?"] In his weekly sermon, the rabbi said: "There was a tsunami and there are terrible natural disasters, because there isn't enough Torah study... black people reside there (in New Orleans). Blacks will study the Torah? (God said) let's bring a tsunami and drown them." "Hundreds of thousands remained homeless. Tens of thousands have been killed. All of this because they have no God," said the rabbi, who already found himself in hot water in the past following controversial remarks of one kind or another. Yet Rabbi Ovadia was not done there, and proceeded to explain in detail why Americans deserved the hurricane.
"Bush was behind the (expulsion of) Gush Katif," he said. "He encouraged Sharon to expel Gush Katif...we had 15,000 people expelled here, and there 150,000 (were expelled). It was God's retribution ..God does not short-change anyone." "He (Bush) perpetrated the expulsion. Now everyone is mad at him...this is his punishment for what he did to Gush Katif, and everyone else who did as he told them, their time will come, too," the rabbi said. Ovadia concluded: "Where can evil escape to from God? Its time will come and it will be slapped on the head." Knesset Member Eliezer Cohen (National Union) dismissed Ovadia's comments in a talk with Ynet. "I know meteorology well enough not to believe such rubbish," he said. Meanwhile, Knesset Member Ronny Brison said: "What, God is cross-eyed? He metes out punishments at the wrong place? We're sick and tired of Rabbi Ovadia's primitive worldview. He already did his part, he can remove himself from public life." Ilan Marciano contributed to the story From checker at panix.com Sun Sep 11 22:20:33 2005 From: checker at panix.com (Premise Checker) Date: Sun, 11 Sep 2005 18:20:33 -0400 (EDT) Subject: [Paleopsych] Bart D'Hooghe and Jaroslaw Pykacz: Quantum Mechanics and Computation Message-ID: Bart D'Hooghe and Jaroslaw Pykacz: Quantum Mechanics and Computation Foundations of Science (2004) 9: 387-404 [Sorry about my inability to reproduce the equations. I can supply the PDF next week. This is a terribly important article. It argues that quantum computing can potentially become far more powerful than any mannter of parallel processing done with classical computers.] ABSTRACT. In quantum computation non classical features such as superposition states and entanglement are used to solve problems in new ways, impossible on classical digital computers. We illustrate by Deutsch algorithm how a quantum computer can use superposition states to outperform any classical computer. We comment on the view of a quantum computer as a massive parallel computer and recall Amdahl's law for a classical parallel computer. We argue that the view on quantum computation as a massive parallel computation disregards the presence of entanglement in a general quantum computation and the non classical way in which parallel results are combined to obtain the final output. KEY WORDS: quantum computation, parallel computers 1. INTRODUCTION The theory of quantum computation has its roots in the year 1982 when Feynman (1982, 1986) pointed out that a simulation of a quantum system on a classical computer requires a number of resources (in time and space) exponentially increasing with the size of the system. Indeed, a composite quantum system is described in the tensor product of the Hilbert spaces used to represent the subsystems such that the number of basis vectors increases exponentially with the number of subsystems. However, a simulation performed on a genuine quantum system will not suffer from this exponential overhead due to its intrinsic quantum nature. Deutsch extended the ideas of Feynman by defining the Quantum Turing Machine (Deutsch, 1985) as a universal computation device based on the rules of quantum mechanics. The analogue of the classical Shannon bit is the qubit, i.e., a bi-stable state quantum system (photon, spin 1/2 particle, ...) which is mathematically represented in a two-dimensional complex Hilbert space. If the register of a Postdoctoral Fellow of the Fund for Scientific Research-Flanders (Belgium). 
quantum computer contains N qubits, then according to the rules of quantum mechanics the state of the register is represented in the tensor product of N two-dimensional Hilbert spaces, one for each qubit. The running of the quantum computer corresponds to a unitary evolution of the state of the quantum register, after which a final measurement is made to read the output. It is possible to show that a quantum computer is a universal computation device, i.e., any computation that can be performed on a classical computer can also be performed on a quantum computer. The paper of Feynman already suggests that a quantum computer can perform certain computational tasks faster than any classical computer can, e.g. the simulation of a quantum system. Deutsch and Simon discovered problems that cannot be solved on a classical computer within the same resource bounds (Deutsch, 1986; Deutsch and Jozsa, 1992; Simon, 1994), but which can be solved on a quantum computer running a quantum algorithm. Although very important from a theoretical point of view, one could still consider these examples as rather academic and without any practical meaning. This changed as the quantum algorithms of Simon and Deutsch inspired other quantum algorithms which could solve interesting problems, such as Shor's factoring algorithm (Shor, 1994) and Grover's database search algorithm (Grover, 1996). These new quantum algorithms showed that building a quantum computer would not only be important from an information-theoretical point of view, but also for practical applications. Shor's factoring algorithm can factor integers in a time polynomial in the size of the input, such that this algorithm could break a commonly used cryptography system, namely the RSA encryption method, which relies on the computational hardness of factoring a number into its primes (Rivest, Shamir and Adleman, 1978). Grover's search algorithm can search a database in a time proportional to the square root of the size of the database, in contrast with a classical computer requiring an average search time depending linearly on the size of the database. Despite the fact that the physical implementation of a quantum computer still encounters great technical difficulties, some preliminary results have already been obtained by building quantum computers with a limited number of qubits, demonstrating the validity of some of the most important quantum algorithms; e.g., Shor's factoring algorithm was run on an NMR quantum computer (Vandersypen et al., 2001) to factor the number 15 into its primes 3 and 5. The question to what extent a quantum computer can solve certain problems faster than classical computers is the topic of the theory of quantum information and complexity, e.g., Bernstein and Vazirani (1993).
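[A rough numerical aside, not part of the paper: the contrast just described between linear classical search and Grover's square-root scaling can be made concrete with a few lines of Python. The function names and the (pi/4)*sqrt(N) estimate of the number of Grover iterations are our own illustrative choices, not figures taken from the article.]

import math

def classical_queries(n):
    # Expected number of oracle calls for an unstructured linear search
    # over n entries (about n/2 on average, n in the worst case).
    return n / 2

def grover_queries(n):
    # Standard estimate of Grover iterations for a single marked entry:
    # roughly (pi/4) * sqrt(n) oracle calls.
    return math.ceil((math.pi / 4) * math.sqrt(n))

for n in (10**3, 10**6, 10**9):
    print(f"N = {n:>13,}: classical ~ {classical_queries(n):>13,.0f} calls,"
          f" Grover ~ {grover_queries(n):>7,} calls")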
In this paper we focus on the presence of genuine quantum features, such as quantum superposition and entanglement, in a quantum computation. In the first section we give a brief overview of the basics of quantum computation. We recall the notion of a qubit, the unit of quantum information and the quantum counterpart of the classical Shannon bit. Already for a quantum computer containing a few qubits it is possible to study some simple quantum algorithms, and we analyze to what extent the presence of quantum superposition states is necessary in order to obtain a computational speed-up compared to a classical computer. As a specific example, we discuss Deutsch's problem. We show that the phase of the state vector of the quantum register is not totally irrelevant, but contains information about the evaluated function values. In the third section, we comment on the view of quantum computation as a massive parallel computation. We recall Amdahl's law for classical parallel computers, which defines an upper bound on the possible speed-up using a parallel computer. Next, we study to what extent this law is applicable in the realm of quantum computers and discuss the validity of the interpretation of a quantum computation as a massive parallel computation.

2. HOW A QUANTUM COMPUTER WORKS

2.1. Computation is a Physical Process

In this paper we follow a physical view on computation, since any actual computation is always performed by some real physical system. The input is encoded by a suitably chosen initial state of the register of the computer, after which a processor induces changes of the state of the register according to some algorithm till the final state of the register is obtained and the algorithm stops. The reading of the output of the computational process corresponds with measuring the final state of the register. In this framework (computers as physical devices with a state evolution described by the algorithm) classical and quantum computers can be studied on the same footing, namely as physical devices with a certain set of accessible states and state evolutions induced by running the algorithm. Depending on the nature of the accessible states and possible state evolutions, the physical device which runs the computation is then called either a quantum computer or a classical computer.

Classical computers are characterized by a register that is a set of bi-stable classical physical systems (switches, wires, magnets, etc.) able to store binary units (bits) of information, '0' or '1', called the 'classical Shannon bits'. The processor forces a transition of the initial state of the register to the final state according to an algorithm which obeys the laws of classical physics. Therefore, a classical N-bit register can be in only one of its possible 2^N states at a time, and a classical processor forces its transition to another such state, which is described mathematically as the action of Boolean functions on bit strings.

Quantum computers on the other hand have registers which are sets of bi-stable quantum physical systems (two-level atoms, spin-1/2 particles, photons in orthogonal polarization states, etc.) able to store quantum binary units (qubits) of information. The state of each qubit is represented by a vector in a two-dimensional complex Hilbert space. It can be a superposition of two orthogonal states |0> and |1> which form a 'computational basis' of this Hilbert space, such that

|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix},

and which may be thought of as representing the 'classical' values '0' and '1'. Using Dirac notation, we denote the state of a qubit |q> as:

|q\rangle = c_0 |0\rangle + c_1 |1\rangle, \qquad c_0, c_1 \in \mathbb{C}, \qquad |c_0|^2 + |c_1|^2 = 1.

A quantum N-qubit register can be not only in any of the 2^N 'classical' states |0> = |00...0>, |1> = |00...1>, ..., |2^N - 1> = |11...1>, but also in any superposition state |psi>:

|\psi\rangle = \sum_{i=0}^{2^N - 1} c_i |i\rangle, \qquad c_0, ..., c_{2^N - 1} \in \mathbb{C}, \qquad \sum_{i=0}^{2^N - 1} |c_i|^2 = 1.

The processor forces a transition of the initial state of the register to the final state according to the laws of quantum mechanics, which is described mathematically as the action of unitary transformations on the (superposition) state |psi> of the quantum register.
Therefore, a quantum processor can in a single run perform a unitary transformation simultaneously on each 'classical' state |i> that is in the superposition state |psi>. Hence, running a single quantum processor on this superposition state of classical inputs could be interpreted as the running of a huge family of parallel processors, which is the essence of the famous 'quantum parallelism', a view on quantum computation on which we comment in more detail in the last section of this paper. If, in order to read the output, a measurement is performed on the quantum register in a final state

|\psi_f\rangle = \sum_{i=0}^{2^N - 1} d_i |i\rangle, \qquad d_0, ..., d_{2^N - 1} \in \mathbb{C}, \qquad \sum_{i=0}^{2^N - 1} |d_i|^2 = 1,

the state of the register collapses into one of its 'classical' states |i> with probability |d_i|^2. A 'good' quantum algorithm often uses superposition in the input state and a well-chosen unitary evolution such that the output state of the quantum register has a very large probability to be the 'classical' state corresponding to the correct solution of the problem. For instance, in Grover's database search problem one has to find in a database an entry for which a certain proposition holds. For each input, an oracle can be called to tell whether the proposition holds or not. Starting from a superposition of all the classical states as the initial state of the quantum register, running Grover's algorithm transforms this initial state into a state in which the component corresponding to the entry with the searched property has maximal amplitude. In some cases a quantum computation can even yield the correct outcome with certitude, e.g., for Deutsch's problem and Grover's quantum search on a database with two entries. In general, a quantum computation only gives the correct outcome with a certain probability, because the final measurement is probabilistic. However, verifying a 'candidate solution' obtained by a quantum computation is often computationally easy, and by repeating the quantum computation a number of times one can increase the probability of obtaining the correct outcome arbitrarily close to unity.

It should be remarked that quantum information is of an entirely different nature than classical information. This not only follows from the presence of superposition states and entanglement in the quantum register, but also from the no-cloning theorem discovered by Dieks (1982) and Wootters and Zurek (1982), which implies that the state of the quantum register cannot be copied without destroying the original state. Amongst others, this implies that a quantum computation is a very delicate process. It is also one of the reasons why it is so difficult to build a quantum computer with a large number of qubits, since decoherence naturally arising in a quantum computation cannot be countered by copying the state of the quantum register a number of times to compensate decoherence effects by redundancy. To solve decoherence problems, special error-correcting techniques have been developed, e.g., Calderbank and Shor (1996).
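[A small illustration in Python, not part of the paper: the state of an N-qubit register can be stored on a classical machine as a complex vector of length 2^N (which is exactly the exponential overhead Feynman pointed out), and the probabilistic read-out described above can be simulated by sampling an index i with probability |d_i|^2. The helper names below are our own.]

import numpy as np

def basis_state(bits):
    # Return the 2**N-dimensional 'classical' state |bits>, e.g. basis_state('010').
    state = np.zeros(2 ** len(bits), dtype=complex)
    state[int(bits, 2)] = 1.0
    return state

def measure(state, rng=np.random.default_rng()):
    # Simulate a projective read-out: collapse to |i> with probability |d_i|^2.
    probs = np.abs(state) ** 2
    i = rng.choice(len(state), p=probs / probs.sum())
    n = int(np.log2(len(state)))
    return format(int(i), f"0{n}b")

# A uniform superposition of all 2**3 classical states of a 3-qubit register.
psi = np.ones(8, dtype=complex) / np.sqrt(8)
print([measure(psi) for _ in range(5)])   # e.g. ['101', '000', '110', '011', '000']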
2.2. Unitary Evolutions of the Quantum Register by Quantum Gates

During quantum computation, the state of a quantum register changes by unitary transformation. Since the set of possible unitary transformations of an N-qubit system is infinite, in principle an infinite number of possible state evolutions could be considered for the quantum register. However, universal sets of m-qubit gates have been found, such that any unitary transformation for the N-qubit register can be decomposed into a succession of unitary gates working on only a limited number (m) of qubits. Actually, already a two-qubit gate together with the single-qubit phase operations forms a universal set of gates (DiVincenzo, 1995), i.e., any quantum circuit can be implemented up to arbitrary precision using these gates. This not only makes it much easier to grasp the running of a quantum computer in terms of the two-qubit and single-qubit phase operations, but also suggests that some of the characteristic features of a quantum computer can already be demonstrated by two-qubit algorithms.

As we have already mentioned in the introduction, any classical computer can be simulated by a quantum computer. This means that any classical gate can be simulated in a quantum computation using a number of universal quantum gates. On the other hand, there are quantum gates without any classical analogue, e.g., the phase gates which act on the phase of a qubit (there is even no classical analogue for the phase of a qubit, since the classical Shannon bit can only have value '0' or '1' and does not have a phase), or the 'square root of NOT'. As an illustration we briefly present some of the well-known quantum gates; see, e.g., Nielsen and Chuang (2000) for more. Since quantum gates correspond to unitary transformations, they are completely defined by their matrix representation in an (arbitrarily chosen) basis of the tensor product Hilbert space in which the state vector of the quantum register is represented, i.e., the computational basis.

The Hadamard transformation acts on a single qubit and transforms a 'classical' state (|0> or |1>) into a superposition state. Its matrix representation is given by:

H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}

such that H|0> = (|0> + |1>)/\sqrt{2} and H|1> = (|0> - |1>)/\sqrt{2}. Also H^{-1} = H, so the inverse of the Hadamard gate is the Hadamard gate itself: H (|0> + |1>)/\sqrt{2} = |0> and H (|0> - |1>)/\sqrt{2} = |1>.

Another single-qubit gate is the phase shift gate. Using the Pauli matrices

\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},

the phase shift gates are given by:

e^{i\theta\sigma_x} = \begin{pmatrix} \cos\theta & i\sin\theta \\ i\sin\theta & \cos\theta \end{pmatrix}, \qquad e^{i\theta\sigma_y} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}, \qquad e^{i\theta\sigma_z} = \begin{pmatrix} e^{i\theta} & 0 \\ 0 & e^{-i\theta} \end{pmatrix}.

The quantum analogue of the classical NOT gate is the quantum NOT gate, which inverts the value of a qubit: NOT|0> = |1>, NOT|1> = |0>, with matrix representation given by:

NOT = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.

The 'square root of NOT' is a quantum gate which has no classical analogue:

\sqrt{NOT} = \frac{1-i}{2} \begin{pmatrix} i & 1 \\ 1 & i \end{pmatrix} = \frac{1+i}{2} \begin{pmatrix} 1 & -i \\ -i & 1 \end{pmatrix}

and is such that \sqrt{NOT}\,\sqrt{NOT} = NOT.

The Controlled NOT gate (CNOT) is a two-qubit gate which induces a NOT on the 'target' qubit when the 'control' qubit has value 1. If for a two-qubit state |a,b> the qubit a denotes the control bit and the qubit b the target bit, the action of CNOT is defined as CNOT|a,b> = |a, b ⊕ a>, with ⊕ denoting addition modulo 2, and the matrix representation of CNOT is given by:

CNOT = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.

Together with the phase operations acting on a single qubit, the CNOT gate forms a universal set of gates (Barenco et al., 1995), such that any unitary transformation can be expressed in terms of these universal gates. Actually, any two-qubit gate inducing entanglement between the two qubits in the quantum register, together with the single-qubit phase operations, forms a universal set of gates (Dodd et al., 2001; Bremner et al., 2002).
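[A short Python sketch, not part of the paper, writing out the gates above as numpy matrices. It checks that the Hadamard gate is its own inverse and that the 'square root of NOT' squared gives NOT, and it shows the CNOT gate producing an entangled state from a product state, which is the entangling property referred to in the last sentence. Variable names are our own.]

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)            # Hadamard
NOT = np.array([[0, 1], [1, 0]])                          # sigma_x
SQRT_NOT = (1 + 1j) / 2 * np.array([[1, -1j], [-1j, 1]])  # 'square root of NOT'
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
I2 = np.eye(2)

# H is its own inverse, and sqrt(NOT) squared gives NOT.
assert np.allclose(H @ H, I2)
assert np.allclose(SQRT_NOT @ SQRT_NOT, NOT)

# CNOT acting on (H (x) I)|00> yields the entangled Bell state (|00> + |11>)/sqrt(2).
ket00 = np.array([1.0, 0.0, 0.0, 0.0])
bell = CNOT @ np.kron(H, I2) @ ket00
print(np.round(bell, 3))   # [0.707 0.    0.    0.707]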
3. A QUANTUM ALGORITHM USING TWO QUBITS

3.1. Deutsch's Problem

Let us consider the following formulation of Deutsch's problem (Deutsch, 1986; Deutsch and Jozsa, 1992). Given a {0,1}-valued function f defined on a two-point domain, for simplicity let it also be {0,1}, determine with a single call of the black box (which calculates the function f and will be called the 'oracle') whether the function f is constant or balanced. Clearly, this problem cannot be solved with a single call of the oracle on a classical computer, since a classical computation requires the value of f to be calculated in both points, after which the two values can be compared to determine whether the function f is balanced or not. However, this problem can be solved by a quantum computer. Let a quantum register containing two qubits be prepared in the input state psi_0:

\psi_0 = \frac{|0\rangle + |1\rangle}{\sqrt{2}} \otimes \frac{|0\rangle - |1\rangle}{\sqrt{2}} = \frac{|00\rangle + |10\rangle - |01\rangle - |11\rangle}{2}.

We assume the existence of an oracle represented by a unitary transformation U_f such that calling the oracle transforms an input state |xy> into the state U_f |xy> = |x (y ⊕ f(x))>, with ⊕ being the addition modulo 2. Applying this transformation to the state psi_0 we obtain:

U_f(\psi_0) = \frac{|0 f(0)\rangle + |1 f(1)\rangle - |0 \bar{f}(0)\rangle - |1 \bar{f}(1)\rangle}{2}

with \bar{f}(i) = f(i) ⊕ 1. After U_f a Hadamard transformation is performed on the first qubit. Let us consider the two possibilities, namely f constant or balanced.

If f is constant, f(0) = f(1), and the state of the two-qubit quantum register can be rewritten as

(H \otimes 1) U_f(\psi_0) = (H \otimes 1) \frac{|0 f(0)\rangle + |1 f(0)\rangle - |0 \bar{f}(0)\rangle - |1 \bar{f}(0)\rangle}{2} = (H \otimes 1) \left( \frac{|0\rangle + |1\rangle}{\sqrt{2}} \otimes \frac{|f(0)\rangle - |\bar{f}(0)\rangle}{\sqrt{2}} \right) = |0\rangle \otimes \frac{|f(0)\rangle - |\bar{f}(0)\rangle}{\sqrt{2}},

such that if the function f is constant the first qubit is |0>.

If f is balanced, f(1) = f(0) ⊕ 1 = \bar{f}(0) and \bar{f}(1) = f(1) ⊕ 1 = f(0), and the state of the quantum register can be rewritten as:

(H \otimes 1) U_f(\psi_0) = (H \otimes 1) \frac{|0 f(0)\rangle + |1 \bar{f}(0)\rangle - |0 \bar{f}(0)\rangle - |1 f(0)\rangle}{2} = (H \otimes 1) \left( \frac{|0\rangle - |1\rangle}{\sqrt{2}} \otimes \frac{|f(0)\rangle - |\bar{f}(0)\rangle}{\sqrt{2}} \right) = |1\rangle \otimes \frac{|f(0)\rangle - |\bar{f}(0)\rangle}{\sqrt{2}},

such that if the function f is balanced the first qubit is |1>. Therefore, in a quantum computation with one call of the oracle the problem whether f is constant or balanced is solved by measuring the state of the first qubit: if the first qubit is 0 the function f is constant, if the first qubit is 1 the function f is balanced. It looks as if the quantum computer can calculate both values f(0) and f(1) in parallel and compare them simultaneously. However, the quantum computer cannot answer a question about the actual value of f(0) or f(1). If the function f is constant, we still do not know whether (f(0), f(1)) = (0,0) or (f(0), f(1)) = (1,1). Similarly, if f is balanced we still have two possibilities, either (f(0), f(1)) = (0,1) or (f(0), f(1)) = (1,0). As we show in the next subsection, the knowledge about the actual values f(0) and f(1) is encoded in the phase of the state vector of the quantum register.

3.2. What about the Phase?

Let us note that |f(0)> - |\bar{f}(0)> = (-1)^{f(0)} (|0> - |1>). The state of the quantum register can be transformed into a 'classical state' (i.e., the qubits assume only values zero or one) by applying a Hadamard gate on the second qubit. In the case that f is constant, the state \psi_f^c of the quantum register is given by:

\psi_f^c = (1 \otimes H)(H \otimes 1) U_f(\psi_0) = (1 \otimes H) \left( |0\rangle \otimes \frac{|f(0)\rangle - |\bar{f}(0)\rangle}{\sqrt{2}} \right) = (1 \otimes H) \left( |0\rangle \otimes (-1)^{f(0)} \frac{|0\rangle - |1\rangle}{\sqrt{2}} \right) = (-1)^{f(0)} |0\rangle \otimes |1\rangle.

Analogously, if f is balanced, the state \psi_f^b is given by:

\psi_f^b = (1 \otimes H)(H \otimes 1) U_f(\psi_0) = (-1)^{f(0)} |1\rangle \otimes |1\rangle.

In both cases, reading the first qubit yields the answer to the problem of Deutsch. If we could measure the phase of the final state vector of the two-qubit register, we could also answer the question about the actual value of f(0).
Indeed: if f(0) = 0, then (-1)^{f(0)} = 1 and no phase shift occurs. If on the other hand f(0) = 1, then (-1)^{f(0)} = -1, which corresponds with a phase shift of \pi, since e^{i\pi} = -1. Further, knowing the value of the first qubit, i.e., knowing whether the function f is constant or balanced, we could immediately obtain the value of f(1) as either f(1) = f(0) or f(1) = f(0) ⊕ 1. Hence in both cases, f being constant or balanced, knowledge about the phase of the state vector of the register and the value of the first qubit would determine the values of f(0) and f(1). This shows that in the quantum computation solving Deutsch's problem, knowledge about the actual value of f(0) has been absorbed into the state vector of the quantum register. Therefore, after a single call of the oracle, the state vector of the quantum register contains information about the function values in both points. Nevertheless, since it is not possible to measure the absolute phase of a state vector, no knowledge about the function value f(0) (and consequently f(1)) can be obtained by application of this algorithm.

3.3. Product States in Deutsch's Algorithm

Let us rewrite some of the expressions in Deutsch's algorithm applied to a function defined on two points only, to show that in each step of the computation the state of the quantum register is a product state (see Arvind, 2001). Clearly, the initial state is a product state (of superposition states for each qubit). Since |f(1)> - |\bar{f}(1)> = (-1)^{f(1)} (|0> - |1>), after calling the oracle the state of the quantum register is given by:

U_f(\psi_0) = \frac{|0 f(0)\rangle + |1 f(1)\rangle - |0 \bar{f}(0)\rangle - |1 \bar{f}(1)\rangle}{2} = \frac{|0\rangle \otimes (|f(0)\rangle - |\bar{f}(0)\rangle) + |1\rangle \otimes (|f(1)\rangle - |\bar{f}(1)\rangle)}{2} = \frac{|0\rangle \otimes (-1)^{f(0)}(|0\rangle - |1\rangle) + |1\rangle \otimes (-1)^{f(1)}(|0\rangle - |1\rangle)}{2} = \frac{(-1)^{f(0)}|0\rangle + (-1)^{f(1)}|1\rangle}{\sqrt{2}} \otimes \frac{|0\rangle - |1\rangle}{\sqrt{2}},

which is again a product state. Since the Hadamard gate acts on a single qubit, it cannot entangle the qubits, such that after applying the two Hadamard gates as described in the previous subsection we obtain again a product state, either (-1)^{f(0)} |0> ⊗ |1> if f is constant, or (-1)^{f(0)} |1> ⊗ |1> if f is balanced. Therefore, during this Deutsch algorithm the state of the quantum register can always be written as a product state. Hence this quantum algorithm does not use quantum entanglement, but rather exploits the possibility of the individual qubits to be in a superposition state, such that the state of the register is a product of superposition states. Since this Deutsch algorithm does not use entanglement, it could be run on a classical optical system making use of interference between optical signals, such that a 'classical qubit' is encoded in the polarization of a (classical) light beam (Arvind, 2001). On the other hand, two-qubit quantum algorithms exist which do require the use of quantum entanglement (Arvind and Mukunda, 2000), such that the absence of entanglement in Deutsch's two-qubit algorithm is not typical for two-qubit quantum computation. Moreover, it has been shown (Arvind, 2001) that the Deutsch-Jozsa algorithm for three or more qubits involves entangled states, such that its implementation using these 'classical qubits' becomes impossible. Although in a general quantum algorithm entangled states are used, a classical digital computer has no access to the superposition states used by any Deutsch algorithm, such that Deutsch's problem can still be considered a good example of quantum computation outperforming classical bivalued computation.
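[A small Python simulation, not part of the paper, of the two-qubit Deutsch algorithm described above. For each of the four possible functions f it applies the oracle U_f and the Hadamard gate on the first qubit to the input state psi_0, reads off the first qubit, and also checks, via the singular values of the reshaped amplitude matrix, that the register stays in a product state, as section 3.3 argues. The helper names and the rank-one product-state test are our own choices.]

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def oracle(f):
    # Unitary U_f with U_f|x,y> = |x, y XOR f(x)> for f: {0,1} -> {0,1}.
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def is_product_state(state, tol=1e-9):
    # A two-qubit state is a product state iff its reshaped 2x2 amplitude
    # matrix has rank one, i.e. only one non-zero singular value.
    s = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return s[1] < tol

# psi_0 = (|0> + |1>)/sqrt(2) (x) (|0> - |1>)/sqrt(2)
psi0 = np.kron(np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2))

cases = [("constant, f=0", lambda x: 0), ("constant, f=1", lambda x: 1),
         ("balanced, f=x", lambda x: x), ("balanced, f=1-x", lambda x: 1 - x)]
for name, f in cases:
    psi = np.kron(H, I2) @ oracle(f) @ psi0
    # All amplitude sits on |0,y> (indices 0, 1) when f is constant,
    # and on |1,y> (indices 2, 3) when f is balanced.
    first_qubit = 0 if abs(psi[0]) + abs(psi[1]) > abs(psi[2]) + abs(psi[3]) else 1
    print(f"{name:16s} -> first qubit {first_qubit}, product state: {is_product_state(psi)}")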
Another example of computation 'outperforming' classical (single-processor) computation is parallel computation. In the next section, we discuss the maximal speed-up possible for classical parallel computers and comment on the intuitive view of quantum computation as a massive parallel computation.

4. INTERPRETATION OF QUANTUM COMPUTATION AS A PARALLEL COMPUTATION

4.1. Amdahl's Law for Classical Parallel Computation

The main idea behind parallel computing is to divide the computational work among a number of processors, whose combined effort will perform a task faster than a single processor. Amdahl (1967) realized that the speed-up which can be obtained by making a computation parallel has an upper bound, such that adding more parallel processors does not result in a further speed-up of the computation. Ideally, dividing the work among k processors will decrease the computation time by a factor 1/k. However, in general not all work can be performed in parallel, and the computation contains a serial part. Let T(k) be the time needed to perform the calculation by k parallel processors, and T(1) the time needed for a single processor. The speed-up S(k) is defined as the ratio of T(1) and T(k):

S(k) = \frac{T(1)}{T(k)}.

Let us split the computation into a serial part S and a parallel part P which can be divided among the parallel processors. The speed-up S(k) using k parallel processors is given by:

S(k) = \frac{T(1)}{T(k)} = \frac{T_s + T_p}{T_s + T_p / k},

which shows that even for k \to \infty the speed-up has an upper bound given by:

S(\infty) = \frac{T_s + T_p}{T_s} = 1 + \frac{T_p}{T_s}.

Clearly, the serial part of the computation puts an upper bound on the possible speed-up using a parallel computer. In practice, the situation is even worse, since this formula does not take into account the extra work needed to divide the parallel part of the computation between the parallel processors, nor the extra time needed to transport the data between the processors and combine them to obtain the final output. Nevertheless, this expression already gives some idea of the feasibility and the amount of possible speed-up using a parallel computer. This expression is known as Amdahl's Law (Amdahl, 1967) and yields the maximal possible speed-up using a parallel computer.

4.2. Quantum Computation as a Massive Parallel Computation?

In this section we analyze some of the main differences between a quantum computer and a classical parallel computer. Let us first explain in what sense a quantum computation could be interpreted as a parallel computation. Let us consider the case that for each classical state of the register a computation has to be performed (e.g., the value of a function) and that each computation takes a time t_c independent of the input. On a classical computer with N bits there are 2^N classical states, such that this calculation would take a time 2^N t_c, which increases exponentially with N, the number of bits. A quantum computer can perform this calculation in a single run, applying a unitary transformation to the superposition of the classical states in the input. Let t_q be the time necessary to run the quantum algorithm. To obtain a parallel computer which is as fast as a quantum computer, a speed-up of S(k) = 2^N t_c / t_q is needed. In the maximal parallel situation and if t_q ≈ t_c, k = 2^N parallel processors are necessary, since according to Amdahl's law the speed-up is given by:

S(k = 2^N) = \frac{T(1)}{T(2^N)} = \frac{T_s + T_p}{T_s + T_p / 2^N} = \frac{0 + T_p}{0 + T_p / 2^N} = 2^N,

which shows that under these assumptions the calculation on a parallel computer with 2^N processors is as fast as on a quantum computer.
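[A quick numerical illustration, not part of the paper: Amdahl's law can be rewritten as S(k) = 1 / (s + (1 - s)/k), where s = T_s / (T_s + T_p) is the serial fraction of the work. The 5% serial fraction below is an arbitrary illustrative value; only in the idealized case s = 0 does S(k) grow as k, which is the assumption behind the k = 2^N comparison above.]

def amdahl_speedup(serial_fraction, k):
    # Amdahl's law: S(k) = T(1) / T(k) = 1 / (s + (1 - s) / k).
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / k)

s = 0.05   # assume 5% of the work is inherently serial (illustrative)
for k in (2, 16, 256, 2 ** 20):
    print(f"k = {k:>8}: speed-up = {amdahl_speedup(s, k):7.2f}")
print(f"upper bound S(infinity) = 1 + T_p/T_s = {1 / s:.0f}")

# With no serial part at all (s = 0), S(k) = k, so matching a single quantum
# run over 2**N classical inputs would require k = 2**N processors.
print(amdahl_speedup(0.0, 2 ** 10))   # 1024.0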
In this sense, a quantum computer is as powerful as a parallel computer with as many parallel processors as there are classical input states. However, this view on quantum computation as a massive parallel computation is not completely correct. First of all, in order to obtain quantum speed-up in a general quantum computation, unitary gates have to be used which entangle the qubits (Jozsa and Linden, 2002). Therefore, a classical state of the register (which is a product state) is in general transformed by the quantum gates into an entangled state, i.e., a non-classical state. In a real classical parallel computation, classical states are always mapped into other classical states, with Boolean functions describing the action of the classical gates. In a quantum computation, classical states are mapped onto entangled states, with unitary transformations describing the action of the quantum gates. Hence the view on a quantum computer as processing each classical input separately in a massive parallel computation is invalid, because now the classical states become entangled. Secondly, in a classical parallel computation the 'parallel results' are classical states of the parallel registers, and the final output is obtained by combining the results in a classical way, i.e., using classical (deterministic) logic. In a quantum computation, the 'parallel results' are state vectors which need to be combined in a quantum way, namely by superposition, in which the phase of the superposed state vectors is important. Finally, a more philosophical remark. In a classical parallel computation all parallel results are 'actual', i.e., the state of each parallel register can be measured without disturbing the computation. In a quantum computation the final state of the quantum register (before the measurement to read the output) is in general a superposition of classical states. If one performs a measurement to obtain the output of the quantum computation, one can only observe the register in a classical state with a certain probability. In this sense, the superposition state itself is never 'actualized' in a measurement. Due to the no-cloning theorem, this 'potential' status is irreducible. A fortiori, the quantum 'parallel results', i.e., the state vectors in the superposition, can also only be considered 'potential', and 'intermediate parallel results' cannot be measured without disturbing the quantum computation. Taking into account these important differences between a quantum computation and a classical parallel computation, we argue that a quantum computation should not be viewed as a kind of massive parallel computation in the classical sense (and obeying Amdahl's law), but really should be regarded as a new kind of computation, on a physical device with non-classical features such as superposition states and entanglement.

5. CONCLUSIONS

A quantum computer which has access to non-classical states can allow a computational speed-up impossible on a classical computer. For example, using product states of superposition states of the individual qubits, Deutsch's problem can be solved with a single call of the oracle on a quantum computer. We showed that the state vector of the quantum register contains complete information about the function values in both points, i.e., the phase of the state vector is not irrelevant. In general, a quantum computation also uses entangled (i.e., non-product) states.
This led us to argue against the view of a quantum computer as a massive parallel computer, because of the presence of entanglement in a general quantum computation and the crucial difference between the ways in which classical and quantum information is handled; e.g., the no-cloning theorem forbids storing intermediate results without disturbing the quantum computation. Also, the output of a classical parallel computation is obtained by combining the parallel results using classical logic. In a quantum computation the parallel results are state vectors which are combined in a quantum way, namely by superposition, in which the phase is important. A classical digital parallel computer has no access to the superposition states which are used by the quantum algorithm to solve Deutsch's problem with a single call of the oracle. No matter how many parallel processors a classical computer contains, it cannot solve the problem making only a single call to the oracle. This illustrates the fact that quantum computers are a totally new kind of computers which are more powerful than classical parallel computers, even if these have as many parallel processors as there are classical states in the quantum register. In a sense, a quantum computer should be regarded not as a massive parallel but rather as a superposing and entangling parallel computer.

ACKNOWLEDGMENTS

Part of this paper was prepared during the joint Polish-Flemish Research Project 007/R97/R98 and the financial support provided within this Project is gratefully acknowledged. Bart D'Hooghe is a Postdoctoral Fellow of the Fund for Scientific Research - Flanders (Belgium).

REFERENCES

Amdahl, G.M.: 1967, Validity of the Single-Processor Approach to Achieving Large-Scale Computing Capabilities. AFIPS Conference Proceedings, Spring Joint Computing Conference (Atlantic City, N.J., Apr. 18-20). Reston, VA: AFIPS Press, 30, 483-485.
Arvind: 2001, Quantum Entanglement and Quantum Computational Algorithms. Pramana J. of Physics 56: 357-365; available as e-print: quant-ph/0012116.
Arvind and N. Mukunda: 2000, A Two-qubit Algorithm Involving Quantum Entanglement. e-print: quant-ph/0006069.
Barenco, A., D. Deutsch, A. Ekert and R. Jozsa: 1995, Conditional Quantum Dynamics and Logic Gates. Phys. Rev. Lett. 74: 4083-4086, e-print: quant-ph/9503017.
Bernstein, E. and U. Vazirani: 1993, Quantum Complexity Theory. Proc. 25th ACM Symp. on Theory of Computing, 11-20.
Bremner, M.J., C.M. Dawson, J.L. Dodd, A. Gilchrist, A.W. Harrow, D. Mortimer, M.A. Nielsen and T.J. Osborne: 2002, A Practical Scheme for Quantum Computation with Any Two-qubit Entangling Gate. Phys. Rev. Lett. 89: 247902; e-print: quant-ph/0207072.
Calderbank, A.R. and P.W. Shor: 1996, Good Quantum Error-Correcting Codes Exist. Phys. Rev. A 54(2): 1098-1106.
Deutsch, D.: 1985, Quantum Theory, the Church-Turing Principle, and the Universal Quantum Computer. Proc. Roy. Soc. London A400: 97.
Deutsch, D.: 1986, Three Connections between Everett's Interpretation and Experiment. In R. Penrose and C.J. Isham (eds.), Quantum Concepts in Space and Time. Oxford: Clarendon Press, 215-225.
Deutsch, D. and R. Jozsa: 1992, Rapid Solutions of Problems by Quantum Computation. Proc. Roy. Soc. Lond. A 439: 553-558.
Dieks, D.: 1982, Communication by EPR Devices. Phys. Lett. A 92: 271.
DiVincenzo, D.P.: 1995, Two-bit Gates are Universal for Quantum Computation. Phys. Rev. A 51: 1015-1022.
Dodd, J.L., M.A. Nielsen, M.J. Bremner and R.T. Thew: 2001, Universal Quantum Computation and Simulation Using Any Entangling Hamiltonian and Local Unitaries. e-print: quant-ph/0106064.
Feynman, R.: 1982, Simulating Physics with Computers. Int. J. Theor. Phys. 21: 467-488.
Feynman, R.: 1986, Quantum Mechanical Computers. Found. Phys. 16: 507.
Grover, L.K.: 1996, A Fast Quantum Mechanical Algorithm for Database Search. Proceedings of the 28th Annual ACM Symposium on Computing, 212.
Jozsa, R. and N. Linden: 2002, On the Role of Entanglement in Quantum Computational Speed-up. e-print: quant-ph/0201143.
Nielsen, M.A. and I.L. Chuang: 2000, Quantum Computation and Quantum Information. Cambridge: University Press.
Rivest, R.L., A. Shamir and L.M. Adleman: 1978, A Method for Obtaining Digital Signatures and Public-Key Cryptosystems. Communications of the ACM 21(2): 120-126.
Shor, P.W.: 1994, Algorithms for Quantum Computation, Discrete Logarithms and Factoring. Proc. 35th Annual Symposium on Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, CA, 124.
Simon, D.: 1994, On the Power of Quantum Computation. In FOCS'94, 116-123. Journal version available at SIAM J. Comp., 1997, 26(5): 1474-1483.
Vandersypen, L.M.K., M. Steffen, G. Breyta, C.S. Yannoni, M.H. Sherwood and I.L. Chuang: 2001, Experimental Realization of Shor's Quantum Factoring Algorithm Using Nuclear Magnetic Resonance. Nature 414: 883-887.
Wootters, W.K. and W.H. Zurek: 1982, A Single Quantum Cannot be Cloned. Nature 299: 982-983.

Bart D'Hooghe, Departement Wiskunde, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussel, Belgium. E-mail: bdhooghe at vub.ac.be
Jaroslaw Pykacz, Instytut Matematyki, Uniwersytet Gdański, Wita Stwosza 57, 80-952 Gdańsk, Poland. E-mail: pykacz at delta.math.univ.gda.pl

From checker at panix.com Wed Sep 14 01:28:56 2005 From: checker at panix.com (Premise Checker) Date: Tue, 13 Sep 2005 21:28:56 -0400 (EDT) Subject: [Paleopsych] VDare: Steve Sailer: Charles Murray Re-enters Great American Inequality Debate Message-ID: Steve Sailer: Charles Murray Re-enters Great American Inequality Debate http://www.vdare.com/sailer/050828_murray.htm 5.8.28 [14]Steve Sailer Archive Charles Murray Re-enters Great American Inequality Debate By [17]Steve Sailer Social scientist [18]Charles Murray, the co-author of the [19]1994 bestseller [20]The Bell Curve, is perhaps America's premier [21]data analyst. His 1984 book [22]Losing Ground provided the [23]intellectual impetus for the successful 1996 welfare reform law. His 2003 work [24]Human Accomplishment is a delightful statistical romp among the most [25]eminent scientists and artists in global history. Now, Murray is back with a landmark essay, "[26]The Inequality Taboo," in the September issue of [27]Commentary. The printed text alone totals 7,500 words, and the web version contains over 10,000 additional words of notes and sources. If published just by itself, Murray's 1,500-word [28]Footnote 44 would rank as the crucial statement on the recent trends and future prospects of the white-black IQ gap. Known among his friends for his remarkable judiciousness, Murray is a rather sensitive soul. The foul calumny he has been [29]subjected to over the last eleven years must have been tiresome. Murray hadn't crafted an essay about IQ since his little known (but important) [30]1999 effort reporting the then latest results of the enormous military-funded National Longitudinal Study of Youth look at IQ and life outcomes.
This year, however, the [31]absurd denunciations visited upon Harvard president [32]Larry Summers for offering what Murray calls "a few mild, speculative, off-the-record remarks about [33]innate differences between men and women in their aptitude for high-level science and mathematics," persuaded Murray that intellectual discourse in America had decayed so shamefully that he needed to return to the fray. "[34]The Inequality Taboo" consists of three parts:
* A defense of Summers's discussion of why brainiac math nerds are more likely to be male than female;
* An updating on the last decade's worth of new findings on the white-black IQ gap;
* And a ringing call to Americans to start discussing honestly the group differences that we see every day: "What good can come of raising this divisive topic? The honest answer is that no one knows for sure. What we do know is that the [35]taboo has crippled our ability to explore almost any topic that involves the different ways in which groups of people respond to the world around them--which means almost every political, social, or economic topic of any complexity."
Murray suggests that both high-end male-female cognitive differences and the white-black IQ gap appear to be more or less "intractable"--he writes: "Whatever the precise partitioning of causation may be (we seldom know), policy interventions can only tweak the difference at the margins." Murray's defense of Summers is well done, although the [36]stupidity and [37]bad faith of the attacks on the Harvard president were so blatant that [38]lesser analysts managed to make most of Murray's points [39]last winter. One interesting fact that Murray doesn't mention is that the much-demonized IQ researcher [40]Cyril Burt was the first to determine that women were equal to men in intelligence. British psychometrician [41]Chris Brand writes: "[I]n 1912, the British psychologist Cyril Burt overturned Victorian wisdom by finding males to have the same average general intelligence as females (using the new [42]Binet tests from France), [and] this finding was replicated in countless investigations (and qualified by the observations that males have a wider range of IQs--thus producing more geniuses and more mental defectives--and that adolescent boys only temporarily lag behind adolescent girls in mental development)." The majority of psychometricians, including, most notably, [43]Arthur Jensen, support Burt's finding of mean gender equality. (However, [44]Richard Lynn has a paper coming out [45]arguing that men average a third of a standard deviation--or five points--higher in IQ.) Nor is there any dispute that, just as Summers said, at the [46]extreme right edge of the Bell Curve, from which Harvard's [47]math and science professors are drawn, there are more men than women. One of the most newsworthy aspects of "The Inequality Taboo" is Murray's view that the [48]white-black IQ gap may have narrowed slightly in recent years. According to Murray's article, the three most recent re-normings of major IQ tests came up with a mean white-black gap of 0.92 standard deviations, or about 14 points. That doesn't sound like much of a change from the one standard deviation (15 points) racial gap that IQ realists have been talking about for decades. But, in reality, they've been intentionally understating the traditional size of the difference. A 2001 [49]meta-analysis of eight decades of data suggested a 1.1 standard deviation gap (16.5 points, on the same 15-point standard deviation scale).
So, if this new 14 point gap found in the three recent re-normings holds up as more data comes in, we may have seen some significant progress on this massive social problem. Currently, though, the evidence remains far from clear. Murray writes in a [50]footnote: "Forced to make a bet, I would guess that the black-white difference in IQ has dropped by somewhere in the range of .10-.20 standard deviations over the last few decades. I must admit, however, that I am influenced by a gut-level conviction that the radical improvement in the political, legal, and economic environment for blacks in the last half of the 20th century must have had an effect on IQ." Murray is too honest, however, to skip over the other, more disturbing, possibility: that the [51]greater fertility of lower IQ women has had a dysgenic and/or "[52]dyscultural" effect. Murray has calculated that 60% of the babies born to black women who began participating in the National Longitudinal Study of Youth in 1979 were born to women with IQs below the black female average of 85.7. Only 7% were born to black women with IQs over 100. I hope that the improved nutrition, health care, and other environmental enhancements that have allowed African-Americans to come to dominate [53]basketball, [54]football, and [55]sprinting in recent decades have also driven up black IQ scores more than the tendency of intelligent black women to [56]remain childless has driven them down. But the overall situation remains murky. It needs more research than is currently being funded. Does part of the white-black IQ gap have a genetic basis? Murray suggests an experiment that might prove conclusive: "To the extent that genes play a role, IQ will vary by racial admixture. In the past, studies that have attempted to test this hypothesis have had no accurate way to measure the degree of admixture, and the results have been accordingly muddy. The recent advances in using [57]genetic markers solve that problem. Take a large sample of racially diverse people, give them a good IQ test, and then use genetic markers to create a variable that no longer classifies people as 'white' or 'black,' but along a continuum. Analyze the variation in IQ scores according to that continuum. The results would be close to dispositive." I bet, however, that Murray's critics won't rush to [58]fund this study and put their money where their mouths are. In his coda, Murray says: "Thus my modest recommendation, requiring no change in laws or regulations, just a little more gumption. Let us start talking about group differences openly--all sorts of group differences, from the visuospatial skills of men and women to the vivaciousness of [59]Italians and [60]Scots. Let us talk about the nature of the manly versus the womanly virtues. About differences between Russians and Chinese that might affect their adoption of capitalism. About differences between [61]Arabs and Europeans that might affect the assimilation of [62]Arab immigrants into European democracies. About differences between the [63]poor and non-poor that could inform policy for [64]reducing poverty." Sounds like the table of contents for VDARE.com! Murray concludes: "Even to begin listing the topics that could be enriched by an inquiry into the nature of group differences is to reveal how stifled today's conversation is... Let us stop being afraid of data that tell us a story we do not want to hear, stop the name-calling, stop the denial, and start facing reality." 
I'm sometimes asked why I come up with more new insights than the typical pundit. (Here's a [65]list of four dozen things I've either discovered myself, accurately forecasted, or scooped the rest of the press about). It's not because I'm smarter. It's because I just tell the truth. The great thing about truths is that they are causally connected to all the other truths in the world. If you follow one truth bravely, it will lead you to another. In contrast, lies, ignorance, and wishful thinking are dead ends. The Great American Inequality Debate is in one of those dead ends. Charles Murray--and we here at VDARE.COM--are trying to rescue it. [Steve Sailer [[66]email him], is founder of the Human Biodiversity Institute and [67]movie critic for [68]The American Conservative. His website [69]www.iSteve.com features site-exclusive commentaries.] References 14. http://www.vdare.com/sailer/index.htm 17. http://www.vdare.com/sailer/index.htm 18. http://www.isteve.com/2003_QA_with_Charles_Murray_on_Human_Accomplishment.htm 19. http://www.vdare.com/sailer/bell_curve_10yr.htm 20. http://www.amazon.com/exec/obidos/tg/detail/-/0684824299/vdare 21. http://olimu.com/Journalism/Texts/Reviews/HumanAccomplishment.htm 22. http://www.amazon.com/exec/obidos/tg/detail/-/0465042333/vdare 23. http://www.manhattan-institute.org/html/lm_pr_address.htm 24. http://www.amazon.com/exec/obidos/tg/detail/-/006019247X/vdare 25. http://www.amconmag.com/11_17_03/review.html 26. http://www.commentarymagazine.com/production/files/murray0905.html 27. http://www.commentarymagazine.com/production/files/murray0905.html 28. http://www.commentarymagazine.com/production/files/murray0905.html#_edn44 29. http://www.mugu.com/cgi-bin/Upstream/People/Murray/bc-crit.html 30. http://www.lrainc.com/swtaboo/taboos/cmurraybga0799.pdf 31. http://www.vdare.com/francis/050124_harvard_women.htm 32. http://www.vdare.com/sailer/050220_summers.htm 33. http://www.vdare.com/sailer/050306_summers.htm 34. http://www.commentarymagazine.com/production/files/murray0905.html 35. http://www.vdare.com/pb/gambler_dan.htm 36. http://www.vdare.com/sailer/050220_summers.htm 37. http://www.vdare.com/sailer/050306_summers.htm 38. http://www.isteve.com/2005_National_Post_Summers_Harvard.htm 39. http://www.isteve.com/2005_Education_of_Larry_Summers.htm 40. http://www.indiana.edu/~intell/burtaffair.shtml 41. http://theoccidentalquarterly.com/vol3no2/cb-boasa.html 42. http://www.psych.umn.edu/psylabs/CATCentral/Binet.htm 43. http://www.isteve.com/jensen.htm 44. http://news.bbc.co.uk/1/hi/education/4183166.stm 45. http://www.vdare.com/misc/mercer_050106_silly.htm 46. http://www.isteve.com/2005_National_Post_Summers_Harvard.htm 47. http://www.vdare.com/pb/purpose_of_tenure.htm 48. http://www.vdare.com/sailer/no_excuses.htm 49. http://www.questia.com/PM.qst?a=o&d=5001029349 50. http://www.commentarymagazine.com/production/files/murray0905.html#_edn44 51. http://olimu.com/WebJournalism/Texts/Commentary/MarchingMorons.htm 52. http://slate.msn.com/id/33569/entry/33726/ 53. http://www.vdare.com/sailer/march_madness.htm#hoops 54. http://www.vdare.com/sailer/limbaugh.htm 55. http://vdare.com/sailer/lynch_mob.htm 56. http://www.isteve.com/IsLoveColorblind.htm 57. http://www.isteve.com/2002_How_White_Are_Blacks.htm 58. http://www.vdare.com/sailer/pioneer.htm 59. http://www.vdare.com/guzzardi/basta.htm 60. http://www.vdare.com/sailer/fischer.htm 61. http://www.vdare.com/sailer/risky_transactions.htm 62. http://www.vdare.com/fulford/racial_rape.htm 63. 
http://www.mugu.com/cgi-bin/Upstream/murray-poor?embedded=yes&cumulative_category_title=Charles+Murray&cumulative_category_id=Murray 64. http://www.vdare.com/francis/culture_of_poverty.htm 65. http://isteve.blogspot.com/2005/08/return-of-second-istevecom-panhandling.html 66. mailto:steveslr at aol.com 67. http://groups.yahoo.com/group/iSteve-movies/ 68. http://www.amconmag.com/ 69. http://www.isteve.com/ 70. http://www.vdare.com/asp/donate.asp From checker at panix.com Wed Sep 14 01:30:44 2005 From: checker at panix.com (Premise Checker) Date: Tue, 13 Sep 2005 21:30:44 -0400 (EDT) Subject: [Paleopsych] Patricia A. Williams: The Fifth R: Jesus as Evolutionary Psychologist Message-ID: Patricia A. Williams: The Fifth R: Jesus as Evolutionary Psychologist Theology and Science, Vol. 3, Issue 2 (2005), pp 133-43. [Responses appended.] The historical Jesus seems to have known about human nature as described by evolutionary psychology. He addresses the dispositions of human nature that evolutionary psychology says are central: resources, reproduction, relatedness (kinship), and reciprocity. In doing so he answers Aristotle's question, how can human beings flourish? His answer opens a window onto the divine. Patricia A. Williams is a philosopher of biology and philosophical theologian who writes full-time on Christianity and science. Her recent books include, Doing Without Adam and Eve: Sociobiology and Original Sin (2001) and Where Christianity Went Wrong, When, and What you Can do about it (2001). Her mailing address is PO Box 69, Covesville, VA 22931. Her e-mail address is theologyauthor at aol.com; website www.theologyauthor.com. ----------------- I have argued in Doing without Adam and Eve: Sociobiology and Original Sin that Christian doctrines of original sin are only partly true and that most Christian doctrines of the atonement are flatly false.1 These doctrines depend on the historicity of Adam and Eve, and science shows us that Adam and Eve cannot be historical figures. In my more recent book, Where Christianity Went Wrong, When, and What You Can Do About It,2 a work based on historical Jesus scholarship, I argued further that Jesus did not perceive his own death as a sacrifice for sin; indeed, he did not consider sacrifices for the forgiveness of sin necessary. Since these arguments undermine doctrines previously considered central to Christianity, they appear to make Jesus irrelevant. However, to draw that conclusion would be wrong. I argue here that Jesus is relevant at least in part because he is an astonishingly perceptive evolutionary psychologist. As such, he answers Aristotle's famous ethical question, "how can human beings flourish?" and offers us a window onto the divine. To answer Aristotle's question or, indeed, questions in ethics in general, requires a theory of human nature. We need to know who we are before we can figure out how to flourish. Now, for the first time in history, we have a scientific theory of human nature. It is evolutionary psychology. Evolutionary psychology emerged in the early 1970s. Now the subject has its own textbook,3 a plethora of laborers in its vineyard, and considerable empirical support. Moreover, evolutionary psychology is rooted in sociobiology, a scientific theory that has had great heuristic value and made successful predictions about the social behavior of other animals for almost 40 years. The four central concepts of evolutionary psychology derive from sociobiology and they are well established. 
They are the four R's of human nature and much of the rest of nature as well: resources, reproduction, relatedness, and reciprocity. Human nature's four Rs People pursue resources. To survive, all organisms must do so. People reproduce. To continue the existence of their gene line, all mortal creatures must do so. These two Rs, resources and reproduction, are essential to the continued existence of the organic world. For sexually reproducing organisms like ourselves, sex is also essential (although not for every individual). Relatedness as such is, of course, essential too. Reproduction produces related organisms, by definition. Here, however, relatedness refers to inclusive fitness, the concept at the foundation of sociobiology. In classic evolutionary theory, an organism is fit if it survives to reproduce. In sociobiology, fitness moves from the individual organism to include its close relatives. In inclusive fitness theory, organisms help close relatives survive to reproduce. The classic case is parents helping dependent offspring grow to maturity. Biologists from Darwin on knew that organisms also help organisms other than offspring, but they did not know why. Sociobiology explained why. Organisms help close relatives because close relatives carry copies of the helper's genes. It turns out that the most accurate way to view evolution is from the point of view of the gene, and the evolutionary goal is to get as many copies of one's genes into the next generation as possible. An organism does this by reproduction, certainly, but also by helping most those relatives most likely to be carrying copies of its genes. Many organisms know who their close kin are?by smell, by sight, by sharing a common nest or mother, and by chemical cues. Of course, none knows about genes. None needs to. All an organism needs to be able to do is to recognize and help its relatives. Inclusive fitness theory permits helping behavior that is not ultimately egocentric or even a hidden form of egocentricity. The helping organism need not expect help in return for its aid because its reward is built into the situation. In return for helping relatives, the organism gets more copies of its genes into the next generation. Of course, it does not know this, so it cannot be behaving selfishly. Finally, for organisms that can recognize individuals and remember them and their deeds of help and harm, reciprocity becomes salient. Reciprocity entails equal exchange and may occur between organisms that are not kin. Reciprocity is egocentric: the helper expects help in return and in an amount equal to the help given. Few animals have the memories requisite to engage in reciprocity, but we do. We are creatures who reciprocate. Much of our lives are devoted to the exchange of goods and favors, and much of our justice system exists to enforce reciprocal relationships like contracts and marriages. The sense that reciprocity is justice underlies the legalization of the death penalty for murder. Few people doubt that the continued existence of the organic world on Earth is good and, so, logically the things that make its continuance possible are good. This entails that the four R's are good, but suppose we re-label them. If we engage in them too vigorously, the pursuit of resources becomes greed; of reproduction, lust; of relatedness, nepotism; and of reciprocity, justice for me and my group, to the exclusion of justice for you and your group. 
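The relatedness arithmetic behind these dispositions is usually summarized by Hamilton's rule, a standard result of inclusive fitness theory (not something Williams spells out here, but implicit in her account): helping is favored when r x B > C, where r is the genetic relatedness between helper and recipient, B is the reproductive benefit to the recipient, and C is the reproductive cost to the helper. With r = 0.5 for a full sibling, for example, an act costing the helper one offspring pays off, in gene copies, only if it gains the sibling more than two; for a first cousin (r = 0.125) the same act would have to gain the cousin more than eight.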
As the pursuit of resources is the most basic need of any organism, so greed is the simplest excess. It entails hoarding more than a person needs, sometimes to the direct detriment of the person, as when we eat to the point of obesity, but also to the detriment of society, as when the economic system is such that a few become ludicrously rich while the many remain poor. Lust is more complex, for it involves two sexes, and evolutionary psychology demonstrates that, because of their biological roles, male and female differ in their sexual desires. By definition, males produce smaller sex cells. This means that, with a few interesting but irrelevant exceptions, male organisms invest less in their offspring than females. In mammals such as ourselves, females make an additional investment, for they carry and nourish their offspring internally for a period, and then feed them milk their bodies make. With his small investment, the man can walk away from a pregnancy he has caused without great loss, even if his child dies, but the woman loses greatly if her child dies, for she has invested greatly. Usually her best evolutionary strategy is to continue investing until her child is able to take care of itself. The result of these differences is that men's best evolutionary strategy is to impregnate many women, whereas a woman's best strategy is to be impregnated by a healthy, prosperous man who will devote his resources to their children. The result after millions of years of evolution is lustful males and sexually cautious females, on average. Marriage complicates the picture further. If a man is to spend his adult years investing his resources in his wife's children (there are marriage systems where this is not the case, but they are not relevant here), he needs to be certain that they are also his children. Therefore, he must guard against his wife's adultery. Millions of years of evolution have produced jealous males who will punish women vigorously for adultery?sometimes brutally, sometimes fatally. Thus, evolution has burdened women doubly. On average, women invest more than men in offspring and on average men punish women more than they punish each other for adultery. Put simply, men lust more; women suffer more. Nepotism is even more complex, but it is easier to explain. Dependent children need their parents' special love and support in order to survive to adulthood, so special parental love is necessary and good. However, it does not end when the child becomes an adult; indeed parents continue to love their children more than they love other people's children?and therefore more than they love other people?for the life of the parent. However, special love for adult relatives easily becomes nepotism (unfair favoritism). If pursued systematically, it becomes tribalism and, reversed, may result in discrimination against or even murder of non-relatives or members of other tribes. It can turn into genocide. Most complex of all is reciprocity. Although Aristotle knew nothing of sociobiology, he built much of his ethical theory on giving others their due. Being familiar with sociobiology, Richard D. Alexander4 and Matt Ridley5 both explicitly developed ethical theories that place reciprocity at the center of ethics. Yet reciprocity is a double-edged sword. It may call for justice in the abstract and justice for others, but often it cries for justice for myself, for my kin, for my group. Reciprocity endorses an eye for an eye. It recommends vengeance. 
It may also give rise to paranoid vigilance that keeps asking whether the exchange has really been equal. Was I cheated? Again? Greed, lust, nepotism, and justice exclusively for oneself and one's group are the main vices that spring from the four R's. However, the four R's produce virtues too. The virtue that uses resources is generosity, the ability to give resources freely to others. From the desire for reproduction, love springs, the sort of love that sweeps ego aside and encourages the lover to enhance the beloved's welfare. The reproductive desire results in love for people who are not close kin. The virtue founded on relatedness is love also, a steady love for relatives that we can transfer from relatives to all others by symbolizing all people as related. Reciprocity can beget friendships and other relationships of equality, a personal caring that does not keep a ledger of gain and loss. It, too, might develop into generosity and love. Thus, evolution has given us enormous potential for both good and evil, and it has provided a wide range of choices, from egocentricity that seeks the destruction of others to generosity and love that seek to further their welfare. We are remarkably flexible and free. That is the primary reason we find it so difficult to answer Aristotle's question about how to flourish. If we have such a range of desires and can engage in such an enormous number of activities, then which are those that best promote our flourishing? Therefore, to answer Aristotle's question, we need to know about the four Rs, which are the central themes of human nature. We need to recognize their centrality in our psychological makeup and to know their potential to lead us into vice and virtue. Finally, we need to grasp how best to handle them so that all people may flourish. Without knowing anything about Aristotle?not to mention evolutionary psychology!?these things are precisely what the historical Jesus knows, discusses, and enacts. Jesus and the simplest R's The figure known as "the historical Jesus" is neither the Jesus of the Gospels, who is many contradictory persons, nor the "real" Jesus, whoever that would be. Whoever it was, we cannot recover him now. The historical Jesus is a scholarly reconstruction that most Jesus scholars base primarily on the synoptic Gospels: Matthew, Mark, and Luke. Some scholars also use the Gospel of Thomas, which was recovered with the discovery of ancient documents at Nag Hammadi in 1945. All Jesus scholars also use other historical materials that inform them about the situation in Palestine from about 200 BCE to 100 CE. These materials include Greek and Roman archives, the works of Josephus and other ancient scholars, the Hebrew Scriptures, Jewish intertestamental literature, the Dead Sea Scrolls, and the findings of archaeology. Most scholars exclude the Gospel attributed to John because they think it contains very little historical material going back to Jesus. As philosopher of historical methodology Raymond Martin notes in his book on the works of outstanding Jesus scholars, the historical Jesus scholars are professional historians doing expert work that meets the standards of modern scholarship.6 John P. Meier explains their methodology at length,7 and Funk, Hoover, and the Jesus Seminar explain it more succinctly while also laying out clearly how and why historians view the Gospels as they do.8 The Jesus I refer to is the scholars' reconstruction. 
The main effect of using their reconstruction here is to restrict the passages of scripture I discuss to those the scholars think go back to the historical Jesus. The historical Jesus perhaps says more about the use of resources than about any other subject. He speaks about resources in short sayings like "Do not worry about your life, what you will eat, or about your body, what you will wear" and "Consider the ravens: they neither sow nor reap, they have neither storehouse nor barn, and yet God feeds them" and "Consider the lilies, how they grow: they neither toil nor spin; yet I tell you, even Solomon in all his glory was not clothed like one of these" (Luke 12:22, 24, 27 NRSV). Jesus says God takes care of them and will take care of us. He says we spend too much time worrying and working over resources. He tells stories like that of the rich man and Lazarus (Luke 16:19-26). The rich man, who dressed and dined lavishly, ignored poor, sick Lazarus at his gate. After both die, the rich man finds himself in hell, staring at Lazarus in heaven. Here Jesus emphasizes our common humanity and the skewed distribution of resources in the ancient world, where the rich got rich by exploiting the poor. Those who neglect their less fortunate neighbors, who consider their own wealth a sign of their favor in God's sight and poverty and sickness signs of disfavor, are wrong. We live together, and we cannot flourish separately. Perhaps the most poignant of the stories scholars attribute to Jesus concerns an unnamed farmer who is blessed with such abundant harvests he decides to tear down his already full barns and build bigger ones to hold his burgeoning produce. Jesus calls him a fool, for he will die that night, and he cannot take his carefully conserved resources with him (Luke 12:16-20). Perhaps he should have considered the lilies and ravens or the suffering poor and used his wealth rather than hoarding it. The Gospels tell us little about what Jesus thought about sex. Moreover, Jesus scholars think the few sayings attributed to Jesus in the Gospels that deal with sex are not certain to go back to the historical Jesus. The main exception is Jesus' prohibition of divorce (Matt. 5:31-32). In Judaism in Jesus' day, women could not divorce their husbands, but husbands could divorce their wives, often for trivial reasons. At the same time, women depended on men for protection and income. A woman without a husband was in trouble, and a divorced woman was tarnished goods. Thus, Jesus' prohibition of divorce protected women. In a different culture, he might have offered different protection, say, equal pay for equal work or heavy penalties against men for spousal abuse. The point of the prohibition is not that divorce is wrong but that women need protection from the power men want to exercise over them, as evolutionary psychology suggests. From the point of view of legality, Jesus' prohibition of divorce is inconsistent with his usual laxness about laws; legal consistency would expect him to allow divorce. However, his prohibition of divorce is consistent with his tendency to protect the disadvantaged, whether the poor, the sick, children, or women. In prohibiting divorce, Jesus was protecting women. His concern for the equality of women appears again in a story about a woman caught in adultery, currently recounted in the Gospel according to John (7:53-8:11).
Although found in John, the narrative is not thought to be originally part of John's Gospel: the style is not John's and the passage is not in some of the earliest copies of John that we have. It also floats around in John and even shows up in early versions of other Gospels. Yet it is attested by early church historians and is consistent with other deeds and sayings known to come from the historical Jesus. Scholars think it probably goes back to Jesus. The narrative tells of some men bringing before Jesus a woman who, they claim, they have caught in the act of adultery. They ask whether she should be put to death by stoning, as the law required. Jesus replies that the sinless man should throw the first stone, and the men slowly depart, leaving the woman to live. This story fits evolutionary psychology perfectly. Evolutionary psychology says that men are more lustful than women are but, at the same time, they want to stop their women from committing adultery and may be brutal in order to do so. The story says the men caught the woman in the act. If so, they necessarily caught the man in the act as well, but he is nowhere to be found. The men want to punish only the woman, despite the fact that the Torah calls for the deaths of both parties (Lev. 20:10; Deut. 22:22). Jesus, knowing men's hearts, says okay, stone her if you are sinless, and the men retire, their own lust exposed. Again, Jesus is protecting women and making the battle between the sexes more equal than the men had wished. Less certain to go back to the historical Jesus is the story about the Samaritan woman with whom Jesus converses at the well in Samaria. The story is only in John's Gospel (4:5 ? 42). Yet it is consistent with what scholars know about Jesus, who behaved in ways his society disapproved. He talks to the woman in public, not acceptable behavior for a Jewish man, and she is a Samaritan, member of a group that Jews despised. It turns out that she has had several "husbands," that is, lovers, but although Jesus knows this, he does not withdraw from the conversation. He does not appear to condemn her illicit sexual behavior. The other almost certain item scholars know about Jesus regarding sex is that he was celibate. Among Jews, whom the Torah commanded to be fruitful and multiply (Gen. 1:28), Jesus' celibacy might seem unlikely. However, many of the prophets were celibate, and John the Baptist and those members of the Jewish sect of Essenes who lived in the wilderness probably were as well. Yet Jesus never praises celibacy, and his leading disciple, Peter, is married (Mark 1:30). Nothing more is known about Jesus' attitude toward reproductive relations except that he seems to have liked and protected children, and many women were among his followers and were active among the first generation of Christians, so he must have welcomed them into the group around him. Considering how aroused people get about sexual/reproductive relations the world over, Jesus seems amazingly calm and unperturbed. He calls a married man to be his leading disciple, yet remains celibate. He cares for children, yet has none of his own. He does not get excited about illicit sexual relationships, yet protects women from men's brutality toward them in the crucial issues of adultery and divorce. Indeed, concerning the two least complicated Rs, resources and reproduction, Jesus advises us to be at ease. About resources, he suggests we behave more like other animals, not worrying so much about the future but enjoying the fruits we have today. 
The prayer attributed to him says, "Give us this day our daily bread" (Matt. 6:11 NRSV) rather than asking for a good harvest to store away. Yet Jesus is not an ascetic. On the contrary, he parties enough to be accused of drunkenness and gluttony (Matt. 11:19). Jesus seems to steer a middle course, and this suggests that he is insufficiently attracted to this R either to pursue or to reject it. He uses resources without being possessed by them. His attitude toward reproduction is similar, except that he seems to have studied this chapter of his evolutionary psychology textbook even more carefully. Knowing of men's lust and their desire to control women's reproduction, brutally if necessary, he tries to protect and help women, making the reproductive relationships equal. Other than that, his attitude seems to be "take it or leave it." Again, he is insufficiently attracted to this R either to pursue or to reject it. Jesus and the other R's To understand Jesus on the other two R's?relatedness and reciprocity?requires some knowledge of Judaism in Jesus' day. The Jews had two ancient beliefs. They believed God had chosen them out of all the nations on Earth to be God's special people, and they believed God had promised them a particular piece of land, that it was God's holy land, and that they were to live on it and to cultivate it as their own. Yet in the first century, Jews were scattered across the Roman Empire and beyond, and Rome was sovereign over the holy land where Jews thought only God should reign. Most Jews who cried for justice wanted to drive Rome out of God's land, their land. A newer belief about chosenness invaded Judaism about the time of the Exile. Some Jews thought God had chosen only a remnant of the Jewish people and had doomed all other Jews. This remnant theology often included apocalyptic eschatology, the idea that the end of the age was near and that it would culminate in holy and devastating war led by God's messiah and fought by his angels and the holy remnant against the Romans and the condemned Jews. In the end, God would establish justice, that is, God would vindicate the remnant and destroy the other Jews and the gentiles who did not convert to worship of the Jewish God. Moreover, all twelve Jewish tribes, including the ten that had disappeared centuries ago, would assemble in the holy land along with the (good) Jews from the Diaspora. These exclusivist and violent beliefs caused three centuries of sporadic civil war among the Jews, when Jews murdered other Jews and called it God's justice. The civil wars culminated in the Roman destruction of the Temple in 70 CE. Jewish themes, then, were land (resources), kinship (relatedness), and justice (reciprocity seen as self-vindication), all under the aegis of the one and only God. (Other Jews had apparently been reading evolutionary psychology, too.) Given Jewish circumstances, these themes provided a recipe for self-destruction. Self-destruction arrived via civil war and Roman exasperation. Jesus stepped into this stew as an itinerant preacher. His career began with John the Baptist (Mark 1:1 ? 11) who was preaching by the Jordan River, announcing the forgiveness of sins through baptism. In this, John was not following Torah, which commanded sacrifices in the Temple for the forgiveness of sins. Jesus' indifference toward the Temple, symbol of Jewish chosenness, relatedness, and covenantal reciprocity with God, implies that he was not attracted to these Jewish themes. 
Relatedness, in particular, was not high on Jesus' list of sacred subjects. In an extremely well attested incident (Mark 3:31-35), Jesus was talking with his close disciples and friends when his mother and brothers approached and asked to see him. When Jesus' disciples told him his family was outside, Jesus not only refused to see them but also disowned them. He stated, instead, that his friends were his family. In so far as Jesus was unmarried, he also rejected the relatedness that comes with children and in-laws. As a good evolutionary psychologist, he knew that families are naturally hierarchical and promote nepotism. Jesus wanted to emphasize equality and the common kinship of all people. His emphasis on our common kinship stands in stark contrast to the Jewish claim that all Jews were related and special in God's sight because all were the offspring of one man, Abraham. Abraham, they claimed, was their father. Jesus referred to God as father, not Abraham. God, of course, in Jewish theology, is creator of all, the father of all people, not merely the Jews. Jesus tells stories about fathers in which the father represents God. In the story of the prodigal son (Luke 15:11-32), a younger son asks for his inheritance before his father dies, then goes off and violates Jewish law, finally tending pigs, animals the Torah calls unclean. He even envies the pigs. When, broke and hungry, he returns home hoping to become one of his father's servants, his father embraces him, forgives him, and throws a feast for him, much to the chagrin of the prodigal's elder brother who has faithfully remained home and served their father well. If this father represents God, Jesus is implying that God loves and saves the unfaithful as well as the faithful. This is not remnant theology. It is not reciprocity, either. Indeed, the father seems generous to a fault. Jesus seems well aware of the human desire for reciprocity and its offshoot, justice, and he constantly discourages seeking them. Well known are his short sayings denigrating the Torah reciprocity of an eye for an eye. The sayings suggest, instead, that if people batter one cheek, turn the other; if they sue for a coat, give them a cloak also, and if they force a person to go a mile, go two (Matt. 5:38-41). These statements all reject reciprocity. Jesus also tells stories that portray reciprocity and justice negatively. The prodigal son is one such story. It depicts the elder brother as wanting justice. He is angry about his father's embrace of his brother, even after the father assures him that all the father has is his (Luke 15:31). He repudiates and perhaps envies the father's generosity even after the father tells him that he will lose nothing by it. An even more relevant story is that of the day laborers (Matt. 20:1-15). Jesus tells of a landowner who hires some laborers early in the morning and promises them a day's wage (a fair wage, probably, since they accept it). He hires others later, some as late as evening. When time comes to pay the laborers, he pays the late-hired a whole day's wage, and those hired earlier complain. The landowner wants to know what their complaint is. They received the agreed wage. The landowner did not cheat them. Nevertheless, they feel resentful. They expected reciprocity to be the rule the landowner would use to pay his workers. Instead, the landowner displayed generosity, and his generosity angered them. First century history tells who the angry figures represent.
They typify the remnant theologians and their followers who expected God to repay their faithfulness with victory and vindication and condemn all the unfaithful, which is to say, all the Jews who disagreed with them. Repeatedly, Jesus rejected reciprocity in favor of generosity and forgiveness. The rabbis had suggested that a person should be forgiven three times. The Gospels report that Jesus recommended seven (Luke 17:4), a symbolic number standing for wholeness or completion. The most extreme report has Jesus saying to forgive 77 times (Matt. 18:22). A figurative doubling of completion or infinity seems to be implied. This is probably Matthew's emendation, but the idea of infinite forgiveness apparently goes back to Jesus. Jesus was wiser than those who want to make ethics center on reciprocity. He knew that placing reciprocity at the center of ethics generates ruinous results. Reciprocity justifies vengeance. It stifles generosity. It encourages self-centeredness, self-righteousness, and paranoia. Borrowing from the Torah, Jesus recommends a better way: love your neighbor; love, he says, is the heart of the Torah and the prophets (Matt. 22:39 ? 40). Love is generous; love forgives; love helps others and casts out fear. In contrast, reciprocity is egocentric. Placing it at the center of ethics encourages people to guard their own interests and mistrust other people. In doing so, it leaves them lonely and fearful, and therefore they seek groups that emphasize conformity, enforce strict rules, and proclaim their own self-described goodness while denouncing outsiders' evil. Jesus knew such people and such groups?the remnant theologians and their followers. He looked around him and saw that a strong emphasis on reciprocity does not lead to a flourishing life. Yet Jesus embraced equality for the poor and powerless. The concept of egalitarianism might spring from reciprocity, but they are not the same. Jesus seems to think that the rich might give to the poor without asking return, and husbands might treat their wives with the same equality they offer to their fellow men. Jesus and the Divine To say that Jesus was an excellent evolutionary psychologist is not to claim that he knew anything about evolution. He was probably a typical Palestinian Jew of his time in his knowledge of the world. He would have known the Torah said God created the world in six days and created Adam and Eve as the first human beings. Jesus probably would not have known much history except as the Hebrew Scriptures represent it, and he would have known no science. Nonetheless, he had remarkable insights into human nature as evolutionary psychology discloses it and profound solutions on how to cope with it, based on compassion, especially for the powerless. His slogan might have been "equality, not reciprocity," which amounts to generosity by those who have power and wealth to those who have neither. Jesus represents God's generosity this way: God gives without requiring return. The Gospels tell us that the divine touched Jesus at his baptism and, after that, he exorcised the possessed, healed the sick, and forgave sinners. Josephus, too, says Jesus was "a doer of wonderful works." 9 Wonderworkers were said to work by divine agency, and there seems little doubt that Jesus was close to God, filled with the divine, a "spirit person," to use historian Marcus Borg's term. Jesus himself felt he was close to the divine. He prayed frequently, sometimes all night, and he called God "father." 
His insights into human nature and his solutions to the problems it poses for human flourishing probably came from the divine source. If so, Jesus may be for us a window onto the divine. Jesus spoke of love, generosity, and forgiveness. In doing so, he spoke of the nature of God. Christian atonement theology has claimed that Jesus died on the cross as a sacrifice for sins. Jesus, it claims, died to satisfy God's need for justice (a God, it also claims, who has no needs). An innocent man had to die to pay for the sins of the guilty because God required that justice be done. Such is atonement theology. It does not take much insight into the nature of justice to grasp the injustice of killing the innocent to forgive the guilty. The God who allegedly commanded such a deed ruled by reciprocity and had a stingy soul. This is not Jesus' God. Jesus says that God is generous, so generous it angers those whose ethics rest on reciprocity. God is not a God of reciprocity, of contracts and covenants. Nor, according to Jesus, does God demand sacrifice for the forgiveness of sins. The Gospels never show Jesus sacrificing at the Temple. They introduce him as a disciple of John the Baptist, who does not sacrifice at the Temple either. Instead, John baptizes for the forgiveness of sins. Jesus, too, forgives sins without requiring sacrifice, or even baptism. Jesus did not think God requires sacrifices in order to forgive sins. Indeed, Jesus says God gives us what we need when we ask for it. In one of his stories, he tells of an evil judge whom a widow importunes so strenuously he decides her case in her favor (Luke 18:1-5). The story is about an evil judge, not a good one, and yet when asked, he gives what is wanted. How much more, then, would Jesus' God, a generous, fatherly God, give what we ask, including forgiveness? In summary, the historical Jesus was an evolutionary psychologist who told us how to flourish in a world where human beings evolved, yet where divinity pervades human life. We flourish, he says, not by egocentricity, with its greed, lust, nepotism, and self-seeking justice, but by love, with its generosity and forgiveness. Since greed and generosity, egocentricity and love arise from the four R's, we have the capacity to choose greed or generosity, egocentricity or love. Jesus asks us to choose love, to act like God rather than like evolved creatures caught in evolutionary overdrive. Jesus says not to be so self-concerned, so harried, and so vigilant. The fifth R, he says, is "Relax." Notes 1. Patricia Williams, Doing without Adam and Eve: Sociobiology and Original Sin (Minneapolis: Fortress Press, 2001). 2. Patricia Williams, Where Christianity Went Wrong, When, and What You Can Do About It (Philadelphia: Xlibris, 2001). 3. David M. Buss, Evolutionary Psychology: The New Science of the Mind (Boston: Allyn and Bacon, 1999). 4. Richard D. Alexander, Darwinism and Human Affairs (Seattle: University of Washington Press, 1979). 5. Matt Ridley, The Origins of Virtue: Human Instincts and the Evolution of Cooperation (New York: Penguin Books, 1996). 6. Raymond Martin, The Elusive Messiah: A Philosophical Overview of the Quest for the Historical Jesus (Boulder, Colorado: Westview Press, 1999). 7. John P. Meier, A Marginal Jew: Rethinking the Historical Jesus. Vol. 1, Roots of the Problem and the Person (New York: Doubleday, 1991). 8. Robert W. Funk, Roy W. Hoover, and the Jesus Seminar, The Five Gospels: The Search for the Authentic Words of Jesus (San Francisco: Harper San Francisco, 1993). 9.
Flavius Josephus, Antiquities of the Jews, III:3, in The Complete Works of Josephus (Grand Rapids, Mich.: Kregel Publications, 1981). --------------------------------- A Response to Patricia A. Williams' "The Fifth R: Jesus as Evolutionary Psychologist" Richard F. Carlson and Jason N. Hine We wish to thank The Rev. Bill Maury-Holmes for his insightful suggestions in the preparation of this manuscript. Richard F. Carlson is Research Professor of Physics at the University of Redlands. He is editor of the book Science and Christianity: Four Views (2000). Jason N. Hine has worked in the area of science and Christian faith for a number of years. Recently he co-led the seminar, "What Can We Teach Our Children About Dinosaurs?" Patricia A. Williams' essay centers on her assertion that the "historical Jesus" (as defined by the work of the Jesus Seminar) exhibited personal characteristics consistent with an understanding of human nature as described by evolutionary psychology. This relatively new enterprise describes human characteristics in terms of David Buss' four "R's": Resources, Reproduction, Relatedness (kinship), and Reciprocity.1 After showing that each R generally contains a spectrum of characteristics, Williams attempts to identify Jesus' position along each spectrum by citing incidents and sayings from the Gospels. We have a few quibbles that we will mention here but not pursue. Williams states that she uses the results of the Jesus Seminar in her characterization of Jesus.2 Yet over half of her Gospel references have been given gray or black classifications by the Seminar (gray or black implying that the sayings in question are most likely not Jesus' words). Two other quibbles relate to Williams' statement that Jesus did not perceive his own death as a sacrifice for sin and her comments on Christian atonement theories. Each of these is worthy of a response, but we have chosen to concentrate on Williams' evaluation of Jesus' character in terms of Buss' four R's. We see Williams' essay as a useful, interesting, and fanciful way to view Jesus. However, we wish that she had followed her own ideas just a bit further. By successfully demonstrating how Jesus' character is consistent with evolutionary psychology, Williams places him in a box of dimensions specified by the four R's. We feel that Jesus' character surpasses the four R's in a number of remarkable ways. While Williams briefly explores intimations regarding the divinity of Jesus in the final section of her article, we find her presentation to be inadequate. Our goal is to highlight areas where we would like to have seen Williams take her ideas further. We will refer to much of the same evidence as used by Williams from the Gospels. In some cases, we provide additional evidence from the Gospels, which for the most part falls under Jesus Seminar categories of red or pink (most likely the sayings of Jesus) or occasionally gray (probably not said by Jesus but close to his ideas).3 As does Williams, we will use black references in a very limited way (black, in the opinion of the Jesus Seminar, implies that Jesus did not say it, as it represents the perspective or content of a later or different tradition).4 In doing so we hope as much as possible to compare oranges to oranges (maybe we should say red grapefruit to red grapefruit). Our understanding is that, when presented with earthly problems, Jesus succeeded in incorporating God's will in his response.
Another way of putting this is that Jesus' response was both horizontal (human to human) and vertical (human to God). As indicated by Williams, Jesus' response to every situation was based on "the unmatched quality of God's love, generosity, and forgiveness." 5 The problem is we feel that Williams could have done more to demonstrate this when considering his responses to people or situations. In her discussion of Jesus and his attitudes to the issue of Reproduction (one of the four R's), Williams cites the account of the adulterous woman.6 The religious leaders brought the adulterous woman to Jesus thinking that there were only two possible ways he might respond: either uphold the Law and condemn the woman to death, or allow her to live and thereby break the Law. However, Jesus' response did not come from among this set; rather his action was profoundly perceptive, wise, and loving. Williams claims that Jesus' intention was to provide protection for women by exposing the lust in the woman's accusers. We agree that this is the main thrust of the narrative. Clearly, Jesus cared for and forgave the adulterous woman, and one may infer from this that Jesus cares for all women. However, more than this, Jesus' response also demonstrated care for the woman's accusers: he did not seek to humiliate them, but rather his response served as an invitation to engage in serious self-reflection, and thus the door was left open for any of the accusers to come to Jesus later. Further, Jesus' action here would have likely had a similar effect on each woman and man in the crowd. Even today, his response invites personal reflection, illuminates our shared struggle with sin, and demonstrates the love of God through what is termed "grace", the free and divine gift of mercy, acceptance, and favor. Hence, we feel that Jesus' approach stretches the scope of what evolutionary psychology considers possible. The next R we examine is Relatedness or kinship. Williams, in asserting that "Relatedness, in particular, was not high on Jesus' list of sacred subjects," 7 cites an "extremely well-attested incident (Mark 3:31-35)",8 a passage rated as gray by the Jesus Seminar scholars. Here Williams sees Jesus as rejecting his family. Referring to his family, she states, "Jesus not only refused to see them but also disowned them." 9 Yes, it is possible to infer from this that Jesus is rejecting his family here. However, our understanding, supported by Williams herself several sentences later, is that Jesus was expanding on what he considers his true family to be: "Whoever does the will of God is my brother and sister and mother" (Mark 3:35 NRSV). Elsewhere in Mark 7:9-13 (black by the Jesus Seminar) Jesus affirms the command to "honor your father and mother" (Matt. 19:19, gray) by condemning the Pharisees' and scribes' use of the Corban offering in order to relieve themselves of the obligation to support their parents. Like Williams, in the Gospels we too see a consistent theme of Jesus' concern for and acceptance of society's rejects, e.g. the blind beggar, the Samaritan woman, the prostitute, tax collectors, in short, the "lepers" of that society. We conclude that an expanded view of relatedness was very high on Jesus' list of sacred subjects, again in line with but stretching the conceptual boundaries of evolutionary psychology in a way that provides us a glimpse of God's all-inclusive love. We next turn to Williams' treatment of the story of the prodigal son (Luke 15:11-32, pink by the Jesus Seminar)
and to other Gospel examples she cites in her discussion of another R, Reciprocity.10 Here we affirm Williams' conclusion that, in terms of relationships with others, Jesus rejected reciprocity and instead constantly exhibited extreme generosity, forgiveness, friendship, and love in his teaching and his relationships with a wide array of people. In terms of the fourth R, Resources, we disagree with Williams' characterization of Jesus as being "at ease" and "not worrying"11 about resources. On the contrary, we see Jesus as one who was concerned about the wise and generous use of resources (e.g. see Matt. 25:14-28, pink, and Mark 10:17-22, gray). We feel that Jesus' command to "not worry" (Luke 12:29, gray) about resources is to be understood as an important step in seeking God's kingdom (Luke 12:31, black), a proper prioritization of Relatedness vs. Resources, not, as Williams puts it, a general indifference toward resources on the part of Jesus. In closing, we feel that Patricia Williams is addressing a topic of crucial importance: understanding the person of Jesus. This is crucial, because we feel that our clearest understanding of God is through the person of Jesus. In addition, we feel Williams is moving in a helpful direction as she relates the insights of evolutionary psychology to the historical Jesus in a way we see as light-hearted, yet full of opportunities for greater insight into the divine. Jesus not only goes beyond the horizontal (human to human) categories of the four R's, but he also exhibits a vertical (human to God) aspect of his character that stretches the boundaries of evolutionary psychology toward the positive extremes exhibited by God through Jesus. We hope that Williams and others will continue to explore these new ideas further. Notes 1. David M. Buss, Evolutionary Psychology: The New Science of the Mind (Boston: Allyn and Bacon, 1999). 2. Williams' essay above, 136. 3. Robert W. Funk, Roy W. Hoover, and the Jesus Seminar, The Five Gospels: The Search for the Authentic Words of Jesus (New York: Macmillan, 1993), 36. 4. Ibid. 5. Williams essay, 142. 6. Ibid., 138. 7. Ibid., 140. 8. Ibid. 9. Ibid. 10. Ibid. 11. Ibid., 139. --------------- Was Jesus an Evolutionary Psychologist? Joshua M. Moritz Joshua M. Moritz is a Ph.D. student in Theology and Science at the Graduate Theological Union, Berkeley, and Managing Editor of Dialog: A Journal of Theology. His undergraduate and professional background is in evolutionary biology and paleoanthropology. In her article "The Fifth R: Jesus as Evolutionary Psychologist," Patricia Williams casts Jesus in the role of a bio-psychological counselor and seer whose understanding of human nature turns out to be precisely that of the modern field of evolutionary psychology. There is no latent anachronism here, but rather, Williams is pointing out that the Jesus of history understood what makes human beings get up in the morning, what drives us, and what makes us tick. According to Williams, evolutionary psychology posits four primary factors that motivate and orient the vast majority--if not all--of human behaviors: resources, reproduction, relatedness, and reciprocity. The historical Jesus, as she understands him, addressed each of these areas of human life, and in so doing revealed a remarkable intuition, which parallels the findings of sociobiology and evolutionary psychology.
Such intuition, concludes Williams, was indeed a product of Jesus' connection with the Divine, and through this connection, he revealed to his followers the egalitarian nature of God. His teachings about this God may empower human beings in the present to establish egalitarian communities and enable them to flourish. In this article, I wish to briefly respond to Williams' essay and her use of evolutionary psychology and sociobiology as they relate to theological anthropology. To begin with, I want to express my appreciation for Williams' work in this area. She has consistently pointed out the difficulties which modern evolutionary biology poses for many classical Western Christian doctrines--such as atonement theology's reliance on "the Fall without the Fall," 1 the doctrine of original sin based on the combination of Lamarckian inheritance and a historical fall, and the problem of evil.2 These problem areas, which Williams develops should be preeminent as constructive theology continues to strive to make itself intelligible in a world dominated by scientific self-understanding. I also am grateful for Williams' attempts to integrate constructively the work of sociobiology and evolutionary psychology into theological anthropology, and her subsequent reformulation of various ancient Christian doctrines in light of these disciplines, which pronounce much on human nature. Her theological engagement with these bio-psychological fields is refreshing because there has been a tendency in the humanities to make light of the findings of evolutionary psychology and sociobiology, and to construct their ideas into caricatures and straw men that are then easily vanquished.3 This happens even though many scholars in the Philosophy of Biology maintain that sociobiology and evolutionary psychology are legitimate extensions of the Neo-Darwinian theoretical framework.4 That being said, I would like to raise several questions and concerns with Williams' essay and her related work. While I agree with Williams in her acceptance of the basic guiding principles of evolutionary psychology--that it is very likely that certain heritable and adaptive human behaviors have been honed by natural selection, and that there are specific cognitive mechanisms resulting from evolution by natural selection which underlie such human behaviors--I must question Williams' uncritical acceptance of the opinions that are championed by these disciplines. Williams treats evolutionary psychology and sociobiology as though they 'have arrived' despite the large number of sympathetic, yet valid critiques of these fields.5 Among other things sociobiology and its descendent evolutionary psychology have been criticized on account of their genic selectionism, genetic reductionism, determinism, and atomism,6 their assumption of massive modularity in the brain, their hyper-adaptationism7 and their confusion regarding moral categories.8 I have not found any citation of such criticisms in Williams work on this subject. She only goes so far as to mention that there is controversy surrounding sociobiology "because it applies to us," and "because some sociobiologists have been inept with metaphors, sowing considerable confusion."9 There is no discussion of the more fundamental criticisms of the methodological and biological assumptions of evolutionary psychology and sociobiology's practitioners. 
Evolutionary psychology and its predecessor sociobiology claim that humans have a generic nature and that this nature is rooted in our biology--particularly in our genes. Since our genes, as they have evolved to adapt to a specific environment, are the foundation and unconscious directors of our behavior, such behaviors should be seen in light of the ultimate evolutionary purpose and goal of our genes--namely "to get as many copies of one's genes into the next generation as possible." 10 Contained in this ambiguous behavioral inheritance bequeathed to us by our genes are predispositions in the vast majority of humans towards murder, infanticide, child abuse,11 divorce, infidelity,12 pornography,13 xenophobia,14 treating women as commodities,15 rape,16 and even genocide.17 To ensure that each gender gets their maximal fitness reward calculated in genes that make it to the succeeding generation, men are by nature sexually promiscuous and competitive, and women are by nature "coy" and parentally nurturing.18 When our "selfish genes" are in the driver's seat, such is to be expected, and while exceptions may exist, they are just that--exceptions. Cue Jesus. Into such a world of ethically sordid genetic pre-dispositions embodied in immoral animals enters the historical Jesus who, in effect, tells humans to live contrary to their genetically inherited nature. Jesus calls us to "deny ourselves" and in so doing deny to our genes the fitness rewards which they so fervently long for. For the sake of the Kingdom of God, we must be willing to minimize our inclusive fitness and forsake those who share the greatest percentage of our own genes. In fact, our genes are not to be seen as more important than the genes of a total stranger--even those of an unrelated Samaritan or Gentile. We are to spend our precious resources on those who offer us no fitness benefits whatsoever: widows past reproductive age, orphans who are not our kin, the poor who cannot benefit us materially, the sick--who may even harm our own health and fitness potential, and prisoners--who cannot be trusted to reciprocate. Men are called to resist the urge to "diversify their genetic portfolio" and women are called to trust in God for their material resources rather than in their husbands or mates.19 Humans are, in fact, asked to adopt an extremely unstable evolutionary strategy by throwing out reciprocity all together--" give to everyone who asks of you, and whoever takes away what is yours, do not demand it back and lend, expecting nothing in return." 20 The road of the cross which the life of Jesus paves for those who would follow, is a sure evolutionary dead-end--the ultimate self-extinction event. Williams says that such behavior and the wisdom of Jesus "fits evolutionary psychology perfectly." 21 but what does she mean by this? If she means that Jesus understands human nature as it is perceived at the tail end of our evolution and that he calls us to resist the very same dark tendencies bequeathed to us by evolution, then she is right. Christian morality demands a "revolution or a reversal of those priorities" which are given to us by nature.22 Where does such moral courage come from if it is not within human nature? Is it pure grace from the realm of the Divine that actually alters our evolved nature? Or, is it an effort of the will which is transformed once one is encountered by the life and example of Jesus? 
Either answer poses a dilemma for evolutionary psychology because both behavioral scenarios are outside of its explanatory purview. If we are altered by super-nature, then the categories of nature are no longer adequate. Alternatively, if human nature has enough behavioral wiggle room so that humans may act in ways which are not genetically predisposed, and even in ways directly contrary to our genetic predispositions, then such evolutionary psychological talk of genetic predispositions loses its scientific scope and robustness. Evolutionary psychology seeks to explain altruistic behavior in terms of inclusive fitness in the context of Evolutionarily Stable Strategies, but such explanations lose their relevance when the object of investigation thrusts aside the "things of this world" to pursue an eschatologically stable strategy instead. Deeper than this dilemma, though, is evolutionary psychology's foundational assumption of the "selfish genes" view of evolved biological reality. This "gene's-eye view" of evolution, which Williams presupposes,23 is far from being a safe assumption. In fact, this is precisely the area where many biologists from various sub-disciplines find the most intractable problems relating to the future direction and success of evolutionary research.24 There is a growing consensus that there is a variety of levels of selection in evolution.25 The notion that "naked genes" are the target or primary level of selection, while at first broadly accepted, has since then been "severely criticized, and even its original supporters have now moderated their claims." 26 Such genic selectionism, which is fundamental to the explanatory framework that under-girds evolutionary psychology and its theory of inclusive fitness, is also called into question by genetic pleiotropy27 and "the interaction of genes controlling polygenic components of the phenotype."28 Furthermore, investigations into the roles played by symbiosis,29 self-organization,30 neutral evolution,31 historical and developmental constraints,32 epigenetics,33 and generic principles in evolution34 have demonstrated that other forces are at work both in the generation of evolutionary novelty, and the way in which biological information is inherited. Natural selection and the genocentrism it entails is no longer the sole fiddler bowing the tune of evolutionary change, but now appears to be joined by a symphony of other evolutionary mechanisms each playing at different tempos and in different keys. Conclusion These developments, when taken together, pose a serious obstacle to the future advance of any general theory of evolutionary psychology. While an evolutionary psychology is certainly still possible it will have to be a much mediated evolutionary psychology which can no longer speak of a generic human nature as such, but rather, must aim to describe only elements of human nature that have a definite genetic corollary. Occasions of altruism in nature will no longer create a research problem for this epistemically humbled and less imperialistic evolutionary psychology, and the moral "performance gap"35 between what we are and what we ought to be will lose much of its mysterious quality when considered within a thoroughly supplemented and expanded Neo-Darwinism. Was the Historical Jesus an evolutionary psychologist? He certainly knew enough about human nature to know that selfish motives--if not always selfish genes--orient much of our behavior. 
Jesus was also familiar, however, with the nature of the Divine, and he knew enough about God's nature to recognize that the One in whose image humans have been made is not far from us when we walk by faith. Notes 1. A phrase coined by Robert John Russell. For Russell's discussion of the problem of "Fall without the Fall" see Robert J. Russell, "Theology and Science: Current Issues and Future Directions," 2000, Part II, Section E, Redemption, Evolution and Cosmology, http://www.counterbalance.net/rjr/erede-body.html. See also Robert J. Russell, "Is Evil Evolving?" Dialog: A Journal of Theology 42:3 (Fall 2003): 311. For Williams' discussion see Patricia Williams, Doing without Adam and Eve: Sociobiology and Original Sin (Minneapolis: Fortress Press, 2001); and Patricia Williams, "Sociobiology and Original Sin" Zygon 35:4 (Dec 2000). 2. Patricia Williams, "Evolution Sociobiology and the Atonement," Zygon 33:4 (1998); Patricia Williams, "The Problem of Evil: A Solution from Science," Zygon 36:3 (2001). 3. Such critiques of sociobiology and evolutionary psychology where the actual views of these disciplines are exaggerated or misrepresented are, Richard C. Lewontin, Steven P. R. Rose, and Leon J. Kamin, Not in Our Genes: Biology, Ideology, and Human Nature (New York: Pantheon Books, 1984); and Hilary Rose and Steven P. R. Rose, Alas Poor Darwin: Arguments Against Evolutionary Psychology (New York: Harmony Books, 2000). For a critical review of the latter which points out several misreadings of evolutionary psychology see Daniel Jones, "Alas Poor Higgs," British Medical Journal, 322 (24 March, 2001), 740ff. http://bmj.bmjjournals.com/cgi/eletters/322/7288/740#13672 . 4. See, for example, Michael Ruse, "I see sociobiology, the study of animal social behavior from an evolutionary perspective, as a natural and an unforced growth and development from orthodox and established neo-Darwinian evolutionary biology. This being so I suggest that because neo-Darwinian biology is a genuine and fruitful branch of science, the respect that it deserves should automatically be transferred to sociobiology." Quoted in Peter Saunders, "Sociobiology: A House Built on Sand" in Evolutionary Processes and Metaphors, Mae-Wan Ho and Sidney W. Fox eds. (New York: Wiley, 1988) 290. 5. See Kim Sterelny and Paul E. Griffiths, Chapter 13 in Sex and Death: An Introduction to Philosophy of Biology (Chicago: University of Chicago Press, 1999); Philip Kitcher, Vaulting Ambition: Sociobiology and the Quest for Human Nature (Cambridge, Mass.: MIT Press, 1985); The Evolution of Minds: Psychological and Philosophical Perspective, Paul Davies & Harmon Holcomb, III, eds. (Norwell, MA: Kluwer Academic Publishers, 2001); Jaak Panksepp and Jules B. Panksepp, "The Seven Sins of Evolutionary Psychology," Evolution and Cognition, 6:2, 108 ; Elisabeth A. Lloyd, "Evolutionary Psychology: The Burdens of Proof", Biology and Philosophy 14 (1999): 211 ? 233; Paul E. Griffiths, 'Evolutionary Psychology' in The Philosophy of Science: An Encyclopedia, Sahotra Sarkar and Jessica Pfeifer eds. (New York: Routledge, 2005). For a criticism that aims at some of evolutionary psychology and sociobiology's more foundational assumptions see Peter Saunders, "Sociobiology: A House Built on Sand." 6. See David Depew and Bruce Weber, Darwinism Evolving: Systems Dynamics and the Genealogy of Natural Selection (Cambridge, MA: MIT Press, 1995) 374 ? 378. 7. Stephen J. Gould, "More Things in Heaven and Earth" in Alas Poor Darwin. 8. 
David Sloan Wilson, Eric Dietrich, and Anne B. Clark, "On the Inappropriate Use of the Naturalistic Fallacy in Evolutionary Psychology," Biology and Philosophy 18 (2003): 669 ? 682. 9. Williams, Doing Without Adam and Eve, 124. 10. Williams, this issue, 134. 11. Martin Daly and Margo Wilson, Homicide (New York: Aldine, 1988). 12. Helen Fisher, The Anatomy of Love: The Natural History of Monogamy, Adultery, and Divorce (New York: Norton, 1992). 13. "Evolution has built into every red-blooded male a desire to find 'Pornotopia'--the fantasy land where 'sex is sheer lust and physical gratification, devoid of more tender feelings and encumbering relationships, in which women are always aroused, or at least easily arousable, and ultimately are always willing' (Symons, p. 171). The entire cosmetics, fashion, and pornography industries are attempts to create Pornotopia here on Earth". Frank Miele, "The (Im)moral Animal: A Quick & Dirty Guide to Evolutionary Psychology & the Nature of Human Nature," Skeptic 4:1 (1996): 42 ? 49. See also David Buss, The Evolution of Desire (New York: Basic Books, 1994), 49 ? 60. and Donald Symons, The Evolution of Human Sexuality (Oxford: Oxford University Press, 1979), 187 ? 200. 14. Edward O. Wilson, Consilience: The Unity of Knowledge (New York: Knopf, 1998), 253 ? 54. 15. Daly and Wilson, Homicide 188 ? 189; Edward O. Wilson, On Human Nature (Cambridge: Harvard University Press, 1978), 126. 16. Randy Thornhill and Craig Palmer, The Natural History of Rape: Biological Bases of Sexual Coercion (Cambridge, MA: MIT Press, 2000). 17. John Alcock, The Triumph of Sociobiology (New York: Oxford University Press, 2001), 144 ? 146. 18. Martin Daly and Margo Wilson, Sex, Evolution and Behavior (Boston: Willard Grant, 1983), 78 ? 79.; Robert L. Trivers, Social Evolution (Menlo Park, CA: Benjamin/Cummings, 1985), 207; Carl-Adam Wachtmeister and Magnus Enquist, "The Evolution of the Coy Female ? Trading Time for Information," Ethology 105:11 (November 1999): 983 ? 992. 19. Frank Miele, "The (Im)moral Animal," 43; See Jesus' response to "Is it lawful to divorce for any reason?" Matt 19:3 ? 12, and see Mark 10:2 ? 12 and John 4. 20. Luke 6:30 ? 35. 21. See Williams, this issue, 138. 22. John Hare, "Is There an Evolutionary Foundation for Human Morality?" in Evolution and Ethics: Human Morality in Biological and Religious Perspective (Grand Rapids, MI: Eerdmans, 2004), 190. 23. Williams maintains "that the most accurate way to view evolution is from the point of view of the gene" (this issue, 134). She thus appears to adhere to the genic selectionism of G. C. Williams, W. D. Hamilton, and Richard Dawkins. 24. See Gertrudis Van de Vijver, Linda Van Speybroeck, and Dani De Waele, "Epigenetics: A Challenge for Genetics, Evolution, and Development?" Annals of the New York Academy of Sciences 981 (2002): 1 ? 6. 25. Stephen Jay Gould and Elisabeth A. Lloyd, "Individuality and Adaptation Across Levels of Selection: How Shall We Name and Generalize the Unit of Darwinism?" Proceedings of the National Academy of Sciences USA 96:21 (October 1999):11904 ? 11909. 26. Ernst Mayr, "The Objects of Selection," Proceedings of the National Academy of Sciences USA 94:6 (March 1997): 2091 ? 2094. 27. This is where multiple, often seemingly unrelated, phenotypic effects are caused by a single altered gene or pair of altered genes. See Jonathan Hodgkin, "Seven Types of Pleiotropy" International Journal of Developmental Biology 42 (1998): 501 ? 505. 28. Ernst Mayr, "The Objects of Selection," 2092. 29. 
Lynn Margulis, "Symbiogenesis and Symbioticism," in Symbiosis as a Source of Evolutionary Innovation: Speciation and Morphogenesis, Lynn Margulis and Ren? Fester eds (Cambridge, MA: The MIT Press, 1991). 30. Stuart A. Kauffman, "Self-Organization, Selective Adaptation and its Limits: A New Pattern of Inference in Evolution and Development," in Evolution at the Crossroads: The New Biology and the New Philosophy of Science, David J. Depew and Bruce H. Weber eds. (Cambridge, MA: MIT Press, 1985), 184 ? 185; and David Depew and Bruce Weber, Darwinism Evolving: Systems Dynamics and the Genealogy of Natural Selection (Cambridge, MA: MIT Press, 1995), 446. 31. Motoo Kimura, "Recent Development of the Neutral theory Viewed from the Wrightian Tradition of Theoretical Population Genetics," Proceedings of the National Academy of Sciences USA 88:14 (July 1991): 5969 ? 5973 ; Motoo Kimura, "Evolutionary Rate at the Molecular Level," Nature 17:217 (129) (Feb 1968): 624 ? 626; Motoo Kimura, "The Rate of Molecular Evolution Considered From the Standpoint of Population Genetics," Proceedings of the National Academy of Sciences USA 63:4 (August 1969): 1181 ? 1188. 32. For the historical constraints see Stephen J. Gould and Richard C. Lewontin, "The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme," Proceedings of the Royal Society of London , Series B, 205:1161 (1979): 581 ? 598. For a discussion of Developmental Systems Theory see Susan Oyama, Paul E. Griffiths, and Russell D. Gray, Cycles of Contingency: Developmental Systems and Evolution, (Cambridge, MA: MIT Press, 2001). 33. See Van de Vijver, Van Speybroeck, and De Waele "Epigenetics: A Challenge for Genetics, Evolution, and Development?" For a critique of the selfish genes understanding of evolution from an epigenetic standpoint see especially Richard von Sternberg, "On the Roles of Repetitive DNA Elements in the Context of a Unified Genomic Epigenetic System," Annals of the New York Academy of Sciences 981 (2002): 154 ? 188. See Eva Jablonka and Marion J. Lamb, Epigenetic Inheritance and Evolution: The Lamarckian Dimension (Oxford: Oxford University Press, 1995). 34. See Ricard Sol? and Brian Goodwin, Signs of Life: How Complexity Pervades Biology. (New York: Basic Books, 2000); and Simon Conway Morris, Life's Solution: Inevitable Humans in a Lonely Universe (New York: Cambridge University Press, 2003). 35. For a discussion of the gap between what we actually do and what morality demands of us see John Hare (cited above). -------------------- Jesus and Evolutionary Psychology, Two Agendas Howard J. van Till Howard J. Van Till is Emeritus Professor of Physics and Astronomy, Calvin College, Michigan, USA. His works include Portraits of Creation: Biblical and Scientific Perspectives on the World's Formation (1990) and The Fourth Day: What the Bible and the Heavens are Telling us about Creation (1986). Patricia A. Williams posits the provocative thesis that the historical Jesus' knowledge of human nature--as he experienced it and engaged it 2000 years ago--closely matches the understanding of human nature now offered by evolutionary psychology. This thesis does not entail any frivolous conjectures that Jesus was supernaturally informed about biological evolution or about a scientific psychology based on evolutionary considerations. 
Rather, the thesis posits that the historical Jesus, a typical Palestinian Jew in his knowledge of the world and unaware of anything resembling modern science, nonetheless "had remarkable insights into human nature as evolutionary psychology discloses it." As I see it, this is a reasonable and modest thesis that can be tested by a comparison of what we know (or at least have good reason to believe) about Jesus' perceptions of human nature and what modern evolutionary psychology offers us regarding its scientific understanding of human nature. Williams summarizes the four central concepts of evolutionary psychology, as derived from sociobiology, in her list of "the four R's of human nature and much of the rest of nature as well: resources, reproduction, relatedness, and reciprocity." Before offering any thoughts regarding a comparison of Jesus' knowledge of human nature and "the four R's of human nature" that Williams draws from evolutionary psychology, I must express a bit of puzzlement concerning the grounds for the similarity thesis that Williams posits. Suppose that Williams is correct (and I am content to let evolutionary psychologists judge whether or not that is the case) to say that evolutionary psychology's assessment of the four major foci of human behavior can be captured in this list of four R's. Suppose that Williams is also correct (and I am content to let scholars of the historical Jesus judge whether or not that is the case) to characterize Jesus' knowledge of human nature as focused on those same four behavioral concerns. Would that provide a sufficient basis for concluding that Williams is warranted in positing that Jesus' knowledge of human nature closely matches the understanding of human nature offered by evolutionary psychology? That is, would Williams be warranted in concluding that Jesus is relevant today partly "because he is an astonishingly perceptive evolutionary psychologist?" I do not see how the case can be settled on the similarities so far granted. Williams may well be correct in drawing parallels in what the historical Jesus saw 2000 years ago and what modern evolutionary psychology now sees as the principal concerns of human nature. However, as I understand it, the primary concern of evolutionary psychology is not merely to list those basic concerns, but rather to posit explanations for those behavioral foci as products of the entire evolutionary process. However, if the positing of evolution-based explanations constitutes the core of the modern science of evolutionary psychology, then the appropriateness of drawing close parallels between Jesus and evolutionary psychology must, it seems to me, be called into question. The historical Jesus offered no explanations of the sort that would interest evolutionary psychology. Jesus, on the contrary, expressed numerous moral and ethical judgments on the manner in which humans ought to act in response to those basic drives for resources, reproduction, relatedness, and reciprocity. To summarize what we have observed so far: even if it is the case that Jesus and evolutionary psychology agree on their identification of the primary concerns that characterize human nature, there is a vast difference in what they offer in response. Evolutionary psychology offers scientific explanations for the origin and presence of the four R's as products of our evolutionary history. 
Given what cognitive psychology perceives to be core human concerns, evolutionary considerations suggest ways to understand how humans came to be this way. Jesus, on the other hand, offered moral or ethical principles that would encourage humans to choose behavior (whether consistent with evolutionary influences or not) that is "good" by the standards of divine intention for our living as God-conscious creatures. In other words, while it may well be argued, as Williams does, that Jesus and evolutionary psychology proceed because of similar views of human nature, they have radically differing agendas driving their interests in reflecting on the primary foci of human behavioral concerns. Evolutionary psychology's concern for explaining the historical roots of the four R's cannot easily be equated with Jesus' concern to provide moral or ethical guidance in choosing ways to act on those four drives. Evolutionary psychology offers a theory about human behavior and its roots in the practical need for species survival. Jesus posited no such theory, but instead exemplified sound moral and ethical value judgments on behavioral choices, judgments rooted in his extraordinary awareness of the divine intention for human life. Perhaps I am being too critical. Perhaps Williams never intended to make the strong equation that I have just criticized. Perhaps I need to take more seriously Williams' expressed concern to demonstrate that, despite her contention that Jesus "did not perceive his own death as a sacrifice for sin," and despite the fact that this would seem to "undermine doctrines previously considered central to Christianity" and thereby "appear to make Jesus irrelevant," Jesus is nonetheless just as relevant today as ever. Why? Because his understanding of human nature equipped him to offer relevant answers to Aristotle's ethical question, "How can human beings flourish?" True, evolutionary psychology focuses on technical aspects of how human behavior affects human survival and reproduction, while Jesus focused on matters of acting in accord with the divine will for human moral and ethical behavior, but both express a concern for identifying the sort of human behavior that improves the probability for the flourishing of the species. Perhaps I should be content with Williams' case with the continuing relevance of what Jesus said and did. In fact, I think Williams' case for the high degree of relevance that the words and deeds of Jesus still have was eloquently made. Am I ready, then, to set my misgivings aside and accept Williams' references to Jesus as "an astonishingly perceptive evolutionary psychologist?" I must admit that I continue to have reservations about statements worded in this way. One way to express my hesitancy is to note that although Williams appears to justify this language by noting that Jesus and evolutionary psychology share a common agenda in dealing with the question, "How can humans flourish?" I think we need to explore whether or not the term "flourish" is being used in the same way for both. From the standpoint of evolutionary biology, what does it mean to flourish ? Stated as bluntly and succinctly as possible, for a species (or some higher order of categorization) to flourish, means to be reproductively successful over an extended time as a member of an ecosystem in some reasonably stable environment. It is about numbers, about numerical success, about survival. Maintain a stable or growing population, or your category of organisms goes extinct. Flourish, or vanish. 
Life is tough. Adapt or die; a purely pragmatic reality. From the standpoint of what Jesus said and did, however, what does it mean to flourish ? I would suggest that the species-survival criteria supplied by evolutionary psychology might be seen as necessary, but by no means sufficient from Jesus' standpoint. To flourish as a God-conscious creature would, I believe, sometimes require choosing behavior that conforms to the divine will in spite of the fact that it would fail to contribute to reproductive success. By the moral and ethical standards exemplified by the life and death of Jesus (whether or not these accomplished anything toward atonement for sin), flourishing as a human species is not simply a matter of numbers. On the contrary, Jesus sometimes exemplified behavioral choices that were radically contrarian in nature. In the extreme, Jesus paid the ultimate price of life itself by choosing right behavior over the biological goal of flourishing. I would not go so far as to say that Jesus advocated a generalized disregard for flourishing as a reproductively successful species, but it seems evident that Jesus did advocate the recognition of situations in which reproductive success was to be given secondary, not primary, status. Williams rightly recognizes this in noting that each of human nature's four R's can be pursued with such excessive vigor as to become a vice. Excessive pursuit of resources becomes greed or gluttony. Obsession with reproductive activity becomes lust or abuse of power. Unqualified valuation of relatedness becomes destructive exclusivism. Compassionless application of reciprocity becomes an excuse for vengeance. Jesus spoke and acted in a way that demonstrated such excesses to fall outside the divine will for human behavior. Hence, to engage in a bit of Williams-style commentary, Jesus "knew when to set evolutionary psychology aside and to make behavioral choices on the basis of divine calling rather than on the probabilities for reproductive success." I was especially struck (positively) by Williams' comments on the dangers of compassionless reciprocity in which she called attention to the remarkable and ironic contrast between the example set by Jesus and the distorted portrait of God that has become the display piece of substitutionary atonement theology. Williams says it with great eloquence. "Jesus spoke of love, generosity, and forgiveness. In doing so, he spoke of the nature of God. Christian atonement theology," alternatively, "has claimed that an innocent man had to die to pay for the sins of the guilty because God required that justice be done . It does not take much insight into the nature of justice to grasp the injustice of killing the innocent to forgive the guilty. The God who allegedly commanded such a deed ruled by reciprocity and had a stingy soul. This is not Jesus' God." Would that more contemporary Christians could see what Williams here points out. Seeing this demands no knowledge of evolutionary psychology, however. A sense of justice that transcends the scientific agenda will do. What about the fifth R? Recall Jesus' advice for life, "Be not anxious ." Live by love. Do not be driven by the egocentrism inherited from our evolutionary past. Do not allow yourself to distort any one of the four R's by becoming obsessed with its unqualified satisfaction. In a word, Relax. Great idea. That is the next item on my "to do" list. -------------------------- Counter-response on "The Fifth R: Jesus as Evolutionary Psychologist" Patricia A. 
Williams Patricia A. Williams is a philosopher of biology and philosophical theologian who writes full-time on Christianity and science. Her recent books include, Doing Without Adam and Eve: Sociobiology and Original Sin (2001) and Where Christianity Went Wrong, When, and What you Can do about it (2001). Her mailing address is PO Box 69, Covesville, VA 22931. Her e-mail address is theologyauthor at aol.com; website www.theologyauthor.com. I am grateful to the editors for the privilege of receiving responses to my article and the opportunity to reply. I also appreciate the sincerity and thoughtfulness characterizing the responses. I might add that evolutionary biology is conceptually difficult; it is a field in which experts make mistakes, and much sociobiology is conceptually confused, partly because it seems a favorite playground for atheists who are ideologically driven. Finally, historical Jesus scholarship is broad, deep, and varied, so one needs to dine, not snack. Keeping it all straight is difficult. Even I make mistakes. Therefore, it may be best to begin by explaining the project I pursue in my books and essays. I want to integrate science, theology, and spirituality. As I come from a Christian background, that generally means I engage some aspect of Christianity. My first step is to take the best, most central, most accepted scientific findings to establish a firm foundation in the sciences. My second is to pursue the best biblical scholarship, especially scholarship on the historical Jesus, Christianity's central figure and a prophet in two other world religions. Thus, two critical, rational enterprises stand at the center of my work. Third, I seek the best in Christian spirituality, which I presently think Quakerism represents. (Quaker theology also smoothes some theological and scriptural issues.) Then I try to integrate them. Some examples from my treatment of science may help. When I discuss cosmology, I avoid string theory or many-worlds theory. Although they may be cutting-edge research subjects, they currently lack mathematical proof and empirical evidence. In biology, I center on the theory of evolution by natural selection since it is the foundational theory of biology. This is not to deny that other mechanisms for evolution exist. Indeed, I consider genetic drift significant in speciation.1 In sociobiology, I concentrate on kin selection (inclusive fitness), because it lies at the heart of sociobiology and is well established theoretically and empirically. For evolutionary psychology, I focus on dispositions applicable to as many organisms as possible, (the exception in the 4Rs being reciprocity, which although not uniquely human, is central to human relationships as it is not to those of other animals). I might add, against Carlson and Hine's assumption that I borrowed the terminology from David Buss, the expression "the four R's" and the arguments for the four R's being fundamental originate with me. To understand where the responders have erred, it will help to return to the basics. Van Till discusses evolution in terms of species survival. Evolution does not promote species survival. On the contrary, natural selection is a negative mechanism, promoting no one's survival, only eliminating the unfit. Evolution depends on three things: that more organisms come to be than survive to reproduce, some characteristics vary, and some of these are inherited. 
Populations change over time (evolve) because organisms that die before they reproduce fail to pass their characteristics on to future generations, and these characteristics vanish from the population. Meanwhile, mutations may add novel characteristics. Species can be selected (go extinct--contrary to Moritz's assumption, I am not a single-level selectionist and certainly not a genetic-level one), but natural selection cannot promote their survival. Indeed, most have gone extinct, so it fails to promote their survival. On the whole, however, the theory of evolution applies to individuals and their kin and is always local, that is, characteristics fit in one environment will not be so in others. This means evolution cannot promote the flourishing of species. I doubt Jesus promotes it, either. I doubt he thinks that broadly. Rather, his widest interest seems individual and community flourishing in a non-egalitarian but God-suffused world. Van Till assumes sociobiology has a single focus, the explanation of certain behaviors by means of the theory of evolution. In fact, it has three foci. The first, begun by W. D. Hamilton in 1964,2 was to explain biologically altruistic behavior by means of inclusive fitness theory. The second, prominently promoted by Robert Trivers from 1971 and 1972,3 was to predict animal social behavior (including human social behavior) from inclusive fitness theory. The third has been to gather empirical evidence to support or refute the predictions. The third and last has occurred almost since Hamilton published, was famously summarized by E. O. Wilson in 1975,4 and has become a project of evolutionary psychology in recent years. In calling Jesus an evolutionary psychologist, I credit him with understanding by (divine?) intuition and astute observation that human nature is disposed (not determined!) to follow the 4Rs that lie at the foundations of sociobiology and evolutionary psychology. Given Jesus' lack of scientific knowledge, he could not have been doing anything more. Since Moritz fails to find citations in my theological writing to critics of sociobiology, and no explicit criticisms of it, he concludes I accept it uncritically. My theological works do ignore its critics, for I think engaging in intra-scientific squabbles inappropriate in a theological context. However, in a lead review in the Quarterly Review of Biology,5 I criticize selfish gene theory and the idea that sociobiological explanations of behavior provide total explanations. However, in my theological works, I do something different. I interpret sociobiology in a non-reductionist, non-determinist, non-egocentric way, usually without explicitly condemning its reductionist, determinist, and selfishness-promoting proponents who, I think, misconstrue the evidence. I am especially disturbed that Moritz cloaks me in the determinist mantle when I say in summary of human nature's four R's in my article, Thus, evolution has given us enormous potential for both good and evil, and it also has provided a wide range of choices, from egocentricity that seeks the destruction of others to generosity and love that seek to further their welfare. We are remarkably flexible and free. That is the primary reason we find it so difficult to answer Aristotle's question about how to flourish. If we have such a range of desires and can engage in such an enormous number of activities, then which are those that best promote our flourishing? I emphasize choice and freedom. There is no taint of determinism here. 
Indeed, I find more tendencies toward the assumption of genetic determinism in the responses to my article than I do in my article. Moreover, without citing sociobiology's critics, I explicitly argue against determinism for an entire section in my Doing without Adam and Eve.6 As for "natural selection and the genocentrism it entails [being] no longer the sole fiddler" (Moritz), it never was. Charles Darwin, lord of the theory of evolution, invokes the inheritance of acquired characteristics to aid it, then sexual selection.7 Ernst Mayr, king of the new synthesis, recognizes sexual selection, the Baldwin effect, symbiosis, and genetic drift.8 E. O. Wilson, prince of sociobiology, includes morphological and physiological differences and environmental contingencies.9 A review of the most thoroughly studied genus in the world, Drosophila, adds premating isolation.10 Moreover, we now possess empirical proof that environments restructure organisms' brains, including adult human brains.11 Many things shape organisms and their behaviors. Many people shape historical Jesus scholarship. It is not limited to the Jesus Seminar. Although I respect the Jesus Seminar and find its two volumes12 handy for checking out black, gray, pink, and red sayings and deeds, I nowhere rely on it to tell me which sayings go back to Jesus as Carlson and Hine assert. In contrast, I say I will "restrict the passages of scripture I discuss to those the scholars think go back to the historical Jesus." I have written a book, mentioned in the article, on the historical Jesus13 with a bibliography listing 42 references to works of 35 Jesus scholars and historians of the two first centuries. I compiled that list five years ago, and I have continued reading. In an essay such as "The Fifth R," to summarize such extensive scholarship is impossible. However, to offer one example here, most other scholars think the passage Carlson and Hine mention that the Jesus Seminar colors gray, Mark 3:31 ? 35, goes back to Jesus. If Carlson and Hine researched further in the Seminar's The Five Gospels, they would find even the Seminar colors the parallels in Matthew 12:46 ? 50 and Thomas 99 pink. The event occurs in two sources, Mark and Thomas, so it meets the scholarly criterion of multiple attestation. Matthew and Luke (8:19 ? 21) retain it from Mark, their source for it, so it must have been well known. Moreover, it also fits the strong scholarly criterion that events and sayings embarrassing to the Jesus movement are likely to go back to Jesus. For a son not to honor his mother breaks one of the Ten Commandments, and in the Jesus movement after Jesus' death, some of his family members became his followers. Their change of heart must have aroused criticism of their earlier unbelief. Why include such an embarrassing incident in your narrative unless it is so widely known that excluding it appears fraudulent? Carlson and Hine also comment that I am dealing with the person of Jesus and putting "him in a box of dimensions specified by the four R's." This is false. I am interested in his insights into human nature, God, and ethics. I think he was a person of integrity and, so, his insights probably reflect his character, but his character is not the subject of "The Fifth R" and certainly not limited to the four R's--no one's is. The four R's at most represent some basic human dispositions. Carlson and Hine also misquote me. I never use the expression, "the unmatched quality of God's love, generosity, and forgiveness." 
Thus, I am unlikely to do "more to demonstrate this." Moritz seems to think the fifth R is "Rebel"14 and jettison the four R's. On the contrary, it is "Relax." In a wonderfully coined phrase, he calls the rebellious approach "an eschatologically stable strategy " to distinguish it from evolutionarily stable strategies. In contrast, I think "Relax" is probably stabilizing for the species. Other species follow evolutionary strategies and go extinct, so evolutionary strategies remain stable only temporarily. Based on the history of other species, if we follow evolutionary strategies, we will go extinct, too. Perhaps there is a better way. Jesus may offer it. Nonetheless, "Relax" does not entail rejecting the four R's. As I note in the article, Jesus is not an ascetic, but is accused of drunkenness and gluttony, enjoys the company of women and children, and calls a leading disciple who is married. Pursuing the four R's inordinately through greed, lust, nepotism, and justice for oneself to the exclusion of others destabilizes community and, so, diminishes human wellbeing. Such pursuits lead to wars that, in the contemporary world, may not only result in the extinction of our species but also the annihilation of life on Earth. Inordinate rebellion against the four R's also promises extinction. Best follow Van Till and make "Relax" the next item on the "'to do' list." Finally, Van Till comments that knowledge of evolutionary psychology is not required to understand that God's killing the innocent in order to forgive the guilty is unjust. I agree. I think evolutionary psychology sheds light here not by explaining justice, but by explaining the attractiveness to many Christians of a God who insists divine justice be satisfied. Theirs is an anthropomorphic God, built from our basic, evolved dispositions. Relaxed as he was about the four R's, Jesus could reflect, instead, a God of generosity and mercy. Notes 1. Patricia A. Williams, Doing without Adam and Eve: Sociobiology and Original Sin (Minneapolis: Fortress, 2001), 108 ? 115. 2. W. D. Hamilton, "The Genetical Evolution of Social Behaviour I and II," Journal of Theoretical Biology 7 (1964): 1 ? 51. 3. Robert L. Trivers, "The Evolution of Reciprocal Altruism," The Quarterly Review of Biology 46 (1971): 35 ? 57 and "Parent-Offspring Conflict," American Zoology 14 (1972): 249 ? 264. 4. E. O. Wilson, Sociobiology: The New Synthesis (Cambridge, Mass.: Harvard University Press, 1975). 5. Patricia A. Williams, "Of Replicators and Selectors," The Quarterly Review of Biology 77 (2002): 302 ? 306. 6. Williams, Doing without Adam and Eve, 143 ? 148. 7. Charles Darwin, On the Origin of Species (Cambridge, Mass.: Harvard University Press, [1859] 1964) and The Descent of Man, and Selection in Relation to Sex (Princeton: Princeton University Press, [1871] 1981). 8. Ernst Mayr, What Evolution Is (New York: Basic Books, 2001). 9. Wilson, Sociobiology. 10. Jeffrey R. Powell, Progress and Prospects in Evolutionary Biology: The Drosophila Model (New York: Oxford University Press, 1997). 11. Jeffrey M. Schwartz and Sharon Begley, The Mind and the Brain: Neuroplasticity and the Power of Mental Force (New York: Regan Books, 2002). 12. Robert W. Funk, Roy W. Hoover, and the Jesus Seminar, The Five Gospels: The Search for the Authentic Words of Jesus (San Francisco: Harper San Francisco, 1993) and Robert W. Funk and the Jesus Seminar, The Acts of Jesus: The Search for the Authentic Deeds of Jesus (San Francisco: Polebridge Press, 1998). 13. Patricia A. 
Williams, Where Christianity Went Wrong, When, and What You Can Do About It (Philadelphia: Xlibris, 2001). 14. As, famously, in Richard Dawkins, The Selfish Gene (New York: Oxford University Press, 1976), 215, "We, alone on earth, can rebel against the tyranny of the selfish replicators [genes]". From checker at panix.com Wed Sep 14 01:28:40 2005 From: checker at panix.com (Premise Checker) Date: Tue, 13 Sep 2005 21:28:40 -0400 (EDT) Subject: [Paleopsych] SW: Einstein and Quantizing Chaos Message-ID: Theoretical Physics: Einstein and Quantizing Chaos http://scienceweek.com/2005/sw050902-6.htm The following points are made by A. Douglas Stone (Physics Today 2005 August): 1) At the 11 May 1917 meeting of the German Physical Society, Albert Einstein (1879-1955), then a professor at the University of Berlin, presented the only research paper of his career that was written on the quantization of energy for mechanical systems.[1] The paper contained an elegant reformulation of the Bohr-Sommerfeld quantization rules of the old quantum theory, a rethinking that extended and clarified their meaning. Even more impressive, the paper offered a brilliant insight into the limitations of the old quantum theory when applied to a mechanical system that is nonintegrable -- or in modern terminology, chaotic. Louis de Broglie (1892-1987) cited the paper in his historic thesis on the wave properties of matter,[2] as did Erwin Schroedinger (1887-1961) in the second of his seminal papers on the wave equation for quantum mechanics.[3] But the 1917 work was then ignored for more than 25 years until Joseph Keller independently discovered the Einstein quantization scheme in the 1950s.[4] Even so, the significance of Einstein's contribution was not fully appreciated until the early 1970s when theorists, led by Martin Gutzwiller, finally addressed the fundamental difficulty of semiclassically quantizing nonintegrable Hamiltonians and founded a subfield of research now known as quantum chaos. 2) Even today, Einstein's insight into the failure of the Bohr-Sommerfeld approach is unknown to the large majority of researchers working in quantum physics. It seems appropriate, in this centennial of Einstein's miracle year, to put the achievement of his obscure 1917 paper in a modern context and to explain how he identified a simple criterion for determining if a dynamical system can be quantized by the methods of the old quantum theory. 3) Einstein's paper was titled "On the Quantum Theorem of Sommerfeld and Epstein." In his title, Einstein was acknowledging physicist Paul Epstein, who had written a paper relating the Sommerfeld rule to the form of the constants of motion. Epstein's name has not survived in the context of the Sommerfeld rule, and the quantization condition discussed by Einstein is now referred to as either Bohr-Sommerfeld or WKB (Wentzel-Kramers-Brillouin) quantization. 4) Although Einstein's antipathy to certain aspects of modern quantum theory is well known, there appears to be a renewed appreciation this year of his seminal contributions to quantum physics. With his introduction of the photon concept in 1905, his clear identification of wave-particle duality in 1909, his founding of the quantum theory of radiation in 1917, and his treatment of the Bose gas and its condensation in 1925, Einstein laid much of the foundation of the theory. He commented to Otto Stern, "I have thought a hundred times as much about the quantum problems as I have about general relativity theory."
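For readers who want the formulas behind this discussion, the quantization rules at issue can be sketched as follows (a minimal sketch in standard textbook notation; the symbols below are my own shorthand and are not drawn from Stone's summary). The Bohr-Sommerfeld rule of the old quantum theory imposes one condition per separable coordinate:

\oint p_i \, dq_i = n_i h, \qquad n_i \ \text{an integer quantum number.}

Einstein's 1917 reformulation replaces these coordinate-dependent integrals with integrals over the topologically independent closed loops \Gamma_k of an invariant torus in phase space:

\oint_{\Gamma_k} \mathbf{p} \cdot d\mathbf{q} = n_k h,

to which Keller's later refinement adds the Maslov correction n_k \to n_k + \mu_k/4. Because the reformulated condition presupposes trajectories that wind around such tori, it is well defined only for integrable motion; for a nonintegrable (chaotic) system the tori do not exist, which is the limitation Einstein identified.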
We should add to his list of illustrious achievements another advance, modest on the scale of his genius, but brilliant by any other standard: the first identification of the problem of quantizing chaotic motion.[5] References (abridged): 1. A translation of the paper appears in The Collected Papers of Albert Einstein , vol. 6, A. Engel, trans., Princeton U. Press, Princeton , NJ (1997), p. 434 2. L. de Broglie, PhD thesis, reprinted in Ann. Found. Louis de Broglie 17, 22 (1992) 3. E. Schroedinger, Ann. Phys. ( Leipzig ) 489, 79 (1926) 4. J. B. Keller, Ann. Phys. (N.Y.) 4, 180 (1958). An initial version of the paper was published by Keller as a research report in 1953. See also J. B. Keller, S. I. Rubinow, Ann. Phys. (N.Y.) 9, 24 (1960) 5. J. B. Keller, SIAM Rev. 27, 485 (1985) Physics Today http://www.physicstoday.org -------------------------------- Related Material: THEORETICAL PHYSICS: QUANTIZATION OF A PENDULUM SYSTEM The following points are made by Ian Stewart (Nature 2004 430:731): 1) A central problem in modern physics is to find effective methods for quantizing classical dynamical systems -- modifying the classical equations to incorporate the effects of quantum mechanics. One of the main obstacles is the disparity between the linearity of quantum theory and the nonlinearity of classical dynamics. Recently, Cushman et al (Phys. Rev. Lett. 2004 93: 024302) analyzed a quantum version of the spring pendulum, whose resonant state was first discussed by Enrico Fermi (1901-1954), and which is a standard model for the carbon dioxide molecule. 2) Cushman et al demonstrated that when this system is quantized, the allowed states, or eigenstates, fail to form a perfect lattice, contrary to simpler examples. Instead, the lattice has a defect, a point at which the regular lattice structure is destroyed. They demonstrated that this defect can be understood in terms of an important classical phenomenon known as "monodromy". A quantum-mechanical cliche is Schroedinger's cat, whose role is to dramatize the superposition of quantum states by being both alive and dead. Classical mechanics now introduces a second cat, which dramatizes monodromy through its ability always to land on its feet. The work affords important new insights into the general problem of quantization, as well as being an example of the relation between nonlinear dynamics and quantum theory. 3) The underlying classical model here is the swing-spring, a mass suspended from a fixed point by a spring. The spring is free to swing like a pendulum in any vertical plane through the fixed point, and it can also oscillate along its length by expanding and contracting. The Fermi resonance occurs when the spring frequency is twice the swing frequency. The same resonance occurs in a simplified model of the two main classical vibrational modes of the carbon dioxide molecule, and the first mathematical analysis of the swing-spring was inspired by this model. 4) Using a modern technique of analysis known as reduction, which exploits the rotational symmetry of a system, Cushman et al demonstrated that this particular resonance has a curious implication, which manifests itself physically as a switching phenomenon. Start with the spring oscillating vertically but in a slightly unstable state. The vertical "spring mode" motion quickly becomes a "swing mode" oscillation, just like a clock pendulum swinging in some vertical plane. 
However, this swing state is transient and the system returns once more to its spring mode, then back to a swing mode, and so on indefinitely. The surprise is that the successive planes in which it swings are different at each stage. Moreover, the angle through which the swing plane turns, from one occurrence to the next, depends sensitively on the amplitude of the original spring mode. 5) The apparent paradox here is that the initial state has zero angular momentum -- the net spin about the vertical axis is zero. Yet the swing state rotates from one instance to the next. Analogously, a falling cat that starts upside down has no angular momentum about its own longitudinal axis, yet it can invert itself, apparently spinning about that axis. The resolution of the paradox, for a cat, is that the animal changes its shape by moving its paws and tail in a particular way. At each stage of the motion, angular momentum remains zero and is thus conserved, but the overall effect of the shape changes is to invert the cat. The final upright state also has zero angular momentum, so there is no contradiction of conservation. This effect is known as the "geometric phase", or monodromy, and is important in many areas of physics and mathematics. Nature http://www.nature.com/nature -------------------------------- Related Material: QUANTUM PHYSICS: ON NANOMECHANICAL QUANTUM LIMITS The following points are made by Miles Blencowe (Science 2004 304:56): 1) In the macroscopic world of everyday experience, the motions of familiar objects such as dust particles, bumblebees, baseballs, airplanes, and planets are accurately described by Newton's laws. According to these classical laws, the trajectories of the objects can in principle be measured to arbitrary accuracy; any uncertainty in their motion is due to the imprecision of the measuring device. In contrast, in the microscopic world of atomic and subatomic particles such as the hydrogen atom and the electron, the probabilistic laws of quantum physics hold sway. Heisenberg's uncertainty principle limits the precision of simultaneous measurements of the position and velocity of a particle. And there is the superposition principle, which allows a particle to be simultaneously in two places. This latter principle is responsible for the interference pattern produced on a detection screen by a beam of particles that have passed through a sufficiently narrow-ruled grating. Such interference patterns have been observed even for beams of molecules with mass over 1000 times that of a hydrogen atom (1). 2) Ever since the laws of quantum mechanics were first established early last century, physicists and philosophers have been occupied with the problem of how the macroscopic classical world emerges from the microscopic quantum world (2). Is there an actual boundary between the two, where some as yet undiscovered fundamental physical law governs the transition from quantum to classical behavior as the system size and/or energy scale increases? Or is classical physics just an approximation to quantum physics, even at macroscopic scales, so that if we were to try hard enough in our experiments, quantum behavior would be observed in the motion of macroscopic mechanical objects? 3) LaHaye et al (3) have described an experiment whose goal is to test Heisenberg's uncertainty principle on a vibrating mechanical beam that is about a hundredth of a millimeter long. 
While such a beam is tiny by everyday standards, it is equivalent in mass to about 10^(12) hydrogen atoms, certainly belonging well outside the traditional, microscopic quantum domain. The work of LaHaye et al comes hot on the heels of a recent related experiment (4). While neither experiment has quite reached the necessary sensitivity to test the uncertainty principle, they come much closer than all previous efforts. 4) Under normal conditions, a mechanical beam will undergo classical thermal Brownian motion, vibrating in a random way as it is buffeted by the air molecules as well as the fluctuating defects in the beam. As the beam is cooled and the surrounding air is expelled, the thermal Brownian motion will decrease in amplitude, until just the irreducible quantum zero-point fluctuations of the beam in its lowest energy state remain. This zero-point motion is a consequence of the uncertainty principle that prevents the beam from being in a state of absolute rest. The temperature below which the beam must be cooled in order to freeze out the Brownian motion is related to the beam's resonant frequency (roughly, the crossover occurs when the thermal energy k_BT falls to the size of the vibrational quantum hf, so T ~ hf/k_B). The frequency of the beam used by LaHaye et al (3) is about 20 million cycles per second (20 MHz), and the lowest temperature to which they manage to cool the beam is about 60 millikelvins (mK). This is not quite cold enough, however; a 20-MHz beam must be cooled to about 1 mK in order for the zero-point motion to be comparable to the Brownian motion. On the other hand, a smaller beam with a much higher frequency of about 1 billion hertz (1 gigahertz, or GHz) was recently demonstrated (5). Such a beam would only need to be cooled to about 50 mK for the quantum zero-point and classical Brownian motions to be comparable in amplitude, close to the lowest temperature that LaHaye et al (3) achieve in their experiment. References (abridged): 1. L. Hackermueller et al., Phys. Rev. Lett. 91, 90408 (2003) 2. A. J. Leggett, J. Phys. Condens. Matter 14, R415 (2002) 3. M. D. LaHaye et al., Science 304, 74 (2004) 4. R. G. Knobel, A. N. Cleland, Nature 424, 291 (2003) 5. X. M. H. Huang et al., Nature 421, 496 (2003) Science http://www.sciencemag.org From checker at panix.com Wed Sep 14 01:28:48 2005 From: checker at panix.com (Premise Checker) Date: Tue, 13 Sep 2005 21:28:48 -0400 (EDT) Subject: [Paleopsych] Edge: John Horgan: In Defense of Common Sense Message-ID: John Horgan: In Defense of Common Sense http://www.edge.org/3rd_culture/horgan05/horgan05_index.html All these theories are preposterous, but that's not my problem with them. My problem is that no conceivable experiment can confirm the theories, as most proponents reluctantly acknowledge. The strings (or membranes, or whatever) are too small to be discerned by any buildable instrument, and the parallel universes are too distant. Common sense thus persuades me that these avenues of speculation will turn out to be dead ends. IN DEFENSE OF COMMON SENSE By John Horgan Introduction John Horgan, author of The End of Science, and feisty and provocative as ever, is ready for combat with scientists in the Edge community. "I'd love to get Edgies' reaction to my OpEd piece -- "In Defense of Common Sense" -- in The New York Times", he writes. Physicist Leonard Susskind, writing "In Defense of Uncommon Sense", is the first to take up Horgan's challenge (see below). Susskind notes that in "the utter strangeness of a world that the human intellect was not designed for... physicists have had no choice but to rewire themselves."
Where intuition and common sense failed, they had to create new forms of intuition, mainly through the use of abstract mathematics." We've gone "out of the range of experience." Read on. -- JB JOHN HORGAN oversees the science writings program at the Stevens Institute of Technology. His books include The End of Science and Rational Mysticism. THE REALITY CLUB: Verena Huber-Dyson, Robert Provine, Spencer Reiss, Daniel Gilbert, John McCarthy, Leonard Susskind respond to John Horgan. Horgan replies. _________________________________________________________________ IN DEFENSE OF COMMON SENSE As anyone remotely interested in science knows by now, 100 years ago Einstein wrote six papers that laid the groundwork for quantum mechanics and relativity, arguably the two most successful theories in history. To commemorate Einstein's "annus mirabilis," a coalition of physics groups has designated 2005 the World Year of Physics. The coalition's Web site lists more than 400 celebratory events, including conferences, museum exhibits, concerts, Webcasts, plays, poetry readings, a circus, a pie-eating contest and an Einstein look-alike competition. In the midst of all this hoopla, I feel compelled to deplore one aspect of Einstein's legacy: the widespread belief that science and common sense are incompatible. In the pre-Einstein era, T. H. Huxley, aka "Darwin's bulldog," could define science as "nothing but trained and organized common sense." But quantum mechanics and relativity shattered our common-sense notions about how the world works. The theories ask us to believe that an electron can exist in more than one place at the same time, and that space and time -- the I-beams of reality -- are not rigid but rubbery. Impossible! And yet these sense-defying propositions have withstood a century's worth of painstaking experimental tests. As a result, many scientists came to see common sense as an impediment to progress not only in physics but also in other fields. "What, after all, have we to show for ... common sense," the behaviorist B. F. Skinner asked, "or the insights gained through personal experience?" Elevating this outlook to the status of dogma, the British biologist Lewis Wolpert declared in his influential 1992 book "The Unnatural Nature of Science," "I would almost contend that if something fits in with common sense it almost certainly isn't science." Dr. Wolpert's view is widely shared. When I invoke common sense to defend or -- more often -- criticize a theory, scientists invariably roll their eyes. Scientists' contempt for common sense has two unfortunate implications. One is that preposterousness, far from being a problem for a theory, is a measure of its profundity; hence the appeal, perhaps, of dubious propositions like multiple-personality disorders and multiple-universe theories. The other, even more insidious implication is that only scientists are really qualified to judge the work of other scientists. Needless to say, I reject that position, and not only because I'm a science journalist (who majored in English). I have also found common sense -- ordinary, nonspecialized knowledge and judgment -- to be indispensable for judging scientists' pronouncements, even, or especially, in the most esoteric fields.
For example, Einstein's intellectual heirs have long been obsessed with finding a single "unified" theory that can embrace quantum mechanics, which accounts for electromagnetism and the nuclear forces, and general relativity, which describes gravity. The two theories employ very different mathematical languages and describe very different worlds, one lumpy and random and the other seamless and deterministic. The leading candidate for a unified theory holds that reality stems from tiny strings, or loops, or membranes, or something wriggling in a hyperspace consisting of 10, or 16 or 1,000 dimensions (the number depends on the variant of the theory, or the day of the week, or the theorist's ZIP code). A related set of "quantum gravity" theories postulates the existence of parallel universes -- some perhaps mutant versions of our own, like "Bizarro world" in the old Superman comics -- existing beyond the borders of our little cosmos. "Infinite Earths in Parallel Universes Really Exist," the normally sober Scientific American once hyperventilated on its cover. All these theories are preposterous, but that's not my problem with them. My problem is that no conceivable experiment can confirm the theories, as most proponents reluctantly acknowledge. The strings (or membranes, or whatever) are too small to be discerned by any buildable instrument, and the parallel universes are too distant. Common sense thus persuades me that these avenues of speculation will turn out to be dead ends. Common sense -- and a little historical perspective -- makes me equally skeptical of grand unified theories of the human mind. After a half-century of observing myself and my fellow humans -- not to mention watching lots of TV and movies -- I've concluded that as individuals we're pretty complex, variable, unpredictable creatures, whose personalities can be affected by a vast range of factors. I'm thus leery of hypotheses that trace some important aspect of our behavior to a single cause. Two examples: The psychologist Frank Sulloway has claimed that birth order has a profound, permanent impact on personality; first-borns tend to be conformists, whereas later-borns are "rebels." And just last year, the geneticist Dean Hamer argued that human spirituality -- surely one of the most complicated manifestations of our complicated selves -- stems from a specific snippet of DNA. Although common sense biases me against these theories, I am still open to being persuaded on empirical grounds. But the evidence for both Dr. Sulloway's birth-order theory and Dr. Hamer's "God gene" is flimsy. Over the past century, moreover, mind-science has been as faddish as teenage tastes in music, as one theory has yielded to another. Everything we think and do, scientists have assured us, can be explained by the Oedipal complex, or conditioned reflexes, or evolutionary adaptations, or a gene in the X chromosome, or serotonin deficits in the amygdala. Given this rapid turnover in paradigms, it's only sensible to doubt them all until the evidence for one becomes overwhelming. Ironically, while many scientists disparage common sense, artificial-intelligence researchers have discovered just how subtle and powerful an attribute it is. Over the past few decades, researchers have programmed computers to perform certain well-defined tasks extremely well; computers can play championship chess, calculate a collision between two galaxies and juggle a million airline reservations. 
But computers fail miserably at simulating the ordinary, experience-based intelligence that helps ordinary humans get through ordinary days. In other words, computers lack common sense, and that's why even the smartest ones are so dumb. Yes, common sense alone can lead us astray, and some of science's most profound insights into nature violate it; ultimately, scientific truth must be established on empirical grounds. Einstein himself once denigrated common sense as "the collection of prejudices acquired by age 18," but he retained a few basic prejudices of his own about how reality works. His remark that "God does not play dice with the universe" reflected his stubborn insistence that specific causes yield specific effects; he could never fully accept the bizarre implication of quantum mechanics that at small scales reality dissolves into a cloud of probabilities. So far, Einstein seems to be wrong about God's aversion to games of chance, but he was right not to abandon his common-sense intuitions about reality. In those many instances when the evidence is tentative, we should not be embarrassed to call on common sense for guidance. [Editor's Note: First published as an Op-Ed page article in The New York Times on August 12th] LEONARD SUSSKIND Felix Bloch Professor of Theoretical Physics, Stanford University IN DEFENSE OF UNCOMMON SENSE Leonard Susskind Responds to John Horgan John Horgan, the man who famously declared The End of Science shortly before the two greatest cosmological discoveries since the Big Bang, has now come forth to tell us that the world's leading physicists and cognitive scientists are wasting their time. Why? Because they are substituting difficult-to-understand and often shockingly unintuitive concepts for "everyman" common sense. Whose common sense? John Horgan's (admittedly a non-scientist), I presume. The complaint that science -- particularly physics -- has lost contact with common sense is hardly new. It was used against Einstein, Bohr, and Heisenberg, and even today is being used against Darwin by the right-wing agents of "intelligent design." Every week I get several angry email messages containing "common sense" (no math) theories of everything from elementary particles to the rings of Saturn. The theories have names like "Rational Theory of the Phenomenons." Modern science is difficult and often counterintuitive. Instead of bombastically ranting against this fact, Horgan should try to understand why it is so. The reasons have nothing to do with the perversity of string theorists, but rather, they have to do with the utter strangeness of a world that the human intellect was not designed for. Let me explain. Up until the beginning of the 20th century, physics dealt with phenomena that took place on a human scale. The typical objects that humans could observe varied in size from a bacterium to something smaller than a galaxy. Similarly, no human had ever traveled faster than a hundred miles an hour, or experienced a gravitational field that accelerates objects more powerfully than the Earth's acceleration, a modest thirty-two feet per second per second. Forces smaller than a thousandth of a pound, or bigger than a thousand pounds, were also out of the range of experience. Evolution wired us with both hardware and software that would allow us to easily "grok" concepts like force, acceleration, and temperature, but only over the limited range that applies to our daily lives -- concepts that are needed for our physical survival.
But it simply did not provide us with wiring to intuit the quantum behavior of an electron, or velocities near the speed of light, or the powerful gravitational fields of black holes, or a universe that closes back on itself like the surface of the Earth. A classic example of the limitations of our neural wiring is the inability to picture more than three dimensions. Why, after all, would nature provide us with the capacity to visualize things that no living creature had ever experienced? Physicists have had no choice but to rewire themselves. Where intuition and common sense failed, they had to create new forms of intuition, mainly through the use of abstract mathematics: Einstein's four dimensional elastic space-time; the infinite dimensional Hilbert space of quantum mechanics; the difficult mathematics of string theory; and, if necessary, multiple universes. When common sense fails, uncommon sense must be created. Of course we must use uncommon sense sensibly but we hardly need Horgan to tell us that. In trying to understand the universe at both its smallest and biggest scales, physics and cosmology have embarked on a new age of exploration. In a sense we are attempting to cross a larger uncharted sea than ever before. Indeed, as Horgan tells us, it's a dangerous sea where one can easily lose ones way and go right off the deep end. But great scientists are, by nature, explorers. To tell them to stay within the boundaries of common sense may be like telling Columbus that if he goes more than fifty miles from shore he'll get hopelessly lost. Besides, good old common sense tells us that the Earth is flat. Horgan also complains about the lack of common sense in cognitive science, i.e., the science of the mind. But the more psychologists and neuroscientists learn about the workings of the mind, the more it becomes absolutely clear that human cognition does not operate according to principles of common sense. That a man can mistake his wife for a hat is-well-common nonsense. But it happens. Cognitive scientists are also undergoing a rewiring process. Finally I must take exception to Horgan's claim that "no conceivable experiment can confirm the theories [string theory and cosmological eternal inflation] as most proponents reluctantly acknowledge." Here I speak from first hand knowledge. Many, if not all, of the most distinguished theoretical physicists in the world -- Steven Weinberg, Edward Witten, John Schwarz, Joseph Polchinski, Nathan Seiberg, Juan Maldacena, David Gross, Savas Dimopoulos, Andrei Linde, Renata Kallosh, among many others, most certainly acknowledge no such thing. These physicists are full of ideas about how to test modern concepts -- from superstrings in the sky to supersymmetry in the lab. Instead of dyspeptically railing against what he plainly does not understand, Horgan would do better to take a few courses in algebra, calculus, quantum mechanics, and string theory. He might then appreciate, even celebrate, the wonderful and amazing capacity of the human mind to find uncommon ways to comprehend the incomprehensible. _________________________________________________________________ [23]JOHN McCARTHY Computer Scientist; Artificial Intelligence Pioneer, Stanford University [mccarthy100.jpg] John Horgan pontificates: "But computers fail miserably at simulating the ordinary, experience-based intelligence that helps ordinary humans get through ordinary days. In other words, computers lack common sense, and that's why even the smartest ones are so dumb." 
Horgan regards a lack of common sense as an intrinsic characteristic of computers; I assume he means computer programs. However, much artificial intelligence research has focussed on analyzing commonsense knowledge and reasoning. I refer to my 1959 article "Programs with common sense", my 1990 collection of articles "Formalizing common sense", Erik Mueller's forthcoming book "Commonsense reasoning", and the biennial international conferences on common sense. I fear John Horgan would find this work as distressingly technical as he finds physics. Common sense has proved a difficult scientific topic, and programs with human-level common sense have not yet been achieved. It may be another 100 years. The AI research has identified components of commonsense knowledge and reasoning, has formalized some of them in languages of mathematical logic, and has built some of them into computer programs. Besides the logic based approach, there have been recent attempts to understand common sense as an aspect of the human nervous system. Research on formalizing common sense physics, e.g. that objects fall when pushed off a table, are not in competition with physics as studied by physicists. Rather physics is imbedded in common sense. Thus applying Newton's F = ma requires commonsense reasoning. Physics texts and articles do not consist solely of equations but contain common sense explanations. When Horgan says that string theory is untestable, he is ignoring even the popular science writing about string theory. This literature tells us that the current untestability of string theory is regarded by the string theorists as a blemish they hope to fix. _________________________________________________________________ [24]DANIEL GILBERT Psychologist, Harvard University [gilbert100.jpg] Horgan's Op-Ed piece is such a silly trifle that it doesn't dignify serious response. The beauty of science is that it allows us to transcend our intuitions about the world, and it provides us with methods by which we can determine which of our intuitions are right and which are not. Common sense tell us that the earth is flat, that the sun moves around it, and that the people who know the least often speak the loudest. Horgan's essay demonstrates that at least one of our common sense notions is true. _________________________________________________________________ [25]SPENCER REISS Contributing Editor, Wired Magazine [reiss100.jpg] Surely Susskind is joking: "Why, after all, would nature provide us with the capacity to visualize things that no living creature had ever experienced?" Art? Music? Heaven? God? The Red Sox win the World Series? Science fiction, for chrissake! Buy the man a drink! This is the kind of stuff that gives scientists a bad name. _________________________________________________________________ [26]ROBERT R. PROVINE Psychologist and Neuroscientist, University of Maryland; Author, Laughter [provine100.jpg] Hunter-Gatherers Make Poor Physicists and Cognitive Neuroscientists: Horgan 0, Susskind 1 Horgan continues to expand his franchise that is based on the technique of assertively posing provocative and often reasonable propositions. The boldness of his assertions earns him an audience that he would not otherwise achieve. But as in The End of Science, he picks a fight that he is not prepared to win and never delivers a telling blow. Susskind effectively exploits a basic weakness in Horgan's thesis, the fallibility of common sense, especially in scientific context. 
Researchers working at the frontiers of many sciences use mathematical and theoretical prostheses to expand the range of phenomena that can be studied, escaping some of the limits of their evolutionary history and its neurological endowment. The startling truth is that we live in a neurologically generated, virtual cosmos that we are programmed to accept as the real thing. The challenge of science is to overcome the constraints of our neurological wetware and understand a physical world that we know only second-hand and incompletely. In fact, we must make an intuitive leap to accept the fact that there is a problem at all. Common sense and the brain that produces it evolved in the service of our hunter-gatherer ancestors, not physicists and cognitive neuroscientists. Unassisted, the brain of Horgan or any other member of our species is not up to the task of engaging certain scientific problems. Sensory science provides the most obvious discrepancies between the physical world and our neurological model of it. We humans evolved the capacity to detect a subset of stimuli available to us on the surface of planet Earth. Different animals with different histories differ in their absolute sensitivity to a given stimulus and in the bandwidth to which they are sensitive. And some species have modes of sensation that we lack, such as electric or magnetic fields. Each species is a theory of the environment in which it evolved and it can never completely escape the limitations of its unique evolutionary history. But the problem of sensing the physical cosmos is even more complicated, because we do not directly sense physical stimuli, but are aware of only their neurological correlates. There is not, for example, any "blue" in electromagnetic radiation, pitch of B-flat in pressure changes in the air, or sweetness in sucrose. All are neurological derivatives of the physical world, not the thing itself. Neurological limits on thinking are probably as common as those on sensing, but they are more elusive -- it's harder to think about what we can't think about than what we can't sense. A good example from physics is our difficulty in understanding the space-time continuum -- our intellect fails us when we move beyond the dimensions of height, width, and depth. Other evidence of our neurological reality-generator is revealed by its malfunction in illusions, hallucinations, and dreams, or in brain damage, where the illusion of reality does not simply degrade, but often splinters and fragments in unanticipated ways. The intellectual prostheses of mathematics, computers, and instrumentation loosen but do not free our species of the constraints of its neurological heritage. We do not build random devices to detect stimuli that we cannot conceive, but build outward from a base of knowledge. A neglected triumph of science is how far we have come with so flawed an instrument as the human brain and its sensoria. Another is in realizing the limits of common sense and its knowledge base of folk wisdom. _________________________________________________________________ VERENA HUBER-DYSON Logician; Emeritus Professor, University of Calgary IN PRAISE OF EVOLVING COMMON SENSE It seems to me that John Horgan in his Edge piece "In Defense of Common Sense" is confusing "common sense" with "prejudice". The human capacity for common sense reasoning is undergoing an evolutionary process as science and technology are progressing.
Just look back over the last two millennia for spectacular illustrations of this pretty obvious observation. Presumably Mr. Horgan watches TV, uses his personal computer and takes airplanes to get places he cannot reach on foot nor by his questionably commonsensical motor car. If he does not know how to fix whatever trouble his car may come up with -- like some people do -- he really should not drive it. To some of my colleagues the telescope serves as the extension of their vision to others the cloud chamber extends the reach of their cognition, just the way his car serves Mr Horgan to get around. In the cloud chamber we witness effects of events too small to see directly. Oh there are so many wonderful illustrations of this evolution of the human cognitive faculties. Ideas, models, conjectures acquiring reality by circumstantial evidence and repeated reasoning become part of our life; as they get entrenched our common sense expands through familiarity. Sometime our notions have to be adjusted, or some, like the idea of the ether, become obsolete. That too is progress. Common sense that refuses to evolve becomes prejudice, or bigotry to use a more bold expression. I have seen quite a bit of scientific evolution in my time. In my childhood the planetary model of the atom was the way we were thinking of matter; now it has become a metaphor or a handy tool, useful under certain conditions. The same is about to happen with strings. We have learned to think more abstractly, we do not really need to think of strings as wiggly worms much too small to see. We have become quite adept at mathematical modeling. I'd love to be around to see the evolution of cognition happening ever so much faster. Even the men in the street are keeping pace. Let us not encourage spoil-sports like Mr Horgan. _________________________________________________________________ [28]JOHN HORGAN My modest defense of common sense as a guide for judging theories -- particularly when empirical evidence is flimsy -- has provoked a predictable shriek of outrage from Lenny Susskind. His attempt to lump me together with advocates of intelligent design is more than a little ironic, since in rebuking me he displays the self-righteous arrogance of a religious zealot damning an infidel. Moreover, as a proponent (!!) recently acknowledged in the New York Times, string theory and its offshoots are so devoid of evidence that they represent "a faith-based initiative." Susskind urges me to "take courses in algebra, calculus, quantum mechanics, and string theory" before I mouth off further about strings. In other words, I must become a string theorist to voice an opinion about it. This assertion recalls the insistence of Freudians -- another group notoriously hostile to outside criticism and complaints about testability -- that only those fully indoctrinated into their mind-cult can judge it. Susskind's protestations to the contrary, string theory can be neither falsified nor verified by any empirical test. At best, experiments can provide only necessary but insufficient evidence for components -- such as supersymmetry -- of certain variants of string theory. That is why in 2002 I bet the physicist Michio Kaku $1000 that by 2020 no one will be awarded a Nobel prize for work on string theory or similar quantum-gravity theory. (I discuss the bet with Kaku, Lee Smolin, Gordon Kane, and other physicists at [29]"Long Bet"). Would Susskind care to make a side bet? 
As to the other respondents: John McCarthy merely confirms my assertion that computer programmers have failed to simulate common sense -- except that McCarthy expends many more words to make his point than I do. And like Lenny Susskind, Robert Provine and Verena Huber-Dyson merely point out that many scientific theories violate popular, common-sense intuitions about nature and yet prove to be empirically correct. No kidding. I said just that in my essay. The question that I raised -- and that all these respondents have studiously avoided -- is what we should do when presented with theories such as psychoanalysis or string theory, which are not only counterintuitive but also lacking in evidence. Common sense tells me that in these cases common sense can come in handy. References 12. http://www.edge.org/3rd_culture/bios/horgan.html 21. http://www.nytimes.com/2005/08/12/opinion/12horgan.html 22. http://www.edge.org/3rd_culture/bios/susskind.html 23. http://www.edge.org/3rd_culture/bios/mccarthy.html 24. http://www.edge.org/3rd_culture/bios/gilbert.html 25. http://www.edge.org/3rd_culture/bios/reiss.html 26. http://www.edge.org/3rd_culture/bios/provine.html 27. http://www.edge.org/3rd_culture/bios/huber-dyson.html 28. http://www.edge.org/3rd_culture/bios/horgan.html 29. http://www.longbets.org/12%3Ehttp://www.longbets.org/12 From checker at panix.com Wed Sep 14 01:29:18 2005 From: checker at panix.com (Premise Checker) Date: Tue, 13 Sep 2005 21:29:18 -0400 (EDT) Subject: [Paleopsych] PLoS Medicine: Why Most Published Research Findings Are False Message-ID: PLoS Medicine: Why Most Published Research Findings Are False http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10%2E1371%2Fjournal%2Epmed%2E0020124 Volume 2 | Issue 8 | AUGUST 2005 John P. A. Ioannidis Summary There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research. John P. A. Ioannidis is in the Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece, and Institute for Clinical Research and Health Policy Studies, Department of Medicine, Tufts-New England Medical Center, Tufts University School of Medicine, Boston, Massachusetts, United States of America. E-mail: jioannid at cc.uoi.gr Competing Interests: The author has declared that no competing interests exist. 
Published: August 30, 2005 DOI: 10.1371/journal.pmed.0020124 Abbreviation: PPV, positive predictive value Citation: Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124 ______________________________________________________________________ Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies [1-3] to the most modern molecular research [4,5]. There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims [6-8]. However, this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key factors that influence this problem and some corollaries thereof. Modeling the Framework for False Positive Findings Several methodologists have pointed out [9-11] that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05. Research is not most appropriately represented and summarized by p-values, but, unfortunately, there is a widespread notion that medical research articles should be interpreted based only on p-values. Research findings are defined here as any relationship reaching formal statistical significance, e.g., effective interventions, informative predictors, risk factors, or associations. "Negative" research is also very useful. "Negative" is actually a misnomer, and the misinterpretation is widespread. However, here we will target relationships that investigators claim exist, rather than null findings. It can be proven that most claimed research findings are false. As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance [10,11]. Consider a 2 × 2 table in which research findings are compared against the gold standard of true relationships in a scientific field. In a research field both true and false hypotheses can be made about the presence of relationships. Let R be the ratio of the number of "true relationships" to "no relationships" among those tested in the field. R is characteristic of the field and can vary a lot depending on whether the field targets highly likely relationships or searches for only one or a few true relationships among thousands and millions of hypotheses that may be postulated. Let us also consider, for computational simplicity, circumscribed fields where either there is only one true relationship (among many that can be hypothesized) or the power is similar to find any of the several existing true relationships. The pre-study probability of a relationship being true is R/(R + 1). The probability of a study finding a true relationship reflects the power 1 - b (one minus the Type II error rate). The probability of claiming a relationship when none truly exists reflects the Type I error rate, a. Assuming that c relationships are being probed in the field, the expected values of the 2 × 2 table are given in Table 1.
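To make the framework concrete, here is a small Python sketch, not part of the original paper, that computes the expected counts of Table 1 and the positive predictive value derived from them in the next paragraph; the function names are invented for illustration, and R, a (the Type I error rate), b (the Type II error rate), and c follow the definitions above.

def expected_counts(R, a, b, c):
    # Expected 2 x 2 counts for c probed relationships (the layout of Table 1).
    # R = pre-study odds of a true relationship, a = Type I error rate,
    # b = Type II error rate (so 1 - b is the power).
    p_true = R / (R + 1.0)                      # pre-study probability of a true relationship
    true_found = c * (1 - b) * p_true           # true relationships declared significant
    true_missed = c * b * p_true                # true relationships missed
    false_found = c * a * (1 - p_true)          # null relationships declared significant
    false_missed = c * (1 - a) * (1 - p_true)   # null relationships correctly not claimed
    return true_found, true_missed, false_found, false_missed

def ppv(R, a=0.05, b=0.5):
    # Post-study probability that a claimed finding is true: (1 - b)R / (R - bR + a).
    return (1 - b) * R / (R - b * R + a)

# With R = 1:10 and 50% power, true and false positives come out equal, so PPV = 0.5.
print(expected_counts(R=0.1, a=0.05, b=0.5, c=1000))
print(ppv(1.0), ppv(0.1))

With pre-study odds of 1:1 and 50% power the PPV is about 0.91; drop the odds to 1:10 and it falls to 0.50, which is the pattern the corollaries below generalize.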
After a research finding has been claimed based on achieving formal statistical significance, the post-study probability that it is true is the positive predictive value, PPV. The PPV is also the complementary probability of what Wacholder et al. have called the false positive report probability [10]. According to the 2 × 2 table, one gets PPV = (1 - b)R/(R - bR + a). A research finding is thus more likely true than false if (1 - b)R > a. Since usually the vast majority of investigators depend on a = 0.05, this means that a research finding is more likely true than false if (1 - b)R > 0.05. [Table 1. Research Findings and True Relationships] What is less well appreciated is that bias and the extent of repeated independent testing by different teams of investigators around the globe may further distort this picture and may lead to even smaller probabilities of the research findings being indeed true. We will try to model these two factors in the context of similar 2 × 2 tables. Bias First, let us define bias as the combination of various design, data, analysis, and presentation factors that tend to produce research findings when they should not be produced. Let u be the proportion of probed analyses that would not have been "research findings," but nevertheless end up presented and reported as such, because of bias. Bias should not be confused with chance variability that causes some findings to be false by chance even though the study design, data, analysis, and presentation are perfect. Bias can entail manipulation in the analysis or reporting of findings. Selective or distorted reporting is a typical form of such bias. We may assume that u does not depend on whether a true relationship exists or not. This is not an unreasonable assumption, since typically it is impossible to know which relationships are indeed true. In the presence of bias (Table 2), one gets PPV = ([1 - b]R + ubR)/(R + a - bR + u - ua + ubR), and PPV decreases with increasing u, unless 1 - b <= a, i.e., 1 - b <= 0.05 for most situations. Thus, with increasing bias, the chances that a research finding is true diminish considerably. This is shown for different levels of power and for different pre-study odds in Figure 1. [Figure 1. PPV (Probability That a Research Finding Is True) as a Function of the Pre-Study Odds for Various Levels of Bias, u. Panels correspond to power of 0.20, 0.50, and 0.80.] [Table 2. Research Findings and True Relationships in the Presence of Bias] Conversely, true research findings may occasionally be annulled because of reverse bias. For example, with large measurement errors relationships are lost in noise [12], or investigators use data inefficiently or fail to notice statistically significant relationships, or there may be conflicts of interest that tend to "bury" significant findings [13]. There is no good large-scale empirical evidence on how frequently such reverse bias may occur across diverse research fields. However, it is probably fair to say that reverse bias is not as common. Moreover, measurement errors and inefficient use of data are probably becoming less frequent problems, since measurement error has decreased with technological advances in the molecular era and investigators are becoming increasingly sophisticated about their data. Regardless, reverse bias may be modeled in the same way as bias above.
Also reverse bias should not be confused with chance variability that may lead to missing a true relationship because of chance. Testing by Several Independent Teams Several independent teams may be addressing the same sets of research questions. As research efforts are globalized, it is practically the rule that several research teams, often dozens of them, may probe the same or similar questions. Unfortunately, in some areas, the prevailing mentality until now has been to focus on isolated discoveries by single teams and interpret research experiments in isolation. An increasing number of questions have at least one study claiming a research finding, and this receives unilateral attention. The probability that at least one study, among several done on the same question, claims a statistically significant research finding is easy to estimate. For n independent studies of equal power, the 2 × 2 table is shown in Table 3: PPV = R(1 - b^n)/(R + 1 - [1 - a]^n - Rb^n) (not considering bias). With increasing number of independent studies, PPV tends to decrease, unless 1 - b < a, i.e., typically 1 - b < 0.05. This is shown for different levels of power and for different pre-study odds in Figure 2. For n studies of different power, the term b^n is replaced by the product of the terms b_i for i = 1 to n, but inferences are similar. [Figure 2. PPV (Probability That a Research Finding Is True) as a Function of the Pre-Study Odds for Various Numbers of Conducted Studies, n. Panels correspond to power of 0.20, 0.50, and 0.80.] [Table 3. Research Findings and True Relationships in the Presence of Multiple Studies] Corollaries A practical example is shown in Box 1. Based on the above considerations, one may deduce several interesting corollaries about the probability that a research finding is indeed true. Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true. Small sample size means smaller power and, for all functions above, the PPV for a true research finding decreases as power decreases towards 1 - b = 0.05. Thus, other factors being equal, research findings are more likely true in scientific fields that undertake large studies, such as randomized controlled trials in cardiology (several thousand subjects randomized) [14] than in scientific fields with small studies, such as most research of molecular predictors (sample sizes 100-fold smaller) [15]. Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true. Power is also related to the effect size. Thus research findings are more likely true in scientific fields with large effects, such as the impact of smoking on cancer or cardiovascular disease (relative risks 3-20), than in scientific fields where postulated effects are small, such as genetic risk factors for multigenetic diseases (relative risks 1.1-1.5) [7]. Modern epidemiology is increasingly obliged to target smaller effect sizes [16]. Consequently, the proportion of true research findings is expected to decrease. In the same line of thinking, if the true effect sizes are very small in a scientific field, this field is likely to be plagued by almost ubiquitous false positive claims.
For example, if the majority of true genetic or nutritional determinants of complex diseases confer relative risks less than 1.05, genetic or nutritional epidemiology would be largely utopian endeavors. Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true. As shown above, the post-study probability that a finding is true (PPV) depends a lot on the pre-study odds (R). Thus, research findings are more likely true in confirmatory designs, such as large phase III randomized controlled trials, or meta-analyses thereof, than in hypothesis-generating experiments. Fields considered highly informative and creative given the wealth of the assembled and tested information, such as microarrays and other high-throughput discovery-oriented research [4,8,17], should have extremely low PPV. Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be "negative" results into "positive" results, i.e., bias, u. For several research designs, e.g., randomized controlled trials [18-20] or meta-analyses [21,22], there have been efforts to standardize their conduct and reporting. Adherence to common standards is likely to increase the proportion of true findings. The same applies to outcomes. True findings may be more common when outcomes are unequivocal and universally agreed (e.g., death) rather than when multifarious outcomes are devised (e.g., scales for schizophrenia outcomes) [23]. Similarly, fields that use commonly agreed, stereotyped analytical methods (e.g., Kaplan-Meier plots and the log-rank test) [24] may yield a larger proportion of true findings than fields where analytical methods are still under experimentation (e.g., artificial intelligence methods) and only "best" results are reported. Regardless, even in the most stringent research designs, bias seems to be a major problem. For example, there is strong evidence that selective outcome reporting, with manipulation of the outcomes and analyses reported, is a common problem even for randomized trials [25]. Simply abolishing selective publication would not make this problem go away. Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u. Conflicts of interest are very common in biomedical research [26], and typically they are inadequately and sparsely reported [26,27]. Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations. Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable [28].
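Continuing the illustrative sketch from above (again, an editor's sketch rather than the paper's own code), the bias-adjusted PPV of Table 2 and the multiple-team PPV of Table 3 can be written the same way; running them for a fixed pre-study odds shows numerically how the post-study probability erodes as the bias fraction u grows, or as more independent teams probe the same question, the effect taken up in Corollary 6 below. The pre-study odds of 1:5 used here is an arbitrary choice for illustration.

def ppv_bias(R, u, a=0.05, b=0.5):
    # PPV in the presence of bias u (Table 2):
    # ([1 - b]R + u*b*R) / (R + a - b*R + u - u*a + u*b*R)
    return ((1 - b) * R + u * b * R) / (R + a - b * R + u - u * a + u * b * R)

def ppv_n_teams(R, n, a=0.05, b=0.5):
    # PPV when n independent, equally powered studies probe the same question (Table 3, no bias):
    # R(1 - b^n) / (R + 1 - [1 - a]^n - R*b^n)
    return R * (1 - b ** n) / (R + 1 - (1 - a) ** n - R * b ** n)

R = 0.2  # pre-study odds of 1:5, chosen only for illustration
for u in (0.0, 0.1, 0.3, 0.5):
    print("bias u =", u, "PPV =", round(ppv_bias(R, u), 3))
for n in (1, 5, 10):
    print("independent teams n =", n, "PPV =", round(ppv_n_teams(R, n), 3))

Both loops print steadily falling PPV values, which is the behavior the two formulas predict whenever the power 1 - b exceeds a.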
Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. This seemingly paradoxical corollary follows because, as stated above, the PPV of isolated findings decreases when many teams of investigators are involved in the same field. This may explain why we occasionally see major excitement followed rapidly by severe disappointments in fields that draw wide attention. With many teams working on the same field and with massive experimental data being produced, timing is of the essence in beating competition. Thus, each team may prioritize on pursuing and disseminating its most impressive "positive" results. "Negative" results may become attractive for dissemination only if some other team has found a "positive" association on the same question. In that case, it may be attractive to refute a claim made in some prestigious journal. The term Proteus phenomenon has been coined to describe this phenomenon of rapidly alternating extreme research claims and extremely opposite refutations [29]. Empirical evidence suggests that this sequence of extreme opposites is very common in molecular genetics [29]. These corollaries consider each factor separately, but these factors often influence each other. For example, investigators working in fields where true effect sizes are perceived to be small may be more likely to perform large studies than investigators working in fields where true effect sizes are perceived to be large. Or prejudice may prevail in a hot scientific field, further undermining the predictive value of its research findings. Highly prejudiced stakeholders may even create a barrier that aborts efforts at obtaining and disseminating opposing results. Conversely, the fact that a field is hot or has strong invested interests may sometimes promote larger studies and improved standards of research, enhancing the predictive value of its research findings. Or massive discovery-oriented testing may result in such a large yield of significant relationships that investigators have enough to report and search further and thus refrain from data dredging and manipulation. Most Research Findings Are False for Most Research Designs and for Most Fields In the described framework, a PPV exceeding 50% is quite difficult to get. Table 4 provides the results of simulations using the formulas developed for the influence of power, ratio of true to non-true relationships, and bias, for various types of situations that may be characteristic of specific study designs and settings. A finding from a well-conducted, adequately powered randomized controlled trial starting with a 50% pre-study chance that the intervention is effective is eventually true about 85% of the time. A fairly similar performance is expected of a confirmatory meta-analysis of good-quality randomized trials: potential bias probably increases, but power and pre-test chances are higher compared to a single randomized trial. Conversely, a meta-analytic finding from inconclusive studies where pooling is used to "correct" the low power of single studies is probably false if R <= 1:3. Research findings from underpowered, early-phase clinical trials would be true about one in four times, or even less frequently if bias is present. Epidemiological studies of an exploratory nature perform even worse, especially when underpowered, but even well-powered epidemiological studies may have only a one in five chance of being true, if R = 1:10.
Finally, in discovery-oriented research with massive testing, where tested relationships exceed true ones 1,000-fold (e.g., 30,000 genes tested, of which 30 may be the true culprits) [30,31], PPV for each claimed relationship is extremely low, even with considerable standardization of laboratory and statistical methods, outcomes, and reporting thereof to minimize bias. [Table 4. PPV of Research Findings for Various Combinations of Power (1 - b), Ratio of True to Not-True Relationships (R), and Bias (u)] Claimed Research Findings May Often Be Simply Accurate Measures of the Prevailing Bias As shown, the majority of modern biomedical research is operating in areas with very low pre- and post-study probability for true findings. Let us suppose that in a research field there are no true findings at all to be discovered. History of science teaches us that scientific endeavor has often in the past wasted effort in fields with absolutely no yield of true scientific information, at least based on our current understanding. In such a "null field," one would ideally expect all observed effect sizes to vary by chance around the null in the absence of bias. The extent that observed findings deviate from what is expected by chance alone would be simply a pure measure of the prevailing bias. For example, let us suppose that no nutrients or dietary patterns are actually important determinants for the risk of developing a specific tumor. Let us also suppose that the scientific literature has examined 60 nutrients and claims all of them to be related to the risk of developing this tumor with relative risks in the range of 1.2 to 1.4 for the comparison of the upper to lower intake tertiles. Then the claimed effect sizes are simply measuring nothing else but the net bias that has been involved in the generation of this scientific literature. Claimed effect sizes are in fact the most accurate estimates of the net bias. It even follows that between "null fields," the fields that claim stronger effects (often with accompanying claims of medical or public health importance) are simply those that have sustained the worst biases. For fields with very low PPV, the few true relationships would not distort this overall picture much. Even if a few relationships are true, the shape of the distribution of the observed effects would still yield a clear measure of the biases involved in the field. This concept totally reverses the way we view scientific results. Traditionally, investigators have viewed large and highly significant effects with excitement, as signs of important discoveries. Too large and too highly significant effects may actually be more likely to be signs of large bias in most fields of modern research. They should lead investigators to careful critical thinking about what might have gone wrong with their data, analyses, and results. Of course, investigators working in any field are likely to resist accepting that the whole field in which they have spent their careers is a "null field." However, other lines of evidence, or advances in technology and experimentation, may lead eventually to the dismantling of a scientific field. Obtaining measures of the net bias in one field may also be useful for obtaining insight into what might be the range of bias operating in other fields where similar analytical methods, technologies, and conflicts may be operating. How Can We Improve the Situation?
Is it unavoidable that most research findings are false, or can we improve the situation? A major problem is that it is impossible to know with 100% certainty what the truth is in any research question. In this regard, the pure "gold" standard is unattainable. However, there are several approaches to improve the post-study probability. Better powered evidence, e.g., large studies or low-bias meta-analyses, may help, as it comes closer to the unknown "gold" standard. However, large studies may still have biases and these should be acknowledged and avoided. Moreover, large-scale evidence is impossible to obtain for all of the millions and trillions of research questions posed in current research. Large-scale evidence should be targeted for research questions where the pre-study probability is already considerably high, so that a significant research finding will lead to a post-test probability that would be considered quite definitive. Large-scale evidence is also particularly indicated when it can test major concepts rather than narrow, specific questions. A negative finding can then refute not only a specific proposed claim, but a whole field or considerable portion thereof. Selecting the performance of large-scale studies based on narrow-minded criteria, such as the marketing promotion of a specific drug, is largely wasted research. Moreover, one should be cautious that extremely large studies may be more likely to find a formally statistically significant difference for a trivial effect that is not really meaningfully different from the null [32-34]. Second, most research questions are addressed by many teams, and it is misleading to emphasize the statistically significant findings of any single team. What matters is the totality of the evidence. Diminishing bias through enhanced research standards and curtailing of prejudices may also help. However, this may require a change in scientific mentality that might be difficult to achieve. In some research designs, efforts may also be more successful with upfront registration of studies, e.g., randomized trials [35]. Registration would pose a challenge for hypothesis-generating research. Some kind of registration or networking of data collections or investigators within fields may be more feasible than registration of each and every hypothesis-generating experiment. Regardless, even if we do not see a great deal of progress with registration of studies in other fields, the principles of developing and adhering to a protocol could be more widely borrowed from randomized controlled trials. Finally, instead of chasing statistical significance, we should improve our understanding of the range of R values--the pre-study odds--where research efforts operate [10]. Before running an experiment, investigators should consider what they believe the chances are that they are testing a true rather than a non-true relationship. Speculated high R values may sometimes then be ascertained. As described above, whenever ethically acceptable, large studies with minimal bias should be performed on research findings that are considered relatively established, to see how often they are indeed confirmed. I suspect several established "classics" will fail the test [36]. Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds.
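As an aside on the warning above about extremely large studies, the following purely illustrative calculation (invented numbers, a standard pooled two-proportion z-test, not anything from the paper) shows how a difference most readers would regard as trivial, 50.0% versus 50.2%, becomes formally statistically significant once a million subjects per arm are enrolled; formal significance and practical importance are separate questions.

import math

def two_proportion_p_value(x1, n1, x2, n2):
    # Two-sided p-value for the difference between two independent proportions
    # (pooled z-test); an illustrative helper only.
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))

# 50.0% versus 50.2% with one million subjects per arm: p comes out well below 0.05,
# even though the absolute difference is only 0.2 percentage points.
print(two_proportion_p_value(500_000, 1_000_000, 502_000, 1_000_000))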
We should then acknowledge that statistical significance testing in the report of a single study gives only a partial picture, without knowing how much testing has been done outside the report and in the relevant field at large. Despite a large statistical literature for multiple testing corrections [37], usually it is impossible to decipher how much data dredging by the reporting authors or other research teams has preceded a reported research finding. Even if determining this were feasible, this would not inform us about the pre-study odds. Thus, it is unavoidable that one should make approximate assumptions on how many relationships are expected to be true among those probed across the relevant research fields and research designs. The wider field may yield some guidance for estimating this probability for the isolated research project. Experiences from biases detected in other neighboring fields would also be useful to draw upon. Even though these assumptions would be considerably subjective, they would still be very useful in interpreting research claims and putting them in context. Box 1. An Example: Science at Low Pre-Study Odds Let us assume that a team of investigators performs a whole genome association study to test whether any of 100,000 gene polymorphisms are associated with susceptibility to schizophrenia. Based on what we know about the extent of heritability of the disease, it is reasonable to expect that probably around ten gene polymorphisms among those tested would be truly associated with schizophrenia, with relatively similar odds ratios around 1.3 for the ten or so polymorphisms and with a fairly similar power to identify any of them. Then R = 10/100,000 = 10^-4, and the pre-study probability for any polymorphism to be associated with schizophrenia is also R/(R + 1) = 10^-4. Let us also suppose that the study has 60% power to find an association with an odds ratio of 1.3 at a = 0.05. Then it can be estimated that if a statistically significant association is found with the p-value barely crossing the 0.05 threshold, the post-study probability that this is true increases about 12-fold compared with the pre-study probability, but it is still only 12 × 10^-4. Now let us suppose that the investigators manipulate their design, analyses, and reporting so as to make more relationships cross the p = 0.05 threshold even though this would not have been crossed with a perfectly adhered to design and analysis and with perfect comprehensive reporting of the results, strictly according to the original study plan. Such manipulation could be done, for example, with serendipitous inclusion or exclusion of certain patients or controls, post hoc subgroup analyses, investigation of genetic contrasts that were not originally specified, changes in the disease or control definitions, and various combinations of selective or distorted reporting of the results. Commercially available "data mining" packages actually are proud of their ability to yield statistically significant results through data dredging. In the presence of bias with u = 0.10, the post-study probability that a research finding is true is only 4.4 × 10^-4. Furthermore, even in the absence of any bias, when ten independent research teams perform similar experiments around the world, if one of them finds a formally statistically significant association, the probability that the research finding is true is only 1.5 ×
10^-4, hardly any higher than the probability we had before any of this extensive research was undertaken! References 1. Ioannidis JP, Haidich AB, Lau J (2001) Any casualties in the clash of randomised and observational evidence? BMJ 322: 879-880. 2. Lawlor DA, Davey Smith G, Kundu D, Bruckdorfer KR, Ebrahim S (2004) Those confounded vitamins: What can we learn from the differences between observational versus randomised trial evidence? Lancet 363: 1724-1727. 3. Vandenbroucke JP (2004) When are observational studies as credible as randomised trials? Lancet 363: 1728-1731. 4. Michiels S, Koscielny S, Hill C (2005) Prediction of cancer outcome with microarrays: A multiple random validation strategy. Lancet 365: 488-492. 5. Ioannidis JPA, Ntzani EE, Trikalinos TA, Contopoulos-Ioannidis DG (2001) Replication validity of genetic association studies. Nat Genet 29: 306-309. 6. Colhoun HM, McKeigue PM, Davey Smith G (2003) Problems of reporting genetic associations with complex outcomes. Lancet 361: 865-872. 7. Ioannidis JP (2003) Genetic associations: False or true? Trends Mol Med 9: 135-138. 8. Ioannidis JPA (2005) Microarrays and molecular research: Noise discovery? Lancet 365: 454-455. 9. Sterne JA, Davey Smith G (2001) Sifting the evidence--What's wrong with significance tests. BMJ 322: 226-231. 10. Wacholder S, Chanock S, Garcia-Closas M, El ghormli L, Rothman N (2004) Assessing the probability that a positive report is false: An approach for molecular epidemiology studies. J Natl Cancer Inst 96: 434-442. 11. Risch NJ (2000) Searching for genetic determinants in the new millennium. Nature 405: 847-856. 12. Kelsey JL, Whittemore AS, Evans AS, Thompson WD (1996) Methods in observational epidemiology, 2nd ed. New York: Oxford U Press. 432 p. 13. Topol EJ (2004) Failing the public health--Rofecoxib, Merck, and the FDA. N Engl J Med 351: 1707-1709. 14. Yusuf S, Collins R, Peto R (1984) Why do we need some large, simple randomized trials? Stat Med 3: 409-422. 15. Altman DG, Royston P (2000) What do we mean by validating a prognostic model? Stat Med 19: 453-473. 16. Taubes G (1995) Epidemiology faces its limits. Science 269: 164-169. 17. Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, et al. (1999) Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 286: 531-537. 18. Moher D, Schulz KF, Altman DG (2001) The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 357: 1191-1194. 19. Ioannidis JP, Evans SJ, Gotzsche PC, O'Neill RT, Altman DG, et al. (2004) Better reporting of harms in randomized trials: An extension of the CONSORT statement. Ann Intern Med 141: 781-788. 20. International Conference on Harmonisation E9 Expert Working Group (1999) ICH Harmonised Tripartite Guideline. Statistical principles for clinical trials. Stat Med 18: 1905-1942. 21. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, et al.
(1999) Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 354: 1896-1900. 22. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, et al. (2000) Meta-analysis of observational studies in epidemiology: A proposal for reporting. Meta-analysis of Observational Studies in Epidemiology (MOOSE) group. JAMA 283: 2008-2012. 23. Marshall M, Lockwood A, Bradley C, Adams C, Joy C, et al. (2000) Unpublished rating scales: A major source of bias in randomised controlled trials of treatments for schizophrenia. Br J Psychiatry 176: 249-252. 24. Altman DG, Goodman SN (1994) Transfer of technology from statistical journals to the biomedical literature. Past trends and future predictions. JAMA 272: 129-132. 25. Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG (2004) Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. JAMA 291: 2457-2465. 26. Krimsky S, Rothenberg LS, Stott P, Kyle G (1998) Scientific journals and their authors' financial interests: A pilot study. Psychother Psychosom 67: 194-201. 27. Papanikolaou GN, Baltogianni MS, Contopoulos-Ioannidis DG, Haidich AB, Giannakakis IA, et al. (2001) Reporting of conflicts of interest in guidelines of preventive and therapeutic interventions. BMC Med Res Methodol 1: 3. 28. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC (1992) A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA 268: 240-248. 29. Ioannidis JP, Trikalinos TA (2005) Early extreme contradictory estimates may appear in published research: The Proteus phenomenon in molecular genetics research and randomized trials. J Clin Epidemiol 58: 543-549. 30. Ntzani EE, Ioannidis JP (2003) Predictive ability of DNA microarrays for cancer outcomes and correlates: An empirical assessment. Lancet 362: 1439-1444. 31. Ransohoff DF (2004) Rules of evidence for cancer molecular-marker discovery and validation. Nat Rev Cancer 4: 309-314. 32. Lindley DV (1957) A statistical paradox. Biometrika 44: 187-192. 33. Bartlett MS (1957) A comment on D.V. Lindley's statistical paradox. Biometrika 44: 533-534. 34. Senn SJ (2001) Two cheers for P-values. J Epidemiol Biostat 6: 193-204. 35. De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, et al. (2004) Clinical trial registration: A statement from the International Committee of Medical Journal Editors. N Engl J Med 351: 1250-1251. 36. Ioannidis JPA (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294: 218-228. 37. Hsueh HM, Chen JJ, Kodell RL (2003) Comparison of methods for estimating the number of true null hypotheses in multiplicity testing. J Biopharm Stat 13: 675-689.
From checker at panix.com Wed Sep 14 01:30:53 2005 From: checker at panix.com (Premise Checker) Date: Tue, 13 Sep 2005 21:30:53 -0400 (EDT) Subject: [Paleopsych] NYT: In Chimpanzee DNA, Signs of Y Chromosome's Evolution Message-ID: In Chimpanzee DNA, Signs of Y Chromosome's Evolution New York Times, 5.9.1 http://www.nytimes.com/2005/09/01/science/01chimp.html By [3]NICHOLAS WADE Scientists have decoded the chimp genome and compared it with that of humans, a major step toward defining what makes people human and developing a deep insight into the evolution of human sexual behavior. The comparison pinpoints the genetic differences that have arisen in the two species since they split from a common ancestor some six million years ago. The realization that chimpanzees hold a trove of information about human evolution and nature comes at a time when they and other great apes are under harsh pressures in their native habitat. Their populations are dwindling fast as forests are cut down and people shoot them for meat. They may soon disappear from the wild altogether, primatologists fear, except in the few sanctuaries that have been established. Chimpanzees and people possess almost identical sets of genes, so the genes that have changed down the human lineage should hold the key to what makes people human. Biologists suspect that only a handful of genes are responsible for the major changes that reshaped the apelike ancestor of both species into a human and that these genes should be identifiable by having evolved at a particularly rapid rate. The comparison of the human and chimp genomes, reported in today's issue of Nature, takes a first step in this direction but has not yet tracked down the critical handful of genes responsible for human evolution. One problem is the vast number of differences - some 40 million - in the sequence of DNA units in the chimp and human genomes. Most are caused by a random process known as genetic drift and have little effect. For now, their large numbers make it difficult for scientists to find the changes caused by natural selection. But another aspect of the comparison has yielded insights into a different question, the evolution of the human Y chromosome. The new finding implies that humans have led sexually virtuous lives for the last six million years, at least in comparison with the flamboyant promiscuity of chimpanzees. Some 300 million years ago, the Y chromosome used to carry the same 1,000 or so genes as its partner, the X chromosome. But because the Y cannot exchange DNA with the X and update its genes, in humans it has lost all but 16 of its X-related genes through mutation or failure to stay relevant to their owner's survival. However, the Y has gained some genes from other chromosomes because it is a safe haven for genes that benefit only men, since it never enters a woman's body. These added genes, not surprisingly, all have functions involved in making sperm. The scientific world's leading student of the Y chromosome, David Page of the Whitehead Institute in Cambridge, Mass., has been seeking to understand whether the Y will lose yet more genes and lapse into terminal decay, taking men with it. The idea of the Y's extinction "was so delicious from the perspective of gender politics," Dr. Page said.
"But many of my colleagues became confused with this blending of gender politics with scientific predictions." Two years ago, he discovered a surprising mechanism that protects the sperm-making genes. Those genes exist in pairs, arranged so that when the DNA of the chromosome is folded back on itself, the two copies of the gene are aligned. If one copy of the gene has been hit by a mutation, the cell can repair it by correcting the mismatch in DNA units. The 16 X-related genes are present in only single copies. Dr. Page and his colleagues thought the chimpanzee genome might show how they were protected. To their surprise, they report in Nature, the protection was not there. The chimp Y chromosome has lost the use of 5 of its 16 X-related genes. The genes are there, but have been inactivated by mutation. The explanation, in his view, lies in the chimpanzee's high-spirited sexual behavior. Female chimps mate with all males around, so as to make each refrain from killing a child that might be his. The alpha male nonetheless scores most of the paternities, according to DNA tests. This must be because of sperm competition, primatologists believe - the alpha male produces more and better sperm, which outcompete those of rival males. This mating system puts such intense pressure on the sperm-making genes that any improved version will be favored by natural selection. All the other genes will be dragged along with it, Dr. Page believes, even if an X-related gene has been inactivated. If chimps have lost five of their X-related genes in the last six million years because of sperm competition, and humans have lost none, humans presumably had a much less promiscuous mating system. But experts who study fossil human remains believe that the human mating system of long-term bonds between a man and woman evolved only some 1.7 million years ago. Males in the human lineage became much smaller at this time, a sign of reduced competition. The new result implies that even before that time, during the first four million years after the chimp-human split, the human mating system did not rely on sperm competition. Dr. Page said his finding did not reach to the nature of the joint chimp-human ancestor, but that "it's a reasonable inference" that the ancestor might have been gorillalike rather than chimplike, as supposed by some primatologists. The gorilla mating system has no sperm competition because the silverback maintains exclusive access to his harem. Frans B. M. de Waal of the Yerkes National Primate Research Center in Atlanta said he agreed with fossil experts that the human pair bonding system probably evolved 1.7 million years ago but that the joint ancestor could have resembled a chimp, a bonobo, a gorilla, or something else entirely. The scientists who have compared the whole genomes of the two species say they have found 35 million sites on the aligned genomes where there are different DNA units, and another five million where units have been added or deleted. Each genome is about three billion units in length. The chimp genome was completed in draft form in December 2003 by the Broad Institute in Cambridge and Washington University in St. Louis. Statistical tests for accelerated evolution are not yet powerful enough to identify the major genes that have shaped humans. "We knew that this was only a beginning, but from a general standpoint we have captured the vast majority of the differences between human and chimps," said Robert H. 
Waterston of the University of Washington, Seattle, the senior author of the report. The genome of a third primate, the orangutan, is now in progress and will help identify the genes special to human evolution, he said. At the level of the whole animal, primatologists have uncovered copious similarities between the social behavior of chimpanzees, bonobos and humans, some of which may eventually be linked to genes. But this rich vein of discovery may be choked off if the great apes can no longer be studied in the wild. "The situation is very bad, and our feeling is that by 2040 most of the habitat will be gone, except for those little regions we have set aside," Dr. de Waal said. From checker at panix.com Wed Sep 14 01:30:35 2005 From: checker at panix.com (Premise Checker) Date: Tue, 13 Sep 2005 21:30:35 -0400 (EDT) Subject: [Paleopsych] NYT: But Is There Intelligent Spaghetti Out There? Message-ID: But Is There Intelligent Spaghetti Out There? http://www.nytimes.com/2005/08/29/arts/design/29mons.html By [3]SARAH BOXER Is the super-intelligent, super-popular god known as the Flying Spaghetti Monster any match for the prophets of intelligent design? This month, the Kansas State Board of Education gave preliminary approval to allow teaching alternatives to evolution like intelligent design (the theory that a smart being designed the universe). And President Bush and Senator Bill Frist of Tennessee both gave the thumbs up to teaching intelligent design. Long before that, Bobby Henderson, a 25-year-old with a physics degree from Oregon State University, had a divine vision. An intelligent god, a Flying Spaghetti Monster, he said, "revealed himself to me in a dream." He posted a sketch on his Web site, [4]venganza.org, showing an airborne tangle of spaghetti and meatballs with two eyes looming over a mountain, trees and a stick man labeled "midgit." Prayers to the Flying Spaghetti Monster, his site says, end with "ramen," not "amen." Then, Mr. Henderson, who says on his site that he is desperately trying to avoid taking a job programming slot machines in Las Vegas, posted an open letter to the Kansas board. In perfect deadpan he wrote that although he agreed that science students should "hear multiple viewpoints" of how the universe came to be, he was worried that they would be hearing only one theory of intelligent design. After all, he noted, there are many such theories, including his own fervent belief that "the universe was created by a Flying Spaghetti Monster." He demanded equal time in the classroom and threatened a lawsuit. Soon he was flooded with e-mail messages. Ninety-five percent of those who wrote to him, he said on his Web site, were "in favor of teaching Flying Spaghetti Monsterism in schools." Five percent suggested that he would be going to hell. Lawyers contacted him inquiring how serious he was about a lawsuit against the Kansas board. His answer: "Very." This month, the news media, both mainstream and digital, jumped in. The New Scientist magazine wrote an article. So did Die Welt. Two online encyclopedias, Uncyclopedia and Wikipedia, wrote entries on the Flying Spaghetti Monster. The Web site [5]Boingboing.net mounted a challenge: "We are willing to pay any individual $250,000 if they can produce empirical evidence which proves that Jesus is not the son of the Flying Spaghetti Monster." Now, Mr. Henderson says on his Web site, "over 10 million people have been touched by His Noodly Appendage." But what does that mean? 
When push comes to shove, will the religion that has come to be known as Pastafarianism do what it was intended to do - prove that it is ridiculous to teach intelligent design as science? Mr. Henderson, who said in an e-mail message that his divine vision was induced by "a lack of sleep and a mounting disgust over the whole I.D. issue," has wit on his side. His god not only resembles human brains (proof, a fan writes, that "we were created in His image") but also looks like the kind of bacteria that proponents of intelligent design hold up as too complex to be the work of evolution alone. Two dozen academics have endorsed the pasta god. Three members of the Kansas board who already opposed teaching intelligent design wrote kind letters to Mr. Henderson. Dozens of people have posted their sightings of the deity (along with some hilarious pictures). One woman even wrote in to say that she had "conceived the spirit of our Divine Lord," the Flying Spaghetti Monster, while eating alone at the Olive Garden. "I heard singing, and tomato sauce rained from the sky, and I saw angel hair pasta flying about with little farfalle wings and playing harps," she wrote. "It was beautiful." The Spaghetti Monster, she went on, impregnated her and told her, "You shall name Him ... Prego ... and He shall bring in a new era of love." Parody is a lot of fun. And parody begets more parody, especially on the Internet. It's contagious. But has anyone ever converted to a parody religion? The history books show that parody isn't always the smartest strategy when it comes to persuasion. Remember Galileo? Some recent scholars say that it may not have been his science so much as his satire, "Dialogue Concerning the Two Chief World Systems," that got everyone steamed up. Under threat of death, Galileo ended up recanting his view that the earth revolves around the sun, and had to wait 350 years for vindication. And yet the Church of the Flying Spaghetti Monster flourishes. It even has schisms. A rival faction, based on SPAM (Spaghetti & Pulsar Activating Meatballs), has formed. And there's bickering, Mr. Henderson said in an e-mail message, about whether the god is made of spaghetti or linguini. Those people, he noted, "give me a headache." References 3. http://query.nytimes.com/search/query?ppds=bylL&v1=SARAH%20BOXER&fdq=19960101&td=sysdate&sort=newest&ac=SARAH%20BOXER&inline=nyt-per 4. http://venganza.org/ 5. http://Boingboing.net/ From checker at panix.com Thu Sep 15 01:35:18 2005 From: checker at panix.com (Premise Checker) Date: Wed, 14 Sep 2005 21:35:18 -0400 (EDT) Subject: [Paleopsych] Runner's World: How Many Calories Are You Really Burning? Message-ID: How Many Calories Are You Really Burning? http://www.runnersworld.com/article/printer_friendly/0,5046,s6-197-0-0-8402,00.html If you think running and walking both torch the same number of calories per mile, you better put down that cookie by: Amby Burfoot A few months ago I got into an argument with someone who's far smarter than I am. I should have known better, but you know how these things go. Needless to say, I lost the argument. Still, I learned something important in the process. David Swain is a bicyclist who likes to ride across the country every couple of years. Since I spend most of my time on my feet, I figured I could teach him something about walking and running. Perhaps I should have paid more attention to Swain's Ph.D. 
in exercise physiology, his position as director of the Wellness Institute and Research Center at Old Dominion University, and his work on the "Metabolic Calculations" appendix to the American College of Sports Medicine's Guidelines for Exercise Testing and Prescription. Both Swain and I are interested in the fitness-health connection, which makes walking and running great subjects for discussion. To put it simply, they are far and away the leading forms of human movement. Every able-bodied human learns how to walk and run without any particular instruction. The same cannot be said of activities such as swimming, bicycling, skateboarding, and hitting a 3-iron. This is why walking and running are the best ways to get in shape, burn extra calories, and improve your health. Our argument began when I told Swain that both walking and running burn the same number of calories per mile. I was absolutely certain of this fact for two unassailable reasons: (1) I had read it a billion times; and (2) I had repeated it a billion times. Most runners have heard that running burns about 100 calories a mile. And since walking a mile requires you to move the same body weight over the same distance, walking should also burn about 100 calories a mile. Sir Isaac Newton said so. Swain was unimpressed by my junior-high physics. "When you perform a continuous exercise, you burn five calories for every liter of oxygen you consume," he said. "And running in general consumes a lot more oxygen than walking." What the Numbers Show I was still gathering my resources for a retort when a new article crossed my desk, and changed my cosmos. In "Energy Expenditure of Walking and Running," published last December in Medicine & Science in Sports & Exercise, a group of Syracuse University researchers measured the actual calorie burn of 12 men and 12 women while running and walking 1,600 meters (roughly a mile) on a treadmill. Result: The men burned an average of 124 calories while running, and just 88 while walking; the women burned 105 and 74. (The men burned more than the women because they weighed more.) Swain was right! The investigators at Syracuse didn't explain why their results differed from a simplistic interpretation of Newton's Laws of Motion, but I figured it out with help from Swain and Ray Moss, Ph.D., of Furman University. Running and walking aren't as comparable as I had imagined. When you walk, you keep your legs mostly straight, and your center of gravity rides along fairly smoothly on top of your legs. In running, we actually jump from one foot to the other. Each jump raises our center of gravity when we take off, and lowers it when we land, since we bend the knee to absorb the shock. This continual rise and fall of our weight requires a tremendous amount of Newtonian force (fighting gravity) on both takeoff and landing. Now that you understand why running burns 50 percent more calories per mile than walking, I hate to tell you that it's a mostly useless number. Sorry. We mislead ourselves when we talk about the total calorie burn (TCB) of exercise rather than the net calorie burn (NCB). To figure the NCB of any activity, you must subtract the resting metabolic calories your body would have burned, during the time of the workout, even if you had never gotten off the sofa. You rarely hear anyone talk about the NCB of workouts, because this is America, dammit, and we like our numbers big and bold. Subtraction is not a popular activity. 
Certainly not among the infomercial hucksters and weight-loss gurus who want to promote exercise schemes. "It's bizarre that you hear so much about the gross calorie burn instead of the net," says Swain. "It could keep people from realizing why they're having such a hard time losing weight." Thanks to the Syracuse researchers, we now know the relative NCB of running a mile in 9:30 versus walking the same mile in 19:00. Their male subjects burned 105 calories running, 52 walking; the women, 91 and 43. That is, running burns twice as many net calories per mile as walking. And since you can run two miles in the time it takes to walk one mile, running burns four times as many net calories per hour as walking.

Run Slow or Walk Fast?

I didn't come here to bash walking, however. Walking is an excellent form of exercise that builds aerobic fitness, strengthens bones, and burns lots of calories. A study released in early 2004 showed that the Amish take about six times as many steps per day as adults in most American communities, and have about 87-percent lower rates of obesity. In fact, I had read years ago that fast walking burns more calories than running at the same speed. Now was the time to test this hypothesis. Wearing a heart-rate monitor, I ran on a treadmill for two minutes at 3.0 mph (20 minutes per mile), and at 3.5, 4.0, 4.5, 5.0, and 5.5 mph (10:55 per mile). After a 10-minute rest to allow my heart rate to return to normal, I repeated the same thing walking. Here's my running vs. walking heart rate at the end of each two-minute stint: 3.0 (99/81), 3.5 (104/85), 4.0 (109/94), 4.5 (114/107), 5.0 (120/126), 5.5 (122/145). My conclusion: Running is harder than walking at paces slower than 12 minutes per mile. At faster paces, walking is harder than running. How to explain this? It's not easy, except to say that walking at very fast speeds forces your body to move in ways it wasn't designed to move. This creates a great deal of internal "friction" and inefficiency, which boosts heart rate, oxygen consumption, and calorie burn. So, as Jon Stewart might say, "Walking fast...good. Walking slow...uh, not so much." The bottom line: Running is a phenomenal calorie-burning exercise. In public-health terms--that is, in the fight against obesity--it's even more important that running is a low-cost, easy-to-do, year-round activity. Walking doesn't burn as many calories, but it remains a terrific exercise. As David Swain says, "The new research doesn't mean that walking burns any fewer calories than it used to. It just means that walkers might have to walk a little more, or eat a little less, to hit their weight goal."

What's the Burn? A Calorie Calculator

You can use the formulas below to determine your calorie-burn while running and walking. The "Net Calorie Burn" measures calories burned, minus basal metabolism. Scientists consider this the best way to evaluate the actual calorie-burn of any exercise. The walking formulas apply to speeds of 3 to 4 mph. At 5 mph and faster, walking burns more calories than running.

              Your Total Calorie Burn/Mile     Your Net Calorie Burn/Mile
    Running   .75 x your weight (in lbs.)      .63 x your weight
    Walking   .53 x your weight                .30 x your weight

Adapted from "Energy Expenditure of Walking and Running," Medicine & Science in Sports & Exercise, Cameron et al, Dec. 2004.
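The coefficients in the calculator above translate directly into a few lines of code. A minimal sketch follows; the 150 lb body weight is an arbitrary example rather than a figure from the article, and the net values reproduce the roughly two-to-one running-to-walking ratio quoted earlier in the piece.

    # Per-mile calorie estimates from the Runner's World coefficients above.
    # "net" subtracts the resting metabolism you would have burned anyway.
    COEFF = {
        ("running", "total"): 0.75, ("running", "net"): 0.63,
        ("walking", "total"): 0.53, ("walking", "net"): 0.30,
    }

    def calories_per_mile(weight_lb, activity="running", kind="net"):
        return COEFF[(activity, kind)] * weight_lb

    weight = 150  # arbitrary example weight, in pounds
    run_net = calories_per_mile(weight, "running")   # about 94 kcal/mile
    walk_net = calories_per_mile(weight, "walking")  # about 45 kcal/mile
    print(run_net, walk_net, round(run_net / walk_net, 1))  # ratio comes out near 2.1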
From checker at panix.com Thu Sep 15 01:35:31 2005 From: checker at panix.com (Premise Checker) Date: Wed, 14 Sep 2005 21:35:31 -0400 (EDT) Subject: [Paleopsych] The American Prospect: The Right Fight Message-ID: The Right Fight http://www.prospect.org/web/printfriendly-view.ww?id=10140 It took the Bush administration to bring a truce between the postmodern left and the scientific community. By [2]Chris Mooney Web Exclusive: 08.15.05 Circa 1996, many of the nation's intellectuals could be found chattering about the famous "Sokal hoax." Remember that? It all began when New York University physicist Alan Sokal submitted an [5]article to the left-wing academic journal Social Text that basically amounted to gibberish. It essentially argued that physical reality does not exist: It has thus become increasingly apparent that physical "reality,'' no less than social "reality,'' is at bottom a social and linguistic construct; that scientific "knowledge," far from being objective, reflects and encodes the dominant ideologies and power relations of the culture that produced it; that the truth claims of science are inherently theory-laden and self-referential; and consequently, that the discourse of the scientific community, for all its undeniable value, cannot assert a privileged epistemological status with respect to counter-hegemonic narratives emanating from dissident or marginalized communities . The article had a giveaway title: "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity." Coming from a physicist, this should have raised serious red flags. Nevertheless, Social Text was stupid enough to publish the thing, and then Sokal [6]exposed the hoax in Lingua Franca magazine. On the one hand, this was a pretty mean trick to pull on poor Social Text. On the other, editors unable to distinguish real physics from spoof physics probably shouldn't be publishing articles arguing against physical reality. At any rate, Sokal claimed his objectives were thoroughly constructive. He wanted, he said, to shake the academic left out of its postmodern torpor and force its leading intellectuals to recognize that jargony articles and a general tone of relativism and subjectivism weren't helping anybody -- certainly not the oppressed people of the world. "For most of the past two centuries," Sokal wrote, "the Left has been identified with science and against obscurantism . Theorizing about 'the social construction of reality' won't help us find an effective treatment for AIDS or devise strategies for preventing global warming. Nor can we combat false ideas in history, sociology, economics, and politics if we reject the notions of truth and falsity." The Sokal hoax hit liberal academia like a thunderclap and prompted many a gloat from scientists. It went hand in hand with books like [7]Higher Superstition, an all-out attack on the perceived anti-science obscurantism of the academic left. For many pro-science liberals as well as many anti-campus conservatives, the notion slowly took hold that there were a lot of out-of-touch left-wing academics, nestled in secluded universities, who were conducting a campaign against scientific knowledge in obscure journals through excessive quotation of Foucault and Derrida. Even at the time, however, the quest to root out anti-science tendencies in academia seemed a strange deployment of resources. 
After all, the Gingrich Republicans had just taken over Congress, set out to radically slash science budgets, and preached denial about global warming. If there was a war on science afoot, university professors probably weren't the leading culprits. Certainly they weren't the most powerful ones. Indeed, despite some undeniable academic excesses, the "science wars" were always somewhat overblown. The sociological, historical, philosophical, and cultural study of science is a very worthwhile endeavor. If scholars engaged in such research sometimes take a stance of agnosticism toward the truth claims of science, perhaps that's simply their way of remaining detached from the subject they're studying. But it doesn't necessarily follow that these scholars are absolute relativists, to the extent of thinking that concepts like gravity are a mere matter of opinion. Social Text founding Editor Stanley Aronowitz has himself written that "[t]he critical theories of science do not refute the results of scientific discoveries since, say, the Copernican revolution or since Galileo's development of the telescope." When it comes to the field of science studies, meanwhile, much scholarly work in the area lends itself not to left-wing attacks on science but rather to defenses of science from forms of abuse prevalent on the political right. To cite just one example, leading science-studies scholar Sheila Jasanoff's 1991 book, The Fifth Branch: Science Advisers as Policymakers, presents a potent critique of demands for unreasonable levels of scientific certainty before political decisions can be made, especially when it comes to protecting public health and the environment. So perhaps it's no surprise that the science wars of the 1990s have almost entirely subsided, and, as the scientific community has increasingly become embroiled with the Bush administration across a wide range of issues (from evolution to climate science), a very new zeitgeist has emerged. The summer issue of The American Scholar, a leading read among academic humanists and the literary set, provides a case in point. "Science matters," blazons the cover. Inside, Editor Robert Wilson explains to readers that although "the attack on science has always been our game the enemy of our enemy is most definitely not our friend." The right's attack on science, Wilson continues, "is an attack on reason, and it cannot be ignored, or excused, or allowed to go uncontested." With those words, I think it's safe to say that peace has officially been made in the science wars of the 1990s. And not a moment too soon. The evolution deniers (and other reality deniers) are gathering momentum. On matters like this, the university community -- composed of scientists and scholars alike -- really ought to be on the same page. Chris Mooney is the Washington correspondent for [8]Seed Magazine and a columnist for The American Prospect Online. His first book, [9]The Republican War on Science, will be published in September. His daily blog and other writings can be found at [10]www.chriscmooney.com. References 2. http://www.prospect.org/web/page.ww?name=View+Author§ion=root&id=174 3. http://www.prospect.org/web/printfriendly-view.ww?id=10140 4. http://www.prospect.org/web/start-email.ww?id=10140 5. http://www.physics.nyu.edu/faculty/sokal/transgress_v2/transgress_v2_singlefile.html 6. http://www.physics.nyu.edu/faculty/sokal/lingua_franca_v4/lingua_franca_v4.html 7. http://www.amazon.com/exec/obidos/tg/detail/-/0801857074/103-4828884-6127823?v=glance 8. 
http://www.seedmediagroup.com/ 9. http://www.amazon.com/exec/obidos/ASIN/0465046754/chriscmooneyc-20/103-4828884-6127823 10. http://chriscmooney.com/ From checker at panix.com Thu Sep 15 01:35:42 2005 From: checker at panix.com (Premise Checker) Date: Wed, 14 Sep 2005 21:35:42 -0400 (EDT) Subject: [Paleopsych] Nature Neuroscience: Book Review: The Ethical Brain Message-ID: Book Review: The Ethical Brain Nature Neuroscience 8, 1127 (2005) doi:10.1038/nn0905-1127 http://www.nature.com/neuro/journal/v8/n9/full/nn0905-1127.html Reviewed by: Charles Jennings Charles Jennings is at the Harvard Stem Cell Institute, Harvard University, 42 Church Street, Cambridge, Massachusetts 02138, USA. charles_jennings at harvard.edu Michael Gazzaniga is a leader in the field of cognitive neuroscience, and since 2002 he has been a member of President Bush's Council on Bioethics. In a group dominated by conservatives, Gazzaniga is sometimes a dissenting voice, for example, in his support for embryonic stem cell research. His work on split-brain patients has profound implications for understanding the neural basis of self, and his presence on the council has brought a neurobiological perspective to many current bioethical controversies. The Ethical Brain is a wide-ranging, yet short and readable, summary of his views. Gazzaniga is a technological optimist, with little patience for the vague 'slippery slope' arguments that are often invoked by those who worry about where biotechnology is leading us. A deeper concern - articulated, for example, by fellow council member Michael Sandel - is that the desire to manipulate human nature is a form of hubris that threatens to undermine our appreciation for life's gifts. Gazzaniga, however, will have none of this. He welcomes the prospect of genetic enhancement, prolongation of lifespan, memory pills and so forth, arguing that humanity's innate moral sense will always guide us to use our powers wisely. I would like to think he is right, but I did not always find his arguments persuasive. A case in point is his discussion of sex selection. In some Asian countries, notably China, a cultural preference for boys, combined with easy access to methods for sex determination and selective abortion, has led to a large distortion of birth ratios. Gazzaniga acknowledges the potential concern, but because some US fertility clinics are now starting to discourage sex selection, he concludes that humans can be trusted to do the right thing in the long run. Maybe so, but I am less sanguine than Gazzaniga about this massive biotechnological experiment, and about the world's largest country soon having 15 million young men unable to find marriage partners. Gazzaniga's faith in human destiny is based in part on his belief in a biologically based universal morality, and his discussion of this idea is one of the most interesting aspects of the book. He argues that our sense of right and wrong has been shaped by evolution, and that there consequently exists a core of moral instincts that are shared across all societies. Religious traditions, in his view, represent attempts to explain and validate these biological instincts. Our brains have a strong tendency to form beliefs as a way of making sense of the world, and as Gazzaniga's own work has emphasized, these are often confabulated on the basis of limited evidence, yet refractory to change once formed.
As an explanation of religious faith, this viewpoint is surely anathema to many conservatives, but Gazzaniga (who was raised Catholic) shows no animosity toward religion, which he regards as a natural aspect of human biology. Gazzaniga hopes that a deeper understanding of our shared moral instincts and their biological basis could help to overcome ideological conflicts between different belief systems. This is an appealing idea ('biology good, ideology bad'), even though only a chronic optimist could think that universal education in cognitive neuroscience will lead to world peace. A skeptic might counter that our brains come prewired not only for moral reasoning but also for prejudice, tribalism, warfare - less attractive but no less universal aspects of human societies. Moreover, the scientific evidence for a moral instinct is based largely on simple test scenarios in which decisions have immediate and visible consequences for another individual. Although people tend to show similar responses on such tests, most real-world dilemmas are not like this. It seems unlikely that divisive societal debates on questions such as abortion or capital punishment could ever be resolved by an appeal to biology. Perhaps the most pressing issue in neuroethics is how (if at all) neuroscience should inform the justice system, and Gazzaniga devotes several chapters to this topic. The central problem is this: if decisions are made by the brain, a physical object that obeys physical laws, in what sense can they be considered 'free'? But if people are constrained by their brains, how can we hold them responsible for their actions? This quickly leads to problems, of course; if defendants could be acquitted simply by arguing "my brain made me do it," the entire justice system would collapse. Gazzaniga's proposed solution is to argue that responsibility is "a social construct that exists in the rules of a society [but not] in the neuronal structures of the brain." Yet I did not find this argument convincing. The justice system, held together by moral rules and concepts of accountability, is an emergent property of large numbers of brains. It may be dauntingly complex, but that does not put it beyond the realm of scientific study. Indeed, social neuroscience is an emerging field of research, and neuroimagers can now examine the mechanisms underlying not only people's own moral decisions, but also their perceptions about the accountability of other individuals. Gazzaniga is understandably concerned about neuroscience being drawn into the courtroom, but he acknowledges that it is inevitable. The challenge for neuroethicists, then, will be to help lawyers sort the wheat from the chaff, to recognize valid arguments for exculpation or leniency, while rejecting the abuses that will surely become increasingly tempting to defense counsels as brain science continues to advance. The Ethical Brain is not the last word on these difficult issues, but it does provide a clear and useful introduction to the field. Gazzaniga's fans include Tom Wolfe, who gives the book a cameo role in his novel I Am Charlotte Simmons, where it appears as recommended reading for a college course. In this case life would do well to imitate art - The Ethical Brain would be an excellent introduction for anyone who is interested in learning more about 'the next big thing' in bioethics.
From checker at panix.com Thu Sep 15 01:35:51 2005 From: checker at panix.com (Premise Checker) Date: Wed, 14 Sep 2005 21:35:51 -0400 (EDT) Subject: [Paleopsych] SW: On Anthropic Reasoning Message-ID: Cosmology: On Anthropic Reasoning http://scienceweek.com/2005/sw050909-1.htm The following points are made by M. Livio and M.J. Rees (Science 2005 309:1022): 1) Does extraterrestrial intelligent life exist? The fact that we can even ask this question relies on an important truth: The properties of our Universe have allowed complexity (of the type that characterizes humans) to emerge. Obviously, the biological details of humans and their emergence depend on contingent features of Earth and its history. However, some requirements would seem generic for any form of life: galaxies, stars, and (probably) planets had to form; nucleosynthesis in stars had to give rise to atoms such as carbon, oxygen, and iron; and these atoms had to be in a stable environment where they could combine to form the molecules of life. 2) We can imagine universes where the constants of physics and cosmology have different values. Many such "counterfactual" universes would not have allowed the chain of processes that could have led to any kind of advanced life. For instance, even a universe with the same physical laws and the same values of all physical constants but one -- a cosmological constant Lambda (the "pressure" of the physical vacuum) higher by more than an order of magnitude -- would have expanded so fast that no galaxies could have formed. Other properties that appear to have been crucial for the emergence of complexity are (i) the presence of baryons (particles such as protons and neutrons); (ii) the fact that the Universe is not infinitely smooth, allowing for the formation of structure (quantified as the amplitude of the fluctuations in the cosmic microwave background, Q); and (iii) a gravitational force that is weaker by a factor of nearly 10^(40) than the microphysical forces that act within atoms and molecules -- were gravity not so weak, there would not be such a large difference between the atomic and the cosmic scales of mass, length, and time. 3) A key challenge confronting 21st-century physics is to decide which of these dimensionless parameters such as Q and Lambda are truly fundamental -- in the sense of being explicable within the framework of an ultimate, unified theory -- and which are merely accidental. The possibility that some are accidental has certainly become viable in the context of the "eternal inflation" scenario [1-3], where there are an infinity of separate "big bangs" within an exponentially expanding substratum. Some versions of string theory allow a huge variety of vacua, each characterized by different values of Lambda (or even different dimensionality) [4]. Both these concepts entail the existence of a vast ensemble of pocket universes -- a "multiverse." If some physical constants are not fundamental, then they may take different values in different members of the ensemble. Consequently, some pocket universes may not allow complexity or intelligent life to evolve within them. Humans would clearly have to find themselves in a pocket universe that is "biophilic." Some otherwise puzzling features of our Universe may then simply be the result of the epoch in which we exist and can observe. In other words, the values of the accidental constants would have to be within the ranges that would have allowed intelligent life to develop.
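The factor-of-10^(40) weakness of gravity mentioned in point 2 can be checked with a one-line estimate: compare the gravitational and electrostatic attraction between a proton and an electron. A minimal sketch using standard constants (the choice of that particular pair is just one conventional way to frame the comparison):

    # Gravitational vs. electrostatic attraction for a proton-electron pair.
    # The separation r cancels, since both forces fall off as 1/r^2.
    G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    k_e = 8.988e9     # Coulomb constant, N m^2 C^-2
    m_p = 1.673e-27   # proton mass, kg
    m_e = 9.109e-31   # electron mass, kg
    q   = 1.602e-19   # elementary charge, C

    ratio = (G * m_p * m_e) / (k_e * q ** 2)
    print(f"F_grav / F_electric ~ {ratio:.1e}")  # about 4.4e-40, i.e. gravity weaker by roughly 10^40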
The process of delineating and investigating the consequences of these biophilic domains is what has become known as "anthropic reasoning".[5] References (abridged): 1. P. J. Steinhardt, in The Very Early Universe, G. W. Gibbons, S. Hawking, S. T. C. Siklos, Eds. (Cambridge Univ. Press, Cambridge, 1983), p. 251 2. A. Vilenkin, Phys. Rev. D 27, 2848 (1983) 3. A. D. Linde, Mod. Phys. Lett. A 1, 81 (1986) 4. S. Kachru, R. Kallosh, A. Linde, S. P. Trivedi, Phys. Rev. D 68, 046005 (2003) 5. A. G. Riess et al., Astron. J. 116, 1009 (1998) Science http://www.sciencemag.org -------------------------------- Related Material: COSMOLOGY: ON THE ANTHROPIC PRINCIPLE The following points are made by Lawrence M. Krauss (Nature 2003 423:230): 1) The recognition, in the light of observational data, that Einstein's infamous cosmological constant might not be zero has changed almost everything about the way we think about the Universe, from reconsidering its origin to re-evaluating its ultimate future. But perhaps the most significant change in cosmological thinking involves a new willingness to discuss what used to be an idea that was not normally mentioned in polite company: the "anthropic principle". 2) This idea suggests that the precise values of various fundamental parameters describing our Universe might be understood only as a consequence of the fact that we exist to measure them. To paraphrase the cosmologist Andrei Linde, "If the Universe were populated everywhere by intelligent fish, they might wonder why it was full of water. Well, if it weren't, they wouldn't be around to observe it!". 3) The reason that physicists have been so reluctant to consider the anthropic principle seriously is that it goes against the grain of current attitudes. Most physicists have hoped that an ultimate physical explanation of reality would explain why the Universe must look precisely the way it does, rather than why it more often than not would not. Into the fray has entered James Bjorken. In a paper (Phys. Rev. D 2003 67:043508) entitled "Cosmology and the Standard Model", Bjorken proposes a new "scaling" approach, based on well-established notions in particle theory, for exploring how anthropically viable a small cosmological constant might be. 4) The realization that an extremely small, but non-zero, cosmological constant might exist has changed the interest of physicists in anthropic explanations of nature precisely because the value it seems to take is otherwise so inexplicable. In 1996, physicist Steven Weinberg and his colleagues Hugo Martel and Paul Shapiro argued that if the laws of physics allow different universes to exist with a cosmological constant chosen from an underlying probability distribution, then galaxies, stars and presumably astronomers might not ultimately evolve unless the cosmological constant were not much larger than the one we apparently observe today. Nature http://www.nature.com/nature -------------------------------- Notes by ScienceWeek: The "cosmological constant" is a mathematical term introduced by Einstein into the equations of general relativity, the purpose to obtain a solution of the equations corresponding to a "static universe". The term describes a pressure (if positive) or a tension (if negative) which can cause the Universe to expand or contract even in the absence of any matter ("vacuum energy"). 
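To make the pressure reading of the cosmological constant concrete, the standard Friedmann acceleration equation (a textbook relation, not quoted from the article) is

    \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}

where a is the cosmic scale factor, rho the energy density, and p the pressure of ordinary matter and radiation. With rho = p = 0, the expansion still accelerates if Lambda is positive and decelerates (tending toward contraction) if Lambda is negative, which is the behavior the note describes.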
When the expansion of the Universe was discovered, Einstein apparently began to regard the introduction of this term as a mistake, and he described the cosmological constant as the "greatest mistake of my life". But the term has reappeared as the proposed source of apparent accelerated cosmic expansion. -------------------------------- Related Material: ON QUINTESSENCE AND THE EVOLUTION OF THE COSMOLOGICAL CONSTANT The following points are made by P.J.E. Peebles (Nature 1999 398:25): 1) Contrary to expectations, the evidence is that the Universe is expanding at approximately twice the velocity required to overcome the gravitational pull of all the matter the Universe contains. The implication of this is that in the past the greater density of mass in the Universe gravitationally slowed the expansion, while in the future the expansion rate will be close to constant or perhaps increasing under the influence of a new type of matter that some call "quintessence". 2) Quintessence began as Einstein's cosmological constant, Lambda. It has negative gravitational mass: its gravity pushes things apart. 3) Particle physicists later adopted Einstein's Lambda as a good model for the gravitational effect of the active vacuum of quantum physics, although the idea is at odds with the small value of Lambda indicated by cosmology. 4) Theoretical cosmologists have noted that as the Universe expands and cools, Lambda tends to decrease. As the Universe cools, symmetries among forces are broken, particles acquire masses, and these processes tend to release an analogue of latent heat. The vacuum energy density accordingly decreases, and with it the value of Lambda. Perhaps an enormous Lambda drove an early rapid expansion that smoothed the primeval chaos to make the near uniform Universe we see today, with a decrease in Lambda over time to its current value. This is the cosmological inflation concept. 5) The author suggests that the recent great advances in detectors, telescopes, and observatories on the ground and in space have given us a rough picture of what happened as our Universe evolved from a dense, hot, and perhaps quite simple early state to its present complexity. Observations in progress are filling in the details, and that in turn is driving intense debate on how the behavior of our Universe can be understood within fundamental physics. Nature http://www.nature.com/nature -------------------------------- Notes by ScienceWeek: Active vacuum of quantum physics: This refers to the idea that the vacuum state in quantum mechanics has a zero-point energy (minimum energy) which gives rise to vacuum fluctuations, so the vacuum state does not mean a state of nothing, but is instead an active state. If a theory or process does not change when certain operations are performed on it, the theory or process is said to possess a symmetry with respect to those operations. For example, a circle remains unchanged under rotation or reflection, and a circle therefore has rotational and reflection symmetry. The term "symmetry breaking" refers to the deviation from exact symmetry exhibited by many physical systems, and in general, symmetry breaking encompasses both "explicit" symmetry breaking and "spontaneous" symmetry breaking. Explicit symmetry breaking is a phenomenon in which a system is not quite, but almost, the same for two configurations related by exact symmetry. 
Spontaneous symmetry breaking refers to a situation in which the solution of a set of physical equations fails to exhibit a symmetry possessed by the equations themselves. In general, the term "latent heat" refers to the quantity of heat absorbed or released when a substance changes its physical phase (e.g., solid to liquid) at constant temperature. The inflationary model, first proposed by Alan Guth in 1980, proposes that quantum fluctuations in the time period 10^(-35) to 10^(-32) seconds after time zero were quickly amplified into large density variations during the "inflationary" expansion of the Universe by a factor of roughly 10^(50) in that time frame.

From checker at panix.com Thu Sep 15 01:36:47 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 14 Sep 2005 21:36:47 -0400 (EDT)
Subject: [Paleopsych] SW: On Human-Non-Human Primate Neural Grafting
Message-ID:

Science Policy: On Human-Non-Human Primate Neural Grafting
http://scienceweek.com/2005/sw050909-6.htm

The following points are made by M. Greene et al (Science 2005 309:385):

1) If human neural stem cells were implanted into the brains of other primates, what might this do to the mind of the recipient? Could such grafting teach us anything of value for treatment of neurological injury and disease? Could we change the capacities of the engrafted animal in a way that leads us to reexamine its moral status? These questions have gained significance since publication of research involving grafting human neural stem cells into the brains of fetal monkeys [1]. In 2004, the authors formed a multidisciplinary working group; two plenary meetings over 12 months provided the basis for this report.

2) There is considerable controversy (reflected within the discussion group) over the likely value of interspecies stem cell work for progress toward therapies [2]. We cannot graft human neural stem cells into human beings solely for experimental purposes, even if doing so would lead to human therapies. Group members arguing for the value of research on human cells in non-human primates (NHPs) pointed out that because the aim is to learn about human neural stem cells it makes most sense to use human lines. The fact that available NHP lines are few and poorly characterized [3] is an additional reason to use human lines. Another consideration is the need to assess candidate human cell lines for viability, potential to differentiate, and safety with regard to such possibilities as tumor formation. NHPs may be appropriate for in vivo screening.

3) Skeptics argued that differences between humans and NHPs could render results uninterpretable and that the preferred path for many questions is to study NHP neural stem cells in NHPs. Assessments of the scientific merit of the research must form and develop along with the field itself.

4) The authors unanimously rejected ethical objections grounded on unnaturalness or crossing species boundaries [4]. Whether it is possible to draw a meaningful distinction between the natural and the unnatural is a matter of dispute. However, stipulating that research is "unnatural" says nothing about its ethics. Much of modern medical practice involves tools, materials, and behaviors that cannot be found in nature but are not unethical as a consequence.

5) Another concern is that human to non-human primate (H-NHP) neural grafting is wrong because it transgresses species boundaries [5]. However, the notion that there are fixed species boundaries is not well supported in science or philosophy.
Moreover, human-nonhuman chimerism has already occurred through xenografting. For example, the safety and efficacy of engrafting fetal pig cells has been studied in people with Parkinson's disease and Huntington's disease without moral objection. Indeed, some have suggested that porcine sources may be less morally contentious than the use of human fetal tissue. Merely because something has been done does not prove it right. However, the authors see no new ethical or regulatory issues regarding chimeras themselves. 6) The central challenge is whether introducing human cells into NHP brains raises questions about moral status. A variety of reasons have been given for according different moral standing to humans and NHPs. In the Abrahamic traditions, humans are set apart by God as morally special and are given stewardship over other forms of life (Genesis 1:26-28). For Kantians, human capacities for rationality and autonomy demand that we be treated as ends in ourselves. Mill finds, in the richness of human mental life, an especially fecund source of utility. Singer, although strongly defending equal consideration of nonhuman interests, argues that self-awareness affects the ethically allowable treatment of a creature by changing the kinds of interests it can have. 7) In conclusion: The authors support the National Academy's recommendation that H-NHP neural grafting experiments be subject to special review. The authors agree that such review should complement, not replace, current review by animal-use panels and institutional review boards. The authors further recommend that experiments involving H-NHP neural grafting be required, wherever possible, to look for and report changes in cognitive function. Explicit data collection on cognition and behavior will help to ensure that ethical guidelines can be developed appropriately as the field advances. References (abridged): 1. V. Ourednik et al., Science 293, 1820 (2001) 2. J. S. Robert, Bioessays 26, 1005 (2004) 3. K.-Y. F. Pau, D. Wolf, Reprod. Biol. Endocrinol. 2, 41 (2004) 4. P. Karpowicz, C. B. Cohen, D. van der Kooy, Nat.Med. 10, 331 (2004) 5. F. Fukuyama, Washington Post, 15 February 2004, p. B04 Science http://www.sciencemag.org -------------------------------- Related Material: NEUROBIOLOGY: ON NEURAL STEM CELL INTERACTIONS The following points are made by A.E. Wurmser et al (Science 2004 304:1253): 1) The ability of stem cells to both self-renew and differentiate into many different cell types enables these versatile cells to generate and repair tissues and organs. Yet studies of the fruit fly Drosophila and of mammalian skin, intestine, bone marrow, and brain reveal that these inherent stem cell features are tightly regulated by the cells and proteins that constitute the extracellular environment (or "niche") that stem cells inhabit (1). For example, Shen et al. (2) have demonstrated that endothelial cells (ECs) that are enriched in the niche occupied by neural stem cells (NSCs) regulate NSC proliferation and induce these stem cells to become neurons in vitro. 2) It is well established that NSCs are not randomly distributed throughout the brain, but rather are concentrated around blood vessels (3-5). This location places NSCs in close proximity to the ECs that line blood vessels, facilitating communication between these two cell types (3-5). To test the degree of intercellular communication between NSCs and ECs, Shen et al (1) cultured NSCs and monitored changes in their behavior when ECs were brought into close proximity (2). 
These investigators maintained cultures of mouse embryonic NSCs (derived from the cerebral cortex of 10- to 11-day-old mouse embryos) by adding fibroblast growth factor-2. Under these conditions, NSCs proliferated slowly and many of them exited the cell cycle, choosing to differentiate instead (2). However, when NSCs were cocultured with ECs their proliferation rate doubled, resulting in the formation of large interconnected sheets of undifferentiated cells. 3) One aspect of the Shen et al strategy was to introduce ECs into NSC cultures by means of transwell inserts. The pores of the transwells were too small to allow cell-cell contact between NSCs and ECs, but were large enough to enable signaling factors secreted by ECs to diffuse into the NSC cultures. Remarkably, the removal of transwells containing ECs triggered the coordinated differentiation of proliferating NSCs into neurons. Only 9% of NSCs unexposed to ECs expressed mature neuronal markers, compared with 31 to 64% of NSCs exposed to the EC transwells. This trend also was observed with cultured NSCs derived from the subventricular zone of adult mouse brain (2). Thus, signaling molecules secreted by ECs induced a shift in the mixed population of proliferating and differentiating NSCs, pushing them toward self-renewal while simultaneously priming them for the production of neurons. References (abridged): 1. E. Fuchs et al., Cell 116, 769 (2004) 2. Q. Shen et al., Science 304, 1338 (2004) 3. T. D. Palmer et al., J. Comp. Neurol. 425, 479 (2000) 4. A. Capela, S. Temple, Neuron 35, 865 (2002) 5. A. Louissaint et al., Neuron 34, 945 ( 2002) Science http://www.sciencemag.org -------------------------------- Related Material: NEUROBIOLOGY: ON HUMAN NEURAL STEM CELLS The following points are made by Pasko Rakic (Nature 2004 427:685): 1) Neural stem cells are a focus of strong interest because of the possibility that they could be used to replace neurons that have been damaged or lost -- perhaps as a result of injury such as trauma or stroke, or through neurodegenerative disorders such as Parkinson's disease. These stem cells can give rise to neurons and their supporting cells (glia) and it is hoped that something akin to neural stem cells in the adult human brain could be stimulated to generate replacement neurons. 2) Non-mammalian vertebrates, such as the salamander, can regenerate large portions of their brain and spinal cord, but humans have evidently lost this capacity during evolution. Therefore, most research on neural stem cells is carried out on mammals such as rodents, which are genetically closer to humans. However, although mammalian genomes may be similar, this similarity masks vast species differences in the way the brain is organized and in its capacity for regeneration and susceptibility to environmental insults. The failure of brain repair in clinical trials based on the promising results seen after the use of similar procedures in rodents is sobering testimony to the importance of such species-specific distinctions. 3) Human neural stem cells behave differently from their rodent equivalents in culture(1), but direct study of human brain tissue by Sanai et al(2) demonstrates additional significant and clinically relevant species-specific differences. A large number of postmortem and biopsy samples reveal two basic findings. 
First, neural stem cells that can potentially give rise to neurons, as well as to two types of glial cell (astrocytes and oligodendrocytes), are situated in a region of the forebrain known as the subventricular zone. Second, a pathway known as the rostral migratory stream -- which in adult rodents contains neurons that migrate from the subventricular zone to the brain region concerned with sensing smell -- is absent in humans.

4) In adult mammals, including humans, the subventricular zone (more commonly known as the subependymal zone[3-5]) contains cells that have the characteristics of glial cells and that can generate neuronal cells in culture. Sanai et al(2) show that in adult humans these "glial progenitor cells" form a prominent layer, or ribbon, that is restricted to a specific region in the brain that lines the lateral cerebral ventricle. This region is also present in non-human primates, but it is thinner and less well delineated than in humans(4).

References (abridged): 1. Ginis, I. & Rao, M. S. Exp. Neurol. 184, 61-77 (2003) 2. Sanai, N. et al. Nature 427, 740-744 (2004) 3. Lewis, P. D. Nature 217, 974-975 (1968) 4. McDermott, K. W. & Lantos, P. L. Brain Res. Dev. Brain Res. 57, 269-277 (1990) 5. Weickert, C. S. et al. J. Comp. Neurol. 423, 359-372 (2000)

Nature http://www.nature.com/nature

From checker at panix.com Thu Sep 15 01:36:59 2005
From: checker at panix.com (Premise Checker)
Date: Wed, 14 Sep 2005 21:36:59 -0400 (EDT)
Subject: [Paleopsych] U.S. Dept. of State: How to Identify Misinformation
Message-ID:

How to Identify Misinformation
http://usinfo.state.gov/media/Archive/2005/Jul/27-595713.html

How can a journalist or a news consumer tell if a story is true or false? There are no exact rules, but the following clues can help indicate if a story or allegation is true.

* Does the story fit the pattern of a conspiracy theory?
* Does the story fit the pattern of an "urban legend?"
* Does the story contain a shocking revelation about a highly controversial issue?
* Is the source trustworthy?
* What does further research tell you?

Does the story fit the pattern of a conspiracy theory?

Does the story claim that vast, powerful, evil forces are secretly manipulating events? If so, this fits the profile of a conspiracy theory. Conspiracy theories are rarely true, even though they have great appeal and are often widely believed. In reality, events usually have much less exciting explanations. The U.S. military or intelligence community is a favorite villain in many conspiracy theories. For example, the Soviet disinformation apparatus regularly blamed the U.S. military or intelligence community for a variety of natural disasters as well as political events. In March 1992, then-Russian foreign intelligence chief Yevgeni Primakov admitted that the disinformation service of the Soviet KGB intelligence service had concocted the false story that the AIDS virus had been created in a US military laboratory as a biological weapon. When AIDS was first discovered, no one knew how this horrifying new disease had arisen, although scientists have now used DNA analysis to determine that "all HIV-1 strains known to infect man" are closely related to a simian immunodeficiency virus found in a western equatorial African chimpanzee, Pan troglodytes troglodytes. But the Soviets used widespread suspicions about the U.S. military to blame it for AIDS. ([1]More details on this.)
In his book 9/11: The Big Lie, French author Thierry Meyssan falsely claimed that no plane hit the Pentagon on September 11, 2001. Instead, he claimed that the building had been struck by a cruise missile fired by elements within the U.S. government. No such vast conspiracy existed and many eyewitness accounts and evidence gathered on the scene confirmed that the hijacked airliner had struck the building. But, nevertheless, the book was a best-seller in France and has been translated into 19 languages, demonstrating the power that even the most groundless conspiracy theories can have. ([2]More details on 9/11: The Big Lie.) Does the story fit the pattern of an "urban legend?" Is the story startlingly good, bad, amazing, horrifying, or otherwise seemingly "too good" or "too terrible" to be true? If so, it may be an "urban legend." Urban legends, which often circulate by word of mouth, e-mail, or the Internet, are false claims that are widely believed because they put a common fear, hope, suspicion, or other powerful emotion into story form. For example, after the September 11 attacks, a story arose that someone had survived the World Trade Center collapse by "surfing" a piece of building debris from the 82^nd floor to the ground. Of course, no one could survive such a fall, but many initially believed this story, out of desperate hope that some people trapped in the towers miraculously survived their collapse. ([3]More details on this.) Another September 11 urban legend is that an undamaged Bible was found in the midst of the crash site at the Pentagon. In reality, it was a dictionary. But, if a Bible had survived unscathed, that would have seemed much more significant, and been seen by many as a sign of divine intervention. ([4]More details on this.) Since 1987, the false story that Americans or others are kidnapping or adopting children in order to use them in organ transplants has been widely believed. There is absolutely no evidence that any such event has ever occurred, but such allegations have won the most prestigious journalism prizes in France in 1995 and Spain in 1996. ([5]More details on this.) This urban legend is based on fears about both organ transplantation and international adoptions, both of which were relatively new practices in the 1980s. As advances in medical science made organ transplantation more widespread, unfounded fears began to spread that people would be murdered for their organs. At the same time, there were also unfounded fears about the fate of infants adopted by foreigners and taken far from their home countries. The so-called "baby parts" rumor combined both these fears in story form, which gave it great credibility even though there was absolutely no evidence for the allegation. In late 2004, a reporter for Saudi Arabia's Al Watan newspaper repeated a version of the organ trafficking urban legend, falsely claiming that U.S. forces in Iraq were harvesting organs from dead or wounded Iraqis for sale in the United States. This shows how the details of urban legends can change, to fit different circumstances. (More details in [6]English and [7]Arabic.) Highly controversial issues AIDS, organ transplantation, international adoption, and the September 11 attacks are all new, frightening or, in some ways, discomforting topics. Such highly controversial issues are natural candidates for the rise of false rumors, unwarranted fears and suspicions. 
Another example of a highly controversial issue is depleted uranium, a relatively new armor-piercing substance that was used by the U.S. military for the first time during the 1991 Gulf War. There are many exaggerated fears about depleted uranium because people associate it with weapons-grade uranium or fuel-grade uranium, which are much more dangerous substances. When most people hear the word uranium, a number of strongly held associations spring to mind, including the atomic bomb, Hiroshima, nuclear reactors, radiation illness, cancer, and birth defects. Depleted uranium is what is left over when natural uranium is enriched to make weapons-grade or fuel-grade uranium. In the process, the uranium loses, or is depleted of, almost half its radioactivity, which is how depleted uranium gets its name. But facts like this are less important in people's minds than the deeply ingrained associations they have with the word "uranium." For this reason, most people believe that depleted uranium is much more dangerous than it actually is. (More details on depleted uranium in [8]English and [9]Arabic.)

Another highly controversial issue is that of forbidden weapons, such as chemical or biological weapons. The United States is regularly, and falsely, accused of using these weapons. (More details on this in [10]English and [11]Arabic.) In the same way, many other highly controversial issues are naturally prone to misunderstanding and false rumors. Any highly controversial issue or taboo behavior is ripe material for false rumors and urban legends.

Consider the source

Certain websites, publications, and individuals are known for spreading false stories, including:

* [12]Aljazeera.com, a deceptive, look-alike website that has sought to fool people into thinking it is run by the Qatari satellite television station Al Jazeera
* [13]Jihad Unspun, a website run by a Canadian woman who converted to Islam after the September 11 attacks when she became convinced that Osama bin Laden was right
* [14]Islam Memo (Mafkarat-al-Islam), which spreads a great deal of disinformation about Iraq. (More details on Islam Memo and Jihad Unspun in [15]English and [16]Arabic.)

There are many conspiracy theory websites, which contain a great deal of unreliable information. Examples include:

* [17]Rense.com
* Australian "private investigator" [18]Joe Vialls, who died in 2005
* [19]Conspiracy Planet

Extremist groups, such as splinter communist parties, often publish disinformation. This can be especially difficult to identify if the false allegations are published by front groups. Front groups purport to be independent, non-partisan organizations but are actually controlled by political parties or groups. Some examples of front groups are:

* The [20]International Action Center, which is a front group for a splinter communist party called the [21]Workers World Party
* The [22]Free Arab Voice, a website that serves as a front for Arab communist Muhammad Abu Nasr and his colleagues. (More details on Muhammad Abu Nasr in [23]English or [24]Arabic.)

Research the allegations

The only way to determine whether an allegation is true or false is to research it as thoroughly as possible. Of course, this may not always be possible given publication deadlines and time pressures, but there is no substitute for thorough research, going back to the original sources. Using the Internet, many allegations can be fairly thoroughly researched in a matter of hours.
For example, in July 2005, the counter-misinformation team researched the allegation that U.S. soldiers in Iraq had killed innocent Iraqi boys playing football and then "planted" rocket-propelled grenades (RPGs) next to them, to make it appear that they were insurgents. Using a variety of search terms in "Google," a researcher was able to find the [25]article and photographs upon which the allegations were based. Because weapons did not appear in the initial photographs, but did appear in later photographs, some observers believed this was evidence that the weapons had been planted and that the boys who had been killed were not armed insurgents. The researcher was also able to find [26]weblog entries (numbered 100 and 333, on June 26 and July 15, 2005) from the commanding officer of the platoon that was involved in the incident and another member of his platoon. The weblog entries made it clear that: * the teenaged Iraqi boys were armed insurgents; * after the firefight between U.S. troops and the insurgents was over, the dead, wounded and captured insurgents were initially photographed separated from their weapons because the first priority was to make sure that it was impossible for any of the surviving insurgents to fire them again; * following medical treatment for the wounded insurgents, they were photographed with the captured weapons displayed, in line with Iraqi government requirements; * the insurgents were hiding in a dense palm grove, where visibility was limited to 20 meters, not a likely place for a football game, and they were seen carrying the RPGs on their shoulders. Thus, an hour or two of research on the Internet was sufficient to establish that the suspicions of the bloggers that the weapons had been planted on innocent Iraqi boys playing football were unfounded. Finally, if the counter-misinformation team can be of help, ask us. We can't respond to all requests for information, but if a request is reasonable and we have the time, we will do our best to provide accurate, authoritative information. Created: 27 Jul 2005 Updated: 27 Jul 2005 References 1. http://usinfo.state.gov/media/Archive/2005/Jan/14-777030.html 2. http://usinfo.state.gov/media/Archive/2005/Jun/28-581634.html 3. http://www.snopes.com/rumors/survivor.htm 4. http://www.snopes.com/rumors/bible.htm 5. http://usinfo.state.gov/media/Archive_Index/The_Baby_Parts_Myth.html 6. http://usinfo.state.gov/media/Archive/2005/Jan/14-475342.html 7. http://usinfo.state.gov/ar/Archive/2005/May/13-191292.html 8. http://usinfo.state.gov/media/Archive/2005/Jan/24-107572.html 9. http://usinfo.state.gov/ar/Archive/2005/May/13-329204.html 10. http://usinfo.state.gov/media/Archive/2005/Mar/11-723838.html 11. http://usinfo.state.gov/ar/Archive/2005/May/13-315186.html 12. http://aljazeera.com/ 13. http://www.jihadunspun.net/ 14. http://www.islammemo.cc/ 15. http://usinfo.state.gov/media/Archive/2005/Apr/08-205989.html 16. http://usinfo.state.gov/ar/Archive/2005/May/13-401696.html 17. http://www.rense.com/ 18. http://www.vialls.com/ 19. http://www.conspiracyplanet.com/ 20. http://www.iacenter.org/ 21. http://www.workersworld.net/wwp 22. http://www.freearabvoice.org/ 23. http://usinfo.state.gov/media/Archive/2005/Apr/08-205989.html 24. http://usinfo.state.gov/ar/Archive/2005/May/13-401696.html 25. http://www.nogw.com/download/2005_plant_weapons.pdf 26. 
http://www.roadstoiraq.com/index.php?p=361 From checker at panix.com Thu Sep 15 01:37:15 2005 From: checker at panix.com (Premise Checker) Date: Wed, 14 Sep 2005 21:37:15 -0400 (EDT) Subject: [Paleopsych] NYT: The Origin of Invasive Species Message-ID: The Origin of Invasive Species New York Times, 5.6.12 (note date) http://query.nytimes.com/search/article-printpage.html?res=9A0DE5D71638F931A25755C0A9639C8B63 By Richard Conniff OUT OF EDEN An Odyssey of Ecological Invasion. By Alan Burdick. Illustrated. 324 pp. Farrar, Straus & Giroux. $25. FOR the past few years, I have been engaged in hand-to-hand combat with a 10-foot-tall alien -- growing thick as toothpicks in a box, just outside my door. Late in summer, I thrash through the marsh to coat the blades of the phragmites reed with herbicide. (Some environmentalists endorse this; others abhor it.) In midwinter, I attach a rope to each end of a board and use it to trample down the dead stalks. (Politically correct, but Bruegelesque, or possibly Chaplinesque.) Sometimes, when I lurch out of the grass, sweating, muddy and in a spirit of high ecological dudgeon, my dog, Maggie, backs away growling. (''Who is this and what is he doing in my territory?'') Phragmites, a handsome European reed with a feathery crimson plume, got dumped on the coast of New England in heaps of 19th-century ship ballast. It has choked almost every wetland east of the Mississippi. But in my little pocket of coastal marsh, I have opened up the landscape again and made room for the native grasses and cattails I remember from childhood. I have also made room for nesting mute swans, another invader. (Should I coddle the eggs and stop them from reproducing? Please do not advise.) I am also acutely aware that if I lay off for a season or two, the phragmites will come booming back. The natural world has gone wildly astray, through the kindness of human beings, and there isn't any good way to put it right again. Should we even bother trying? It turns out there are lots of people who are obsessed, dismayed and perhaps also occasionally made demented by invasive species. Often, they fight noble battles they know they are doomed to lose. In ''Out of Eden,'' the science writer Alan Burdick travels the world to chronicle their intriguing, undervalued lives, and those of the species that trouble them. ''The greatest threat to biological diversity is no longer just bulldozers or pesticides but, in a sense, nature itself,'' Burdick writes. Since its accidental introduction to Guam, the Australian brown tree snake has extirpated nine native bird species, sending three to extinction. One of the last remaining pairs of another species hangs on only because the nest is now barricaded behind electric wires, with branches pruned back to keep snakes from creeping in from nearby trees: ''The couple looked trapped in its safety, like those people in Manhattan who secure their apartments with eight locks on the front door.'' Alas, the couple also suffered from ''sexual disharmony,'' and small wonder: one night, researchers with traps caught seven snakes at the door. Because of its knack for hitchhiking in freight containers and aircraft landing gear, the brown tree snake may yet set up shop in Hawaii or San Diego or even South Florida. Thousands of other invasive species, many of them of more benign or even beautiful varieties, already surround us. 
They herald an era of ''creeping sameness,'' called by one scientist ''the Homogecene.'' Tourists freshly escaped from the howling depths of winter may delight in the birdsong and the tropical vegetation in Honolulu. But everything around them, another scientist tells Burdick, ''is introduced'': ''Not a single plant, none of the lowland birds in Hawaii are native.'' We are turning the world into ''a McDonald's ecosystem,'' with the same species living roughly the same way everywhere. Burdick is best describing the minute details of this change: how does the onslaught of new species affect Hawaii's native forests? The answer comes partly from studying fruit fly species and obscure soil dwellers like springtails and mites that also date back as far as the forests. We meet Hawaiian drosophilists who ritually stopper specimen vials with torn aloha shirts, to distinguish themselves from mainlanders who still use telltale cotton balls. We encounter a parasitologist who ''would be more comfortable . . . doing pretty much anything other than talking. If he were a bird, he might be a night heron . . . liable to stand there blinking in the illumination of a flashlight, then dart away.'' We go underground with a burly entomologist, Frank Howarth, who listens to the distinctive love songs sent out by different native species of the tiny insects called plant hoppers, which dwell in caves and lava tubes. (Howarth once discovered a new species on the property of Loretta Lynn and named it Oliarus lorettae, after her.) The plant hoppers tap out their songs along the rootlets of a native tree; Burdick likens them to human lovers making a crosstown phone call. Unfortunately, that tree is being supplanted by an invasive tree species with different roots, resulting in something like a subterranean silent spring: ''The singers are growing mute,'' Burdick writes, ''each one marooned on the island of itself, unable to communicate, to mate, to sustain its end of evolutionary conversation.'' A KIND of unnatural selection at the hands of introduced species is apparently commonplace, with the luckier native species managing to survive through accelerated evolution. In 1968, before introduced mosquitoes caused a malaria pandemic, some native birds in Hawaii used to sleep any which way, leaving themselves exposed to mosquito bites. By 1986, those birds had been weeded out, leaving only birds that slept with their legs tucked under their bodies, their bills and faces buried in the fluffed feathers on their backs. In Australia, toxic cane toads introduced from the Americas have favored the proliferation of snakes with mouths too small to swallow them. Burdick tries to make the case that nature is adaptable enough to handle the changes in our topsy-turvy world. When scientists figure out how to isolate the problem and interpret all the variables, it appears, for instance, that even having 500-pound feral pigs rooting through the forests of Hawaii may not do the permanent damage conservationists fear. Instead of causing local extinctions, he writes, ''most successful invaders simply blend into the ecological woodwork. . . . To the local eye, biological diversity seems to have increased. Isn't that a good thing?'' Maybe Burdick is simply trying to avoid the hazards of environmental alarmism, but surely this goes too far. 
It doesn't square with the evidence he has diligently accumulated: What about the Australian tree spreading rapidly through the Everglades that ''draws in so much water through its roots that it essentially converts open marsh habitats . . . into . . . dry land''? What about the European green crab, which ''single-leggedly crushed the soft-shell clam industry north of Cape Cod''? And how about, shortly after a cholera epidemic in South America in 1991, ships dumping ballast water that released the same strain of cholera bacteria into oyster beds at Mobile Bay in Alabama? The argument that many, or even most, invasive species cause no harm risks encouraging a ''What, me worry?'' attitude in a public already too complacent about environmental change. Henry David Thoreau once defined weeding as the business of ''making invidious distinctions with the hoe.'' But in the science of ''invasion biology,'' the distinctions about what to keep in and what to weed out sometimes really matter. Burdick's account of the researchers who struggle with this largely thankless work is graceful and inviting. He would have written a better book, though, if he had made a more cogent case for why, every now and then, we need to cough up the money to buy those workers a better hoe.

Andrew Solomon's ''Noonday Demon'' received a National Book Award in 2002. He is currently writing a book about families grappling with traumatic difference.

From shovland at mindspring.com Fri Sep 9 03:21:54 2005
From: shovland at mindspring.com (shovland at mindspring.com)
Date: Thu, 8 Sep 2005 23:21:54 -0400 (EDT)
Subject: [Paleopsych] Congressman Ron Paul on Iraq
Message-ID: <16335524.1126236115012.JavaMail.root@mswamui-bichon.atl.sa.earthlink.net>

HON. RON PAUL OF TEXAS BEFORE THE US HOUSE OF REPRESENTATIVES
September 8, 2005

Why We Fight

Many reasons have been given for why we fight and our youth must die in Iraq. The reasons now given for why we must continue this war bear no resemblance to the reasons given to gain the support of the American people and the United States Congress prior to our invasion in March of 2003. Before the war, we were told we faced an imminent threat to our national security from Saddam Hussein. This rationale, now proven grossly mistaken, has been changed. Now we're told we must honor the fallen by "completing the mission." To do otherwise would demean the sacrifice of those who have died or been wounded. Any lack of support for "completing the mission" is said, by the promoters of the war, to be unpatriotic, un-American, and detrimental to the troops. They insist the only way one can support the troops is to never waver on the policy of nation building, no matter how ill-founded that policy may be. The obvious flaw in this argument is that the mission, of which they so reverently speak, has changed constantly from the very beginning. Though most people think this war started in March of 2003, the seeds were sown many years before. The actual military conflict, involving U.S. troops against Iraq, began in January 1991. The prelude to this actually dates back over a hundred years, when the value of Middle East oil was recognized by the industrialized West. Our use of troops to eject Saddam Hussein from Kuwait was the beginning of the current conflict with Muslim fundamentalists who have been, for the last decade, determined to force the removal of American troops from all Muslim countries-- especially the entire Arabian Peninsula, which they consider holy.
Though the strategic and historic reasons for our involvement in the Middle East are complex, the immediate reasons given in 2002 and 2003 for our invasion of Iraq were precise. The only problem is they were not based on facts. The desire by American policymakers to engineer regime change in Iraq had been smoldering since the first Persian Gulf conflict in 1991. This reflected a dramatic shift in our policy, since in the 1980s we maintained a friendly alliance with Saddam Hussein as we assisted him in his war against our arch nemesis, the Iranian Ayatollah. Most Americans ignore that we provided assistance to this ruthless dictator with biological and chemical weapons technology. We heard no complaints in the 1980s about his treatment of the Kurds and Shiites, or the ruthless war he waged against Iran. Our policy toward Iraq played a major role in convincing Saddam Hussein he had free rein in the Middle East, and the results demonstrate the serious shortcomings of our foreign policy of interventionism that we have followed now for over a hundred years.

In 1998 Congress capitulated to the desires of the Clinton administration and overwhelmingly passed the Iraq Liberation Act, which stated quite clearly that our policy was to get rid of Saddam Hussein. This act made it official: "The policy of the United States to support efforts to remove the regime headed by Saddam Hussein." This resolution has been cited on numerous occasions by neo-conservatives as justification for the pre-emptive, deliberate invasion of Iraq. When the resolution was debated, I saw it as a significant step toward a war that would bear no good fruit. No legitimate national security concerns were cited for this dramatic and serious shift in policy. Shortly after the new administration took office in January 2001, this goal of eliminating Saddam Hussein quickly morphed into a policy of remaking the entire Middle East, starting with regime change in Iraq. This aggressive interventionist policy surprised some people, since the victorious 2000 campaign indicated we should pursue a foreign policy of humility, no nation building, reduced deployment of our forces overseas, and a rejection of the notion that we serve as world policemen.

The 9/11 disaster proved a catalyst to push for invading Iraq and restructuring the entire Middle East. Though the plan had existed for years, it quickly was recognized that the fear engendered by the 9/11 attacks could be used to mobilize the American people and Congress to support this war. Nevertheless, supposedly legitimate reasons had to be given for the already planned pre-emptive war, and as we now know the "intelligence had to be fixed to the policy." Immediately after 9/11 the American people were led to believe that Saddam Hussein somehow was responsible for the attacks. The fact that Saddam Hussein and Osama bin Laden were enemies, not friends, was kept from the public by a compliant media and a lazy Congress. Even today many Americans still are convinced of an alliance between the two. The truth is Saddam Hussein never permitted al Qaeda into Iraq out of fear that his secular government would be challenged. And yet today we find that al Qaeda is now very much present in Iraq, and causing chaos there. The administration repeatedly pumped out alarming propaganda that Saddam Hussein was a threat to us with his weapons of mass destruction, meaning nuclear, biological, and chemical.
Since we helped Saddam Hussein obtain biological and chemical weapons in the 1980s, we assumed that he had maintained a large supply-- which of course turned out not to be true. The people, frightened by 9/11, easily accepted these fear-mongering charges. Behind the scenes many were quite aware that Israel's influence on our foreign policy played a role. She had argued for years, along with the neo-conservatives, for an Iraqi regime change. This support was nicely coordinated with the Christian Zionists' enthusiasm for the war. As these reasons for the war lost credibility and support, other reasons were found for why we had to fight. As the lone superpower, we were told we had a greater responsibility to settle the problems of the world lest someone else get involved. Maintaining and expanding our empire is a key element of the neo-conservative philosophy. This notion that we must fight to spread American goodness was well received by these neo-Jacobins. They saw the war as a legitimate moral crusade, arguing that no one should be allowed to stand in our way! In their minds using force to spread democracy is legitimate and necessary.

We also were told the war was necessary for national security purposes because of the threat Saddam Hussein presented, although the evidence was fabricated. Saddam Hussein's ability to attack us was non-existent, but the American people were ripe for alarming predictions by those who wanted this war. Of course the routine canard for our need to fight, finance, and meddle around the world ever since the Korean War was repeated incessantly: UN Resolutions had to be enforced lest the United Nations be discredited. The odd thing was that on this occasion the United Nations itself did everything possible to stop our pre-emptive attack. And as it turned out, Saddam Hussein was a lot closer to compliance than anyone dreamed. It wasn't long before concern for the threat of Saddam Hussein became near hysterical, drowning out any reasoned opposition to the planned war.

The one argument that was not publicly used by those who propagandized for the war may well be the most important-- oil. Though the administration in 1990 hinted briefly that we had to eject Saddam Hussein from Kuwait because of oil, the stated reasons for that conflict soon transformed into stopping a potential Hitler and enforcing UN resolutions. Publicly oil is not talked about very much, but behind the scenes many acknowledge this is the real reason we fight. It is not only the politicians who say this. American consumers have always enjoyed cheap gasoline and want it kept that way. The real irony is that the war has reduced Iraqi oil production by one-half million barrels per day and prices are soaring-- demonstrating another unintended economic consequence of war.

Oil in the Middle East has been a big issue since the industrial revolution, when it was realized that the black substance bubbling out of the ground in places like Iraq had great value. It's interesting to note that in the early 20th century Germany, fully aware of oil's importance, allied itself with the Turkish Ottoman Empire and secured the earliest rights to drill Iraqi oil. They built the Anatolia railroad between Baghdad and Basra, and obtained oil and mineral rights on twenty kilometers on each side of this right-of-way. World War I changed all this, allowing the French and the British to divide the oil wealth of the entire Middle East.
The Versailles Treaty created the artificial nation of Iraq, and it wasn't long before American oil companies were drilling and struggling to participate in the control of Middle East oil. But it was never smooth sailing for any occupying force in Iraq. After WWI, the British generals upon arriving to secure "their" oil said: "Our armies do not come into your cities and lands as conquerors or enemies, but as liberators." Not long afterward a jihad was declared against Britain and eventually they were forced to leave. The more things change, the more they stay the same! Too bad we are not better at studying history.

After World War II the U.S. emerged as the #1 world power, and moved to assume what some believed was our responsibility to control Middle East oil in competition with the Soviets. This role prompted us to use our CIA, along with the help of the British, to oust the democratically elected Mohammed Mossadegh from power in Iran and install the Shah as a U.S. puppet. We not only supported Saddam Hussein against Iran, we also supported Osama bin Laden in the 1980s-- aggravating the situation in the Middle East and causing unintended consequences. With CIA assistance we helped develop the educational program to radicalize Islamic youth in many Arab nations, especially in Saudi Arabia, to fight the Soviets. We even provided a nuclear reactor to Iran in 1967-- which today leads us to threaten another war. All of this has come back to haunt us. Meddling in the affairs of others has consequences.

Finally, after years of plotting and maneuvering, the neo-conservative plan to invade Iraq came before the U.S. House in October 2002 to be rubber-stamped. Though the plan was hatched years before, and the official policy of the United States government was to remove Saddam Hussein ever since 1998, various events delayed the vote until this time. By October the vote was deemed urgent, so as to embarrass anyone who opposed it. This would make them politically vulnerable in the November election. The ploy worked. The resolution passed easily, and it served the interests of proponents of war in the November election. The resolution, HJ RES 114, explicitly cited the Iraq Liberation Act of 1998 as one of the reasons we had to go to war. The authorization granted to the President to use force against Iraq cited two precise reasons: 1. "To defend the national security of the U.S. against the continuing threat posed by Iraq and" 2. "Enforce all relevant United Nations Council resolutions regarding Iraq." Many other reasons were given to stir the emotions of the American public and the U.S. Congress, reasons that were grossly misleading and found not to be true.

The pretense of a legal justification was a sham. The fact that Congress is not permitted under the Constitution to transfer the war power to a president was ignored. Only Congress can declare war, if we were inclined to follow the rule of law. To add insult to injury, HJ RES 114 cited United Nations resolutions as justifications for the war. Ignoring the Constitution while using the UN to justify the war showed callous disregard for the restraints carefully written in the Constitution. The authors deliberately wanted to make war difficult to enter without legislative debate, and they purposely kept the responsibility out of the hands of the executive branch. Surely they never dreamed an international government would have influence over our foreign policy or tell us when we should enter into armed conflict.
The legal maneuvering to permit this war was tragic to watch, but the notion that Saddam Hussein-- a third world punk without an air force, navy, and hardly an army or any anti-aircraft weaponry-- was an outright threat to the United States six thousand miles away, tells you how hysterical fear can be used to pursue a policy of needless war for quite different reasons. Today, though, all the old reasons for going to war have been discredited, and are no longer used to justify continuing the war. Now we are told we must ?complete the mission,? and yet no one seems to know exactly what the mission is or when it can be achieved. By contrast, when war is properly declared against a country we can expect an all-out effort until the country surrenders. Without a declaration of war as the Constitution requires, it?s left to the President to decide when to start the war and when the war is over. We had sad experiences with this process in Korea and especially in Vietnam. Pursuing this war merely to save face, or to claim it?s a way to honor those who already have died or been wounded, is hardly a reason that more people should die. We?re told that we can?t leave until we have a democratic Iraq. But what if Iraq votes to have a Shiite theocracy, which it looks like the majority wants as their form of government-- and women, Christians, and Sunnis are made second-class citizens? It?s a preposterous notion and it points out the severe shortcomings of a democracy where a majority rules and minorities suffer. Thankfully, our founding fathers understood the great dangers of a democracy. They insisted on a constitutional republic with a weak central government and an executive branch beholden to the legislative branch in foreign affairs. The sooner we realize we can?t afford this war the better. We?ve gotten ourselves into a civil war within the Islamic community. But could it be, as it had been for over a hundred years prior to our invasion, that oil really is the driving issue behind a foreign presence in the Middle East? It?s rather ironic that the consequence of our intervention has been skyrocketing oil prices, with Iraqi oil production still significantly below pre-war levels. If democracy is not all it?s cracked up to be, and a war for oil is blatantly immoral and unproductive, the question still remains-- why do we fight? More precisely, why should we fight? When is enough killing enough? Why does man so casually accept war, which brings so much suffering to so many, when so little is achieved? Why do those who suffer and die so willingly accept the excuses for the wars that need not be fought? Why do so many defer to those who are enthused about war, and who claim it?s a solution to a problem, without asking them why they themselves do not fight? It?s always other men and other men?s children who must sacrifice life and limb for the reasons that make no sense, reasons that are said to be our patriotic duty to fight and die for. How many useless wars have been fought for lies that deserved no hearing? When will it all end? Why We Should Not Fight Since no logical answers can be given for why we fight, it might be better to talk about why we should not fight. A case can be made that if this war does not end soon it will spread and engulf the entire region. We?ve already been warned that war against Iran is an option that remains on the table for reasons no more reliable than those given for the pre-emptive strike against Iraq. 
Let me give you a few reasons why this war in Iraq should not be fought. It is not in our national interest. On the contrary, pursuing this war endangers our security, increases the chances of a domestic terrorist attack, weakens our defenses, and motivates our enemies to join together in opposition to our domineering presence around the world. Does anyone believe that Russia, China, and Iran will give us free rein over the entire Middle East and its oil? Tragically, we're setting the stage for a much bigger conflict. It's possible that this war could evolve into something much worse than Vietnam.

This war has never been declared. It's not a constitutional war, and without a proper beginning there can be no proper ending. The vagueness instills doubts in all Americans, both supporters and non-supporters, as to what will be accomplished. Supporters of the war want total victory, which is not achievable with a vague mission. Now the majority of Americans are demanding an end to this dragged-out war that many fear will spread before it's over.

It's virtually impossible to beat a determined guerrilla resistance to a foreign occupying force. After 30 years the Vietnam guerrillas, following unbelievable suffering, succeeded in forcing all foreign troops from their homeland. History shows that Iraqi Muslims have always been determined to resist any foreign power on their soil. We ignored that history and learned nothing from Vietnam. How many lives, theirs and ours, are worth losing to prove the tenacity of guerrilla fighters supported by a large number of local citizens?

Those who argue that it's legitimate to protect "our oil" someday must realize that it's not our oil, no matter how strong and sophisticated our military is. We know the war so far has played havoc with oil prices, and the market continues to discount problems in the region for years to come. No end is in sight regarding the uncertainty of Middle East oil production caused by this conflict. So far our policies inadvertently have encouraged the development of an Islamic state, with Iranian-allied Shiites in charge. This has led to Iranian support for the insurgents, and has placed Iran in a position of becoming the true victor in this war as its alliance with Iraq grows. This could place Iran and its allies in the enviable position of becoming the oil powerhouse in the region, if not the world, once it has control over the oil fields near Basra. This unintended alliance with Iran, plus the benefit to Osama bin Laden's recruiting efforts, will in the end increase the danger to Israel by rallying the Arab and Muslim people against us.

One of the original stated justifications for the war has been accomplished. Since 1998 the stated policy of the United States government was to bring about regime change and get rid of Saddam Hussein. This has been done, but instead of peace and stability we have sown the seeds of chaos. Nevertheless, the goal of removing Saddam Hussein has been achieved and is a reason to stop the fighting. There were no weapons of mass destruction, no biological or chemical or nuclear weapons, so we can be assured the Iraqis pose no threat to anyone, certainly not to the United States. No evidence existed to show an alliance between Iraq and al Qaeda before the war, and ironically our presence there is now encouraging al Qaeda and Osama bin Laden to move in to fill the vacuum we created.
The only relationship between Iraq and 9/11 is that our policy in the Middle East continues to increase the likelihood of another terrorist attack on our homeland. We should not fight because it?s simply not worth it. What are we going to get for nearly 2,000 soldier deaths and 20 thousand severe casualties? Was the $350 billion worth it? This is a cost that will be passed on to future generations through an expanded national debt. I?ll bet most Americans can think of a lot better ways to have spent this money. Today?s program of guns and butter will be more damaging to our economy than a similar program was in the 1960s, which gave us the stagflation of the 1970s. The economic imbalances today are much greater than they were in those decades. Eventually, we will come to realize that the Wilsonian idealism of using America?s resources to promote democracy around the world through force is a seriously flawed policy. Wilson pretended to be spreading democracy worldwide, and yet women in the U.S. at that time were not allowed to vote. Democracy, where the majority dictates the rules, cannot protect minorities and individual rights. And in addition, using force to impose our will on others almost always backfires. There?s no reason that our efforts in the 21st century to impose a western style government in Iraq will be any more successful than the British were after World War I. This especially can?t work if democracy is only an excuse for our occupation and the real reasons are left unrecognized. It boils down to the fact that we don?t really have any sound reasons for continuing this fight. The original reasons for the war never existed, and the new reasons aren?t credible. We hear only that we must carry on so those who have already suffered death and injury didn?t do so in vain. If the original reasons for starting the war were false, simply continuing in the name of those fallen makes no sense. More loss of life can never justify earlier loss of life if they died for false reasons. This being the case, it?s time to reassess the policies that have gotten us into this mess. What does all this mean? The mess we face in the Middle East and Afghanistan, and the threat of terrorism within our own borders, are not a result of the policies of this administration alone. Problems have been building for many years, and have only gotten much worse with our most recent policy of forcibly imposing regime change in Iraq. We must recognize that the stalemate in Korea, the loss in Vietnam, and the quagmire in Iraq and Afghanistan all result from the same flawed foreign policy of interventionism that our government has pursued for over 100 years. It would be overly simplistic to say the current administration alone is responsible for the mess in Iraq. By rejecting the advice of the Founders and our early presidents, our leaders have drifted away from the admonitions against entangling alliances and nation building. Policing the world is not our calling or our mandate. Besides, the Constitution doesn?t permit it. Undeclared wars have not enhanced our national security. The consensus on foreign interventionism has been pervasive. Both major parties have come to accept our role as the world?s policeman, despite periodic campaign rhetoric stating otherwise. The media in particular, especially in the early stages, propagandize in favor of war. It?s only when the costs become prohibitive and the war loses popular support that the media criticize the effort. 
It isn?t only our presidents that deserve the blame when they overstep their authority and lead the country into inappropriate wars. Congress deserves equally severe criticism for acquiescing to the demands of the executive to go needlessly to war. It has been known throughout history that kings, dictators, and the executive branch of governments are always overly eager to go to war. This is precisely why our founders tried desperately to keep decisions about going to war in the hands of the legislature. But this process has failed us for the last 65 years. Congress routinely has rubber stamped the plans of our presidents and even the United Nations to enter into war through the back door. Congress at any time can prevent or stop all undue foreign entanglements pursued by the executive branch merely by refusing to finance them. The current Iraq war, now going on for 15 years, spans the administration of three presidents and many congresses controlled by both parties. This makes Congress every bit as responsible for the current quagmire as the president. But the real problem is the acceptance by our country as a whole of the principle of meddling in the internal affairs of other nations when unrelated to our national security. Intervention, no matter how well intended, inevitably boomerangs and comes back to haunt us. Minding our own business is not only economical; it?s the only policy that serves our national security interests and the cause of peace. The neo-conservatives who want to remake the entire Middle East are not interested in the pertinent history of this region. Creating an artificial Iraq after World War I as a unified country was like mixing water and oil. It has only led to frustration, anger, and hostilities-- with the resulting instability creating conditions ripe for dictatorships. The occupying forces will not permit any of the three regions of Iraq to govern themselves. This is strictly motivated by a desire to exert control over the oil. Self-determination and independence for each region, or even a true republican form of government with a minimalist central authority is never considered-- yet it is the only answer to the difficult political problems this area faces. The relative and accidental independence of the Kurds and the Shiites in the 1990s served those regions well, and no suicide terrorism existed during that decade. The claim that our immediate withdrawal from Iraq would cause chaos is not proven. It didn?t happen in Vietnam or even Somalia. Even today, the militias of the Kurds and the Shiites may well be able to maintain order in their regions much better than we can currently. Certainly the Sunnis can take care of themselves, and it might be in their best interests for all three groups not to fight each other when we leave. One thing for sure: if we left no more young Americans would have to die for an indefinable cause. Instead, we have been forcing on the people of Iraq a type of democracy that, if implemented, will mean an Islamic state under Sharia? law. Already we read stories of barbers no longer being safe shaving beards; Christians are threatened and forced to leave the country; and burqas are returning out of fear. Unemployment is over 50%, and oil production is still significantly below pre-war levels. These results are not worth fighting and dying for. In this war, like all others, the propagandists and promoters themselves don ?t fight, nor do their children. It?s always worth the effort to wage war when others must suffer and die. 
Many of those who today pump the nation up with war fever were nowhere to be found when their numbers were called in the 1960s -- when previous presidents and Congresses thought so little about sending young men off to war. Then it was in their best interests to find more important things to do -- despite the so-called equalizing draft. The inability of taxpayers to fund both guns and butter has not deterred those who smell the glory of war. Notoriously, great nations fall once their appetite for foreign domination outstrips their citizens' ability or willingness to pay. We tried the guns-and-butter approach in the 1960s with bad results, and the same will happen again as a consequence of the current political decision not to cut back on any expenditure, domestic or foreign. "Veto nothing" is current policy! Tax, borrow, and print to pay the bills is today's conventional wisdom. The problem is that all the bills eventually must be paid. There's no free lunch, and no free war.

The economic consequences of such a policy are well known and documented. Excessive spending leads to excessive deficits, higher taxes, and more borrowing and inflation -- which spells economic problems that always clobber the middle class and the poor. Already the suffering has begun. A lackluster recovery, low-paying jobs, outsourcing, and social unrest already are apparent. This economic price we pay, along with the human suffering, is an extravagant price for a war that was started with false information and now is prolonged for reasons unrelated to our national security.

This policy has led to excessive spending overseas and neglect at home. It invites enemies to attack us, and drains the resources needed to defend our homeland and care for our own people. We are obligated to learn something from the tragedy of Katrina about the misallocation of funds away from our infrastructure to the rebuilding of Iraq after first destroying it. If ever there was a time for us to reassess our policy of foreign interventionism, it is today. It's time to look inward and attend to the constitutional needs of our people, and forget about the grandiose schemes to remake the world in our image through the use of force. These efforts not only are doomed to fail, as they have for the past one hundred years, but they invite economic and strategic military problems that are harmful to our national security interests.

We've been told that we must fight to protect our freedoms here at home. These reasons are given to make the sacrifices more tolerable and noble. Without an honorable cause, the suffering becomes intolerable. Hiding from the truth, though, in the end is no panacea for a war that promises no peace.

The most important misjudgment regarding Iraq that must be dealt with is the charge that Muslim terrorists attack us out of envy for our freedoms, our prosperity, and our way of life. There is no evidence this is the case. On the contrary, those who have extensively researched this issue conclude that the #1 reason suicide terrorists attack anywhere in the world is that their land is occupied by a foreign military power. Pretending otherwise, and constantly expanding our military presence in more Arab and Muslim countries as we have since 1990, has only increased the danger of more attacks on our soil, as well as in those countries that have allied themselves with us. If we deny this truth we do so at our own peril. It's not unusual for the war crusaders to condemn those who speak the truth in an effort to end an unnecessary war.
They claim those who want honest reasons for the enormous sacrifice are unpatriotic and un-American, but these charges only serve to exacerbate the social unrest. Any criticism of policy, no matter how flawed the policy is, is said to be motivated by a lack of support for the troops. Yet it is preposterous to suggest that a policy that would have spared the lives of 1,900 servicemen and women lacks concern for the well-being of our troops. The absence of good reasoning to pursue this war prompts the supporters of the war to demonize the skeptics and critics. They have no other defense.

Those who want to continue this war accuse those who lost loved ones in Iraq, and oppose the war, of using the dead for personal political gain. But what do the war proponents do when they claim the reason we must fight on is to honor the sacrifice of the military personnel we lost by completing the mission? The big difference is that one group argues for saving lives, while the other justifies more killing. And by that logic, the additional deaths will require even more killing to make sure they too have not died in vain. Therefore, the greater the number who have died, the greater the motivation to complete the mission. This defies logic. This argument to persevere has been used throughout history to continue wars that could and should have ended much sooner. This was true for World War I and Vietnam.

A sad realism struck me recently, reading how our Marines in Afghanistan must now rely on donkey transportation in their efforts at nation building and military occupation. Evidently the Taliban is alive and well, and Osama bin Laden remains in this region. But doesn't this tell us something about our naïve assumption that our economic advantages and technical knowledge can subdue and control anybody? We're traversing Afghan mountains on donkeys, and losing lives daily in Baghdad to homemade primitive bombs. Our power and dominance clearly is limited by the determination of those who see us as occupiers, proving that more money and sophisticated weapons alone won't bring us victory. Sophisticated weapons and the use of unlimited military power are no substitute for diplomacy designed to promote peace while reserving force only for defending our national interests.

Changing our policy of meddling in the affairs of others won't come quickly or easily. But a few signals to indicate a change in our attitude would go a long way toward bringing peace to a troubled land.

1. We must soon stop the construction of all permanent bases in Iraq and any other Muslim country in the region -- and Congress can do this through the budget process. Think of how we would react if the Chinese had the military edge on us and laid claims to the Gulf of Mexico, building bases within the U.S. in order to promote their superior way of life. Isn't it ironic that we close down bases here at home while building new ones overseas? Domestic bases might well promote security, while bases in Muslim nations only elicit more hatred toward us.

2. The plans for the biggest U.S. embassy in the world, costing nearly $1 billion, must be canceled. This structure in Baghdad sends a message, like the military bases being built, that we expect to be in Iraq and running Iraq for a long time to come.

3. All military forces, especially on the Arabian Peninsula, must be moved offshore at the earliest time possible.
4. All responsibility for security and control of the oil must be transferred from the United States to the Iraqis as soon as possible, within months, not years.

The time will come when our policies dealing with foreign affairs will change for the better. But that will be because we can no longer afford the extravagance of war. This will occur when the American people realize that war causes too much suffering here at home, and the benefits of peace again become attractive to us all. Part of this recognition will involve a big drop in the value of the dollar, higher interest rates, and rampant price inflation.

Though these problems are serious and threaten our freedoms and way of life, there's every reason to work for the traditional constitutional foreign policy that promotes peace over war, while not being tempted to mold the world in our image through force. We should not forget that what we did not achieve by military force in Vietnam was essentially achieved with the peace that came from our military failure and withdrawal of our armed forces. Today, through trade and peace, U.S. investment and economic cooperation have westernized Vietnam far more than our military efforts ever did.

We must remember that initiating force to impose our will on others negates all the goodness for which we profess to stand. We cannot be fighting to secure our freedom if we impose laws like the Patriot Act and a national ID card on the American people. Unfortunately, we have lost faith and confidence in the system of government with which we have been blessed. Today too many Americans support, at least in the early stages, the use of force to spread our message of hope and freedom. They too often are confused by the rhetoric that our armies are needed to spread American goodness. Using force injudiciously, instead of spreading the worthy message of American freedom through peaceful means, antagonizes our enemies, alienates our allies, and threatens personal liberties here at home while burdening our economy.

If confidence can be restored in our American traditions of peace and trade, our influence throughout the world would be enhanced, just as it was once we rejected the military approach in Vietnam. This change in policy can come easily once the people of this country decide that there is a better way to conduct ourselves throughout the world. Whenever the people turn against war as a tool to promote certain beliefs, the war ceases. That's what we need today. Then we can get down to the business of setting an example of how peace and freedom bring prosperity in an atmosphere that allows excellence and virtue to thrive. A powerful bureaucratic military state negates all efforts to preserve these conditions that have served America so well up until recent times. That is not what the American dream is all about. Without a change in attitude, the American dream dies. A simple change that restates the principles of liberty enshrined in our Constitution will serve us well in solving all the problems we face. The American people are up to the task; I hope Congress is as well.

From shovland at mindspring.com Sun Sep 11 16:26:57 2005
From: shovland at mindspring.com (shovland at mindspring.com)
Date: Sun, 11 Sep 2005 09:26:57 -0700 (GMT-07:00)
Subject: [Paleopsych] Saudia Arabia May Already Be At Peak Oil Or Past It
Message-ID: <13423610.1126456017487.JavaMail.root@mswamui-chipeau.atl.sa.earthlink.net>

Twilight in the Desert: The Coming Saudi Oil Shock and the World Economy, by Matthew R.
Simmons

In 1956, Shell Oil geologist M. King Hubbert discovered a grand illusion in the American oil industry. For tax purposes, he noted, American oil companies regularly delayed the declaration of new oil reserves by years and even decades. The result was a false impression that new oil was being found all the time. In fact, discoveries had peaked in 1936. Based on this observation, Mr. Hubbert predicted that American oil production would peak in 1969. He was wrong by one year. We briefly produced 10 million barrels a day in 1970 but have never hit that level since. Even with the addition of Prudhoe Bay, Alaska, American production has slipped to eight million barrels a day -- which is why we import 60% of our oil.

Across the oil industry, the uneasy feeling is growing that world production may be approaching its own "Hubbert's Peak." The last major field yielding more than a million barrels a day was found in Mexico in 1976. New discoveries peaked in 1960, and production outside the Middle East reached its high point in 1997. Meanwhile world demand continues to accelerate by 3% a year. Indonesia, once a major exporter, now imports its oil. The Saudis claim to have huge oil reserves. Do they really?

Before an uneasy feeling grows into full-blown pessimism, however, one must consider the supposedly vast oil resources lying beneath Saudi Arabia. The Saudis possess 25% of the world's proven reserves. They routinely proclaim that, for at least the next 50 years, they could easily double their current output of 10 million barrels a day. But is this true? Matthew R. Simmons, a Texas investment banker with a Harvard Business School degree and 20 years' experience in oil, has his doubts. In "Twilight in the Desert" (John Wiley & Sons, 422 pages, $24.95), Mr. Simmons argues that the Saudis may be deceiving the world and themselves. If only half of his claims prove to be true, we could be in for some nasty surprises.

First, Mr. Simmons notes, all Saudi claims exist behind a veil of secrecy. In 1982, the Saudi government took complete control of Aramco (the Arabian American Oil Co.) after four decades of co-ownership with a consortium of major oil companies. Since then Aramco has never released field-by-field figures for its oil production. In fact, no OPEC member is very forthcoming. The cartel sets production quotas according to a country's reserves, so each member has reason to exaggerate. Meanwhile, OPEC nations are constantly cheating one another by overproducing, so none wants to publish official statistics.

As a result, the world's most reliable source for OPEC production is a little company called Petrologistics, located over a grocery store in Geneva. Conrad Gerber, the principal, claims to have spies in every OPEC port. For all we know, Mr. Gerber is making up his numbers, but everyone -- including the Paris-based International Energy Agency -- takes him seriously, since OPEC produces nothing better. The Saudis, for their part, obviously enjoy their role as producer of last resort and feel content to let everyone think that they have things under control. Yet as Mr. Simmons observes: "History has frequently shown that once secrecy envelops the culture of either a company or a country, those most surprised when the truth comes out are often the insiders who created the secrets in the first place."

Mr. Simmons became suspicious of Saudi claims after taking a guided tour of Aramco facilities in 2003.
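The "Hubbert's Peak" reasoning in the review treats cumulative extraction from a region as a logistic, S-shaped curve, so the yearly production rate rises, tops out when roughly half of the ultimately recoverable oil has been pumped, and then declines. The short Python sketch below only illustrates that shape; the parameter values (ultimately recoverable reserves, steepness, and peak year) are illustrative assumptions chosen to mimic the U.S. figures quoted above, not Hubbert's or Simmons's own numbers.

    import math

    def hubbert_production(year, urr, k, peak_year):
        """Annual production implied by a logistic depletion curve.
        Cumulative output Q(t) = urr / (1 + exp(-k*(t - peak_year))),
        so the yearly rate is its derivative, which peaks at peak_year."""
        e = math.exp(-k * (year - peak_year))
        return urr * k * e / (1.0 + e) ** 2   # same units as urr, per year

    # Assumed, purely illustrative inputs: ~230 billion barrels ultimately
    # recoverable, a 1970 peak, and a steepness of 0.065 per year.
    URR, K, PEAK = 230.0, 0.065, 1970
    for yr in (1936, 1956, 1970, 2005):
        bbl_per_year = hubbert_production(yr, URR, K, PEAK)  # billion barrels/year
        print(yr, round(bbl_per_year * 1000 / 365, 1), "million barrels/day (model)")

On these assumed inputs the modeled rate tops out near the 10 million barrels a day the review cites for 1970 and then falls away, which is the qualitative pattern Simmons argues may also apply to Saudi Arabia.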
To penetrate the veil, he turned to the electronic library of the Society of Petroleum Engineers, which regularly publishes technical papers by field geologists. After downloading and studying more than 200 reports by Aramco personnel, Mr. Simmons came up with his own portrait of Saudi Arabia's oil resources. It is not a pretty picture. Almost 90% of Saudi production comes from six giant fields, all of them discovered before 1967. The "king" of this grouping -- the 2000-square-mile Ghawar field near the Persian Gulf -- is the largest oil field in the world. But if Saudi geology follows the pattern found elsewhere, it is unlikely that any new fields lie nearby. Indeed, Aramco has prospected extensively outside the Ghawar region but found nothing of significance. In particular, the Arab D stratum -- the source rock of the Ghawar field -- has long since eroded in other parts of the Arabian Peninsula. The six major fields, having all produced at or near capacity for almost 40 years, are showing signs of age. All require extensive water injection to maintain their current flow. Based on these observations, Mr. Simmons doubts that Aramco can increase its output to anywhere near the level it claims. In fact, he believes that Saudi production may have already peaked.

Is he right? Mr. Simmons's critics say that, by relying on technical papers, he has biased his survey, since geologists like to concentrate on problem wells the way that doctors focus on sick patients. Still, the experience in America and the rest of the world shows that oil fields don't last forever. Prudhoe Bay, which was producing 1.2 million barrels a day five years after being brought on line in 1976, is now down to less than 400,000. The mystery of Saudi oil capacity bears an eerie resemblance to Saddam Hussein's apparent belief that his scientists had developed weapons of mass destruction. Who are the deceivers and who is the deceived? No one yet knows the answers. But at least Matthew Simmons is asking the questions. (Wall Street Journal, June 28, 2005)

From shovland at mindspring.com Sun Sep 11 19:41:31 2005
From: shovland at mindspring.com (shovland at mindspring.com)
Date: Sun, 11 Sep 2005 12:41:31 -0700 (GMT-07:00)
Subject: [Paleopsych] A New Day in America
Message-ID: <2679861.1126467691511.JavaMail.root@mswamui-backed.atl.sa.earthlink.net>

A New Day in America

The odds are pretty good that in the coming elections the Republicans will return to minority party status. While the Republican years have been good for the top 20% of Americans, they have not been so good for the rest of us. The mass of Americans has probably been too patient and too slow to learn that right-wing rule is not good for them, but I think they are finally beginning to understand that. It is too late for the Republicans to do much to change that view. The remaining items on their agenda will only reinforce the impression that they intend to benefit the few at the expense of the many. The Republicans will be leaving us with a massive hangover of problems caused by their intent and by their neglect. Our national balance sheet is a wreck. In order to repair it we will have to repeal Bush's tax cuts and possibly increase taxes on the top 20% in order to recover the money that was improperly given to them. Both individuals and corporations agree that the health care crisis is getting worse.
The people and companies who are presently entrusted with this vital public utility are not solving the problem, and it is time to consider a European solution. We do not know for sure whether the energy price run-up is due to greed or due to competition for a declining resource. We do know that we are paying a lot more at the gas pump, and we are all wondering what our winter heating bills will look like. Collectively and as individuals, we will have to get as serious about this as we were in the 1970s. It is possible that we have gone past the point of no return, but I am not giving up and I hope that you won't either. Our best hope is to act together to make things better.

Steve Hovland
San Francisco
September 11, 2005

From checker at panix.com Sat Sep 17 01:27:32 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 16 Sep 2005 21:27:32 -0400 (EDT)
Subject: [Paleopsych] WP: Victor Davis Hanson: Why We Must Stay in Iraq
Message-ID:

Victor Davis Hanson: Why We Must Stay in Iraq
http://www.washingtonpost.com/wp-dyn/content/article/2005/09/02/AR2005090202678_pf.html
5.9.4

Vietnam is once again in the air. Last month's antiwar demonstrations in Crawford, Tex., have been heralded as the beginning of an antiwar movement that will take to the streets like the one of 30 years ago. Influential pundits -- in the manner of a gloomy Walter Cronkite after the Tet offensive -- are assuring us that we can't win in Iraq and that we have no option but a summary withdrawal. We may even have a new McGovern-style presidential "peace" candidate in Wisconsin Sen. Russ Feingold. America's most contentious war is being freely evoked to explain the "quagmire" we are supposedly now in. Vietnam is an obvious comparison given the frustration of asymmetrical warfare and savage enemies who escape our conventional power. But make no mistake, Iraq is not like Vietnam, and it must not end like Vietnam. Despite our tragic lapses, leaving now would be a monumental mistake -- and one that we would all too soon come to regret.

If we fled precipitously, moderates in the Middle East could never again believe American assurances of support for reform and would have to retreat into the shadows -- or find themselves at the mercy of fascist killers. Jihadists would swell their ranks as they hyped their defeat of the American infidels. Our forward strategy of hitting terrorists hard abroad would be discredited and replaced by a return to the pre-9/11 tactics of a few cruise missiles and writs. And loyal allies in Eastern Europe, the United Kingdom, Australia and Japan, along with new friends in India and the former Soviet republics, would find themselves leaderless in the global struggle against Islamic radicalism.

The specter of Vietnam will also turn on those who embrace it. Iraq is not a surrogate theater of the Cold War, where national liberationists, fueled by the romance of radical egalitarianism, are fortified by nearby Marxist nuclear patrons. The jihadists have an 8th-century agenda of gender apartheid, religious intolerance and theocracy. For all its pyrotechnics, the call for a glorious return to the Dark Ages has found no broad constituency. Nor is our army in Iraq conscript, but volunteer and professional. The Iraqi constitutional debate is already light-years ahead of anything that emerged in Saigon. And there is an exit strategy, not mission creep -- we will consider withdrawal as the evolution to a legitimate government continues and the Iraqi security forces grow.
But the comparison to Vietnam may be instructive regarding another aspect -- the aftershocks of a premature American departure. Leaving Vietnam to the communists did not make anyone safer. The flight of the mid-1970s energized U.S. enemies in Iran, Cambodia, Afghanistan and Central America, while tearing our own country apart for nearly a quarter-century. Today, most Americans are indeed very troubled over the war in Iraq -- but mostly they are angry about not winning quickly, rather than resigned to losing amid recriminations. We forget that once war breaks out, things usually get far worse before they get better. We should remember that 1943, after we had entered World War II, was a far bloodier year than 1938, when the world left Hitler alone. Similarly, 2005 may have brought more open violence in Iraq than was visible during Saddam's less publicized killings of 2002. So it is when extremists are confronted rather than appeased. But unlike the time before the invasion, when we patrolled Iraq's skies while Saddam butchered his own with impunity below, there is now a hopeful future for Iraq. It is true that foreign terrorists are flocking into the country, the way they earlier crossed the Pakistani border into Afghanistan to fight with the Taliban, and that this makes the short-term task of securing the country far more difficult. But again, just as there were more Nazis and fascists out in the open in 1941 than before the war, so too there were almost none left by 1946. If we continue to defeat the jihadists in Iraq -- and the untold story of this war is that the U.S. military has performed brilliantly in killing and jailing tens of thousands of them -- their cause will be discredited by the stick of military defeat and the carrot of genuine political freedom. All this is not wishful thinking. The United States has an impressive record of military reconstruction and democratization following the defeat of our enemies -- vs. the abject chaos that followed when we failed to help fragile postwar societies. After World War II, Germany, Italy and Japan (American troops are still posted in all three) proved to be success stories. In contrast, an unstable post-WWI Weimar Germany soon led to something worse than Kaiser Wilhelm. After the Korean War, South Korea survived and evolved. South Vietnam, by contrast, ended up with a Stalinist government, and the world watched the unfolding tragedy of the boat people, reeducation camps and a Southeast Asian holocaust. Present-day Kabul has the most enlightened constitution in the Middle East. Post-Soviet Afghanistan -- after we ceased our involvement with the mujaheddin resistance -- was an Islamic nightmare. So we fool ourselves if we think that peace is the natural order of things, and that it follows organically from the cessation of hostilities. It does not. Leave Iraq and expect far worse tribal chaos and Islamic terrorism than in Mogadishu or Lebanon; finish the task and there is the real chance for something like present-day Turkey or the current calm of federated Kurdistan. Have we forgotten that Iraq before the invasion was not just another frightening Middle East autocracy like Syria or Libya, but a country in shambles -- not, as some will say, because of international sanctions, but thanks to one of the worst regimes on the planet, with a horrific record of genocide at home and regional aggression abroad? 
As the heart of the ancient caliphate, Iraq symbolized the worst aspects of pan-Arab nationalism and posed the most daunting obstacle for any change in the Middle East. Thus al Qaedists and ex-Baathists alike are desperate to drive us out. They grasp that should a democratic Iraq emerge, then the era of both Islamic theocracies and fascist autocracies elsewhere in the region may also be doomed. Our presence in Iraq is one of the most principled efforts in a sometimes checkered history of U.S. foreign policy. Yes, there is infighting among the Kurds, the Shiites and the Sunnis, but this is precisely because Saddam Hussein pitted the sects against each other for 30 years in order to subjugate them, while we are now trying to unite them so that they might govern themselves. The United States has elevated the formerly despised and exploited Shiites and Kurds to equal status with the Sunnis, their former rulers. And from our own history we know that such massive structural reform is always messy, dangerous -- and humane.

So, too, with other changes. It is hard to imagine that Syria would have withdrawn from Lebanon without American resolve in both Afghanistan and Iraq. Nor would either Pakistan's A.Q. Khan or Libya's Moammar Gaddafi have given up on plans to nuclearize the Middle East. Saddam's demise put pressure on Hosni Mubarak to entertain the possibility of democratic reform in Egypt. These upheavals are, in the short term, controversial and volatile developments whose ultimate success hinges only on continued American resolve in Iraq.

There is no other solution to either Islamic terrorism of the sort that hit us on Sept. 11, 2001, or the sort of state fascism that caused the first Gulf War, than the Bush administration's easily caricatured effort to work for a third democratic choice beyond either dictatorship or theocracy. We know that not because of pre-9/11 neocon pipedreams of "remaking the Middle East," but because for decades we tried almost everything else in vain -- from backing monarchs in the Gulf who pumped oil and dictators in Pakistan and Egypt who promised order, to "containing" murderous autocrats like Saddam and ignoring tyrannous theocrats like the Taliban.

Yes, the administration must account to the American people for the radically humanitarian sacrifices of American lives we are making on behalf of the freedom of Kurds and Shiites. It must remind us that we are engaging murderers of a sort not seen since the Waffen SS and the suicide killers off Okinawa. And it must tell us that victory is our only option and explain in detail how and why we are winning. The New York Times recently deplored the public's ignorance of American heroes in Iraq. In fact, there are thousands of them. But in their eagerness to view Iraq through the fogged lens of Vietnam, the media themselves are largely responsible for the public's shameful lack of interest.

A few days ago, while the networks were transfixed by Cindy Sheehan (or was it Aruba?), the United States military, in conjunction with Iraqi forces, was driving out jihadists from Mosul -- where the terrorists are being arrested and killed in droves. Lt. Col. Erik Kurilla of the 1st Battalion, 24th Infantry Regiment, who had worked for months to create an atmosphere of mutual understanding on the city's streets, was severely wounded as he led his men to clear out a terrorist hideaway. The jihadist who shot him -- who had recently been released from Abu Ghraib -- was not killed, but arrested and given medical care by U.S. surgeons.
Not long before he was wounded, Lt. Col. Kurilla had delivered a eulogy for three of his own fallen men. Posted on a military Web site, it showed that he, far better than most of us, knows why America is there: "You see -- there are 26 million people in Iraq whose freedom we are fighting for, against terrorists and insurgents that want a return to power and oppression, or worse, a state of fundamentalist tyranny. Some of whom we fight are international terrorists who hate the fact that in our way of life we can choose who will govern us, the method in which we worship, and the myriad other freedoms we have. We are fighting so that these fanatical terrorists do not enter the sacred ground of our country and we have to fight them in our own backyard." Amen.

Author's e-mail: author at victorhanson.com

Victor Davis Hanson is a military historian at Stanford University's Hoover Institution and the author of the forthcoming "A War Like No Other" (Random House).

From checker at panix.com Sat Sep 17 01:27:38 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 16 Sep 2005 21:27:38 -0400 (EDT)
Subject: [Paleopsych] Book World: Class Struggles
Message-ID:

Class Struggles
Book World, 5.9.4
http://www.washingtonpost.com/wp-dyn/content/article/2005/09/01/AR2005090101762_pf.html

Stories from the front lines of American schools reveal the world beneath policy debates.
By Eric Hoover

In 1983, a national panel of education experts released the report that launched a thousand headaches. The document, "A Nation at Risk: The Imperative for Educational Reform," warned that public schools were foundering. The nation's jaw dropped, and politicians promised improvements. Two decades later, they're still promising. But the bickering over reforms is ceaseless. Take the No Child Left Behind Act, the controversial federal law requiring schools to show annual progress on state tests taken by students in grades 3 through 8. Supporters say the get-tough program promotes high standards and accountability; critics say the plan is too rigid and out of step with reality. Who's right? And how do such big questions relate to struggles in school systems near you? Satisfying answers rarely come from politicians and wonks, who dwell in a fog of slogans and statistics. But welcome are those authors who find the pulse of human drama in the education trenches. The experiences of students, parents, teachers and administrators in American schools make compelling stories, full of heroes, villains and conflicts.

A School House Divided

A girl named Pineapple poses the question that haunts Jonathan Kozol's The Shame of the Nation: The Restoration of Apartheid Schooling in America (Crown, $25). "What's it like," the black sixth grader asks the white author, "over there where you live?" Like other students in this sweeping report, Pineapple attends a public school where minorities make up nearly 100 percent of the enrollment. Her curiosity about whites, who attend schools in an unknowable "over there," speaks to the racial divide that Brown v. Board of Education attempted to bridge a half-century ago. Kozol, a best-selling education writer, argues convincingly that de facto segregation endures in urban school systems from Seattle to the South Bronx. His firsthand reporting reveals districts in which schools are separate and unequal. He relays insights from poor students who learn in buildings where ceilings leak, rats scurry, and toilets don't flush.
In these ramshackle places, which often lack enough books, desks and qualified teachers, the drumbeat of school-accountability measures sounds hollow. In an effective series of anecdotes, Kozol asserts that standards-based reforms turn poor schools -- with the fewest resources to teach the skills those tests measure -- into mindless educational factories. He warns that high-stakes tests threaten to turn low-income students into "examination soldiers" who do not so much acquire knowledge as regurgitate facts. He provides statistics that suggest the much-touted reforms have failed to close the so-called achievement gap between white and minority students. And he cites data showing the gaps between per-pupil spending in predominantly white urban school districts and districts that serve mostly minority students. In a chapter called "Deadly Lies," the author predicts that until students from different economic backgrounds attend schools of equal quality and resources, No Child Left Behind will not shrink but expand "the vast divide between two separate worlds of future cognitive activity, political sagacity, social health and economic status, and the capability of children of minorities to thrive." A call for activism, The Shame of the Nation firmly grounds school-reform issues in the thorny context of race and concludes that the nation has failed to deliver the promise of Brown. Power to the Parents Bribes, lies and scandals are part of education's ugly underbelly, Joe Williams reveals in Cheating Our Kids: How Politics and Greed Ruin Education (St. Martin's, $24.95, forthcoming in October). Williams, a veteran education reporter, makes full use of his journalistic skills in this blistering analysis of public-school politics. (Friends of teachers unions, take cover.) Vivid anecdotes about administrators skimming from school budgets and teachers-of-the-year getting fired because of their expensive seniority support his case that the goals of education bureaucrats often conflict with the interests of students. "As a society," Williams writes, "we are dismissing the needs of individual students to protect a romantic notion of public education whose very core is consumed with meeting the needs of adults first and foremost." Occasionally, these valid structural critiques of "the system" lapse into broad-brush criticisms of the "education cartel." But his frustrations, grounded in accounts of bureaucracy run amok, echo those of many parents. Even so, Cheating Our Kids hits an inspirational note with its instructive explanation of how parents, business leaders and activists from both ends of the political spectrum helped bring school choice to Milwaukee in the 1990s, allowing low-income families to send their children to private schools at the public's expense. The tale proves that dedicated citizens who demand a better education for their children can move the mountains known as politicians. Unconventional Wisdom Everybody knows that reducing class sizes in public schools improves the quality of education. But where did they get that idea? Not from Jay P. Greene's Education Myths: What Special Interest Groups Want You to Believe about Our Schools -- And Why It Isn't So (Rowman & Littlefield, $24.95). Greene, a senior fellow at the Manhattan Institute, a conservative think tank, challenges 18 popular assumptions in this accessible, data-driven polemic. The attacks come fast and furious against popular beliefs about class sizes, graduation rates and underperforming schools. 
Greene argues that public schools receive adequate funding, countering Kozol's "anecdotal reasoning" that there are spending gaps between urban and suburban schools. He also argues persuasively that voucher programs do not harm public schools, as some critics of school-choice contend. His arguments stick close to the numbers compiled from numerous education studies, and, generally, Greene makes strong cases that would keep even education-policy gurus on their toes. Still, all the numbers in the world won't end the debate over what's true. Just ask your favorite teacher what he or she thinks about the elaborate statistical analyses behind the following statement from Education Myths : "It is simply not the case that teachers are less richly rewarded for their work than those in similar professions." Daydream Believer In Crash Course: Imagining a Better Future for Public Education (Riverhead, $24.95), Chris Whittle, the maverick businessman who became an education insider, describes his vision for American schools in painstaking detail. Whittle is the former owner of Esquire magazine and the founder of Edison Schools, a company that manages 157 public schools in 19 states and educates 70,000 students. Naturally, the author promotes splicing the private-sector's DNA (think free-market competition) into the traditional education system. Whittle's blueprint calls for radical new curricula, massive educational research-and-development efforts, and better training and pay for teachers and principals. He imagines students studying independently, freed from the constraints of regimented class schedules. "We are still operating in a type of Charles Dickens mindset," Whittle writes, "believing that these young, half-civilized things called children must be literally whipped into shape, if not by a stick then by a never-ending schedule." The detailed business strategies in Crash Course may cause drowsiness in some casual readers, and the 37-page leap into the year 2030 may puzzle others. But the scale of Whittle's imagination and his disarming optimism make this a refreshing companion to gloomier education tomes. Pragmatism 101 My Freshman Year: What a Professor Learned by Becoming a Student (Cornell Univ., $24) is the true tale of an anthropology professor who became a fly on the dorm-room wall. Rebekah Nathan (a pseudonym for Cathy Small of Northern Arizona University, recently unmasked by the New York Sun) enrolled as an undergraduate student at the university where she teaches, moved in with her subjects and took classes for two semesters. Her goal was to understand the mysteries of modern students, including why they snooze in classes and skip assigned readings. A few distracting scholarly digressions aside, Nathan engagingly observes that many students care little about intellectual matters and see their university as a career greenhouse. No revelations there. But that campus life is no "Animal House" may come as a surprise. Juggling classes, assignments and jobs demands survival skills, the professor discovers. The key to sanity: "controlling college by shaping schedules, taming professors, and limiting workload." My Freshman Year provides some keen insights into the causes of students' fierce pragmatism. For one thing, debt often drives their career aspirations and, in turn, their choice of majors and extracurricular pursuits. Colleges, Nathan argues, must adapt to those 21st-century realities: "Educational policy . . . 
cannot afford to rely on inaccurate or idealized versions of what students are." But understanding students is not the same as sympathizing with them. Nathan's vow to lighten students' loads by assigning them less reading sounds like blasphemy to this bookworm. My Freshman Year provides a long list of what ails college students, but a short list of remedies.

Eric Hoover is a senior editor at the Chronicle of Higher Education.

From checker at panix.com Sat Sep 17 01:27:44 2005
From: checker at panix.com (Premise Checker)
Date: Fri, 16 Sep 2005 21:27:44 -0400 (EDT)
Subject: [Paleopsych] CHE: Liberalism: the Fuel of Empires?
Message-ID:

Liberalism: the Fuel of Empires?
The Chronicle of Higher Education, 5.9.2
http://chronicle.com/weekly/v52/i02/02a01901.htm

2 political scientists at Princeton help revive debate on how European liberals of centuries past viewed colonialism and imperialism
By DAVID GLENN

It is one of the most troubling puzzles in the history of political thought: Why were some of Europe's early liberal theorists -- the people who imagined and promoted tolerance, universal suffrage, the rule of law, and minimal government -- also enthusiastic supporters of European colonization, conquest, and empire in Asia and Africa? John Stuart Mill, author of On Liberty and The Subjection of Women, spent 25 years working for the British East India Company in the mid-19th century. He believed that India and other "barbarous" nations "have not got beyond the period during which it is likely to be to their benefit that they should be conquered and held in subjection by foreigners." Alexis de Tocqueville, among the century's most sophisticated proponents of democracy, argued during the 1840s that it was urgently necessary for France to subjugate and colonize Algeria.

Through much of the 20th century, political theorists and intellectual historians largely ignored that element of classical liberals' thought, focusing instead on their abstract arguments for liberty or their campaigns for domestic reform. (Tocqueville's voluminous writing on Algeria was virtually forgotten in the English-speaking world.) And when these liberals' pro-imperialist arguments were acknowledged, they were sometimes dismissed as simple hypocrisy. More recently, some left-wing scholars have argued that -- far from hypocrisy -- the liberals' imperialist adventures reveal something essential about liberalism itself. The Enlightenment's calls for universal human liberty, according to this argument, have always contained a Eurocentric and potentially racist understanding of what human societies should look like.

Such discussions are no longer confined to the margins of postcolonial studies. With the end of the cold war's international order -- not to mention the U.S. invasion of Iraq -- the question of empire has moved close to the heart of legal and political theorists' preoccupations. The past 15 years have seen a flourishing of sophisticated explorations of liberalism, conquest, and international justice.

No Simple Formulas

Two of the most visible exponents of this new wave in empire studies are Jennifer Pitts and Sankar Muthu, who met as graduate students at Harvard University a decade ago and who are now assistant professors of politics at Princeton University. Along the way, they got married. In Enlightenment Against Empire (Princeton University Press, 2003), Mr.
Muthu examined the brief period in the late 18th century when several prominent liberal theorists -- notably Denis Diderot and Johann Gottfried von Herder -- were skeptical toward, and in some cases actively campaigned against, European colonialism. Ms. Pitts's new book, A Turn to Empire: The Rise of Imperial Liberalism in Britain and France (Princeton, 2005), explores the very different mood of the mid-19th century, when most leading liberals, Mill and Tocqueville among them, sat comfortably on the imperialist bandwagon. As those divergent projects suggest, Ms. Pitts and Mr. Muthu are not offering simple formulas for decoding intellectual history. Liberal theory, they argue, contains the seeds of both pro-imperialist and anti-imperialist arguments. "There's no necessary connection between liberalism and empire," Mr. Muthu says. "Whether a liberal thinker had a positive or a negative conception of empire depends on a whole range of other factors." The two young scholars propose that a close reading of liberalism's encounter with empire can help us make sense of certain sticky problems that have always confronted liberal political philosophy. "There isn't a strong theoretical source within liberalism for making claims about who should be included and who should be excluded from a given political community," says Ms. Pitts. "And that creates all kinds of problems for liberal theory in the context of debates about migration and other foreign-policy questions." Political philosophers are now joining forces with intellectual historians, legal scholars, and other social scientists in new attempts to wrestle with those questions. The work of Ms. Pitts and Mr. Muthu is near the center of those discussions, says Iris Marion Young, a professor of political science at the University of Chicago, who organized a 2004 conference on empire at which the two Princeton scholars spoke. "In history and literary studies, work on colonialism and postcolonialism has been going on for 20 years," Ms. Young says. "But in philosophy and political theory, this kind of investigation, asking the questions that Pitts and Muthu ask, has only begun to happen recently. There's a general feeling that it's about time that we started to look at these things." Paternal Liberals Are moral arrogance and contempt for cultural "backwardness" built into liberalism's DNA? In a celebrated 1994 essay in the Times Literary Supplement, in London, the political theorist Bhikhu Parekh argued that the tendency toward imperialism runs very deep in the liberal tradition. Parekh argued that not only Mill, but also contemporary liberals like Joseph Raz, John Rawls, and Ronald Dworkin, have promoted a political vision that is "missionary, ethnocentric, and narrow, dismissing nonliberal ways of life and thought as primitive and in need of the liberal civilizing mission." The following year, Uday Singh Mehta, a professor of political science at Amherst College, published Liberalism and Empire: A Study in Nineteenth-Century British Liberal Thought (University of Chicago Press). He argued that liberal thinkers have habitually (but not universally) fallen into the trap of treating unfamiliar cultures as if they were simply several steps behind the West in an inevitable march of economic and cultural progress. Mill, for example, was fond of using metaphors of childhood when discussing India. 
In one essay, he wrote of "the successive states of our society through which the advanced guard of our species has passed, and the filiation of these states on one another -- how each grew out of the preceding and was the parent of the following state." Mill's universalism, Ms. Pitts says, "takes a particular idea of what full human flourishing is, which is very much based on European culture, and projects it as the endpoint for all societies." Mill's 18th-century liberal predecessors, by contrast, offered a subtler and more nuanced account of societies' cultural and economic differences. "[Adam] Smith and his contemporaries would look at unusual or apparently disturbing practices and ask, Why might that exist in a particular society?" Ms. Pitts says. "There was much more interpretive generosity when dealing with alien cultures." The shift from the relative generosity of Smith's era to the more arrogant posture of Mill's generation is a question that animates Ms. Pitts's book. One answer is that the liberals of the earlier period generally felt more beleaguered and were less emotionally invested in their societies' institutions. In the French case, especially, Mr. Muthu points out that pre-revolutionary liberals "were deeply critical of their own societies, and of what might be described as European civilization. The last thing that they would have envisioned as just would be a wholesale effort to spread those institutions abroad. A thinker like Diderot thought of his own society as being deeply morally corrupt, monarchical, ruled by a hereditary nobility and a hypocritical church. Why would you want to spread that around the world?" Matters of National Pride By the middle of the 19th century, however, liberal reforms and popular upheavals had made both France and especially Britain more democratic and less corrupt. Those shifts lifted what Ms. Pitts calls "the civilizational self-confidence" of that era's liberals -- which in turn made it easier for them to endorse efforts to impose European governance on the rest of the world. Tocqueville's case was more complicated than that of Mill, who maintained a sunny optimism about imperialism's progress. Even when he harshly criticized particular British practices in colonial India, he argued that they were essentially innocent mistakes. Tocqueville, on the other hand, was highly attentive to the violent repression and corruption that accompanied French settlement in Algeria. He continued to support it until nearly the end of his life, however, partly because he believed that, if not France, some other European country would colonize northern Africa. "Tocqueville had deep doubts that empires were civilizing projects, and he acknowledged their inevitable brutality," Ms. Pitts says, "and yet he still forcefully advocated for Algerian colonization. That's somewhat chilling." Neither Mill nor Tocqueville endorsed the theories of biological racism that were commonplace in the 19th century. In that sense, their thought remained distinctly liberal. In 1850, Ms. Pitts recounts in her book, Mill condemned Thomas Carlyle's "damnable" arguments for black inferiority, especially the notion "that one kind of human beings are born servants to another kind." She adds, however, that the racist ideas of the 19th century probably did affect the air that Mill and Tocqueville breathed. Adam Smith had written during a period of widespread "religious ideas about human unity," Ms. 
Pitts says, but "the prevalence of the biological ideas in this later period meant that differences among human groups were much more emphasized than human uniformity. "Difference was so heavily emphasized that even if one opposed the biological arguments, thinkers like Mill would talk about the differences among human groups as very deep-seated -- not biologically, but culturally." Going Global Ms. Pitts and Mr. Muthu are expanding their studies of liberalism's ambiguous dance with empire. She is at work on a book on early debates about the foundations of international law, and he is studying 18th-century anxieties about global commerce. "Around 1800," Mr. Muthu says, "antislavery activists began to point out that global commerce ties consumers and producers together, and that consumers could be complicit in exploitation. That's the sort of argument that interests me." He is especially curious to assess how and why a few intellectuals extended that consumer-based argument into a general dread of global commerce, while other reformers saw globalization as potentially positive. Neither Ms. Pitts nor Mr. Muthu expected to work this territory when they entered graduate school at Harvard in the early 1990s. In an early course, Ms. Pitts was intrigued by the fact that Edmund Burke, a relatively conservative thinker who was skeptical of democracy, was a vocal and passionate critic of Britain's imperial designs on India. She was later heavily influenced by Richard Tuck, a British scholar who joined Harvard's faculty in 1997. (Mr. Tuck was trained at the University of Cambridge, which is a hotbed of historians of political thought.) Mr. Muthu, for his part, arrived at Harvard knowing that he wanted to study Enlightenment thinkers' implicit beliefs about anthropology and human differences. "But the more I worked on 18th-century political thinkers' writings on those questions," he says, "the more obvious it became that many of the arguments were crafted for explicitly political ends -- that is, either to support or to oppose imperial and commercial projects abroad." He realized that he needed to look at the question of empire. The two scholars' work has also been shaped, of course, by their friendship. They became a couple less than two months after meeting each other. "We've been students of the history of political thought together for essentially the entire time that we've been doing work in the field," Mr. Muthu says. "People often ask, What's it like to be a couple, and both doing work in this kind of area? And, in a way, it's difficult for me to answer that question. This is the way that it's always been." "I'm sure we influence each other's views on political theory and political thought in all sorts of ways that I would not be able to reconstruct, that I'm not consciously aware of," he adds. "But we also just naturally tend toward the same sorts of questions," Ms. Pitts says. "I often wonder whether that's the case," Mr. Muthu says, glancing at Ms. Pitts across their living-room couch, "or whether that's the result of the fact that we end up influencing each other. It's very difficult to know what's cause and what's effect." The two have never done any collaborative writing together, but Ms. Pitts says that they would like to do so someday. "It's extremely helpful, and I very much take it for granted," Mr. Muthu says, "that when I'm working on some set of issues, that I can turn to Jennifer and ask her opinion of something." 
"Or just when I feel like I've run into a brick wall, and I need some fresh infusion of ideas, to hand the draft over," Ms. Pitts says. Both of them feel lucky to have wound up in the same department. (Alongside their book projects, they are shifting into a slightly slower gear. Their first child was born in June, and they will each take a semester of leave this year.) They hope that their historical work can -- without losing sight of nuance -- shed light on contemporary battles. "When I've taught these texts to undergraduates," says Mr. Muthu, "they're sometimes shocked at what they see as the deep affinities between contemporary arguments over Iraq, for instance, and the kinds of arguments made about conquest and intervention 200 years ago. One of the things that I try to do in these courses is to show that there's been a long and intriguing history to these kinds of dilemmas that face citizens and policy makers. These kinds of struggles and deep philosophical differences can be found in past centuries as well. "These are not recent developments." From checker at panix.com Sat Sep 17 01:27:50 2005 From: checker at panix.com (Premise Checker) Date: Fri, 16 Sep 2005 21:27:50 -0400 (EDT) Subject: [Paleopsych] Science Daily: Fitness trumps cholesterol as key to heart health Message-ID: Fitness trumps cholesterol as key to heart health http://qnc.queensu.ca/story_loader.php?id=43184cb927d49 5.9.2 (Kingston, ON) ? Being physically fit can dramatically reduce men?s deaths from heart disease ? even when their cholesterol rates are high, says Queen?s researcher Peter Katzmarzyk. His new study to be published Tues. Sept. 6 by Circulation: Journal of the American Heart Association shows that, regardless of their cholesterol level, men can cut by half their risk of dying from cardiovascular disease if they are physically fit. Other Queen?s members of the team, from the School of Physical and Health Education, are Chris Ardern and Ian Janssen. Researchers Timothy Church and Steven Blair from the Cooper Institute Centres for Integrated Health Research in Dallas, Texas, are also on the team. The primary aim of the study was to analyze the effectiveness of last year?s modifications to the guidelines from the U.S. National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) for lowering bad (LDL) cholesterol to predict death from cardiovascular diseases. ?We wanted to find out if the new guidelines could identify men at risk for cardiovascular disease,? says Dr. Katzmarzyk. ?We confirmed that the guidelines do accurately identify men at risk not only of disease, but also at risk of cardiovascular death. We also discovered that fitness is important across the board ? at every level of cholesterol.? Results also suggest that within a given risk category, physical fitness is associated with a greater than 50-per-cent lower risk of mortality. In this study, physical fitness was four to five, 30-minute segments of activity per week: equivalent to walking 130 to 138 minutes per week. Researchers analyzed the cardiovascular risk factors and cardio-respiratory fitness of 19,125 men ages 20 to 79, who were treated at a preventive medicine clinic from 1979 -1995, prior to the revised treatment guidelines. Using the new ATP III classifications: ? 58 per cent of the men would have met the criteria for being ?at or below LDL (bad) cholesterol goal?; ? 18 per cent would have met the criteria for ?therapeutic lifestyle change? ? 
meaning diet, physical activity and weight management could lower LDL; and ? 24 per cent would have met the criteria for ?drug consideration? for lowering LDL. There were 179 deaths from cardiovascular disease over more than 10 years of follow-up. Overall, compared to men who met the acceptable LDL level under the revised guidelines: ? Men who met the criteria for therapeutic lifestyle intervention had twice the risk of cardiovascular disease death; and ? Men eligible for aggressive cholesterol-lowering therapy had almost seven-times the risk. ?Lowering the threshold for consideration of cholesterol-lowering drug therapy for those at high risk will ultimately save lives and also have important implications for the healthcare system,? says Dr. Katzmarzyk . The research was partly funded by the U.S. National Institutes of Health. Contacts: Nancy Dorrance, Queen?s News & Media Services, 613.533.2869 Lorinda Peterson, Queen?s News & Media Services, 613.533.3234 Attention broadcasters: Queen?s has facilities to provide broadcast quality audio and video feeds. For television interviews, we can provide a live, real-time double ender from Kingston fibre optic cable. Please call for details. From checker at panix.com Sat Sep 17 01:42:10 2005 From: checker at panix.com (Premise Checker) Date: Fri, 16 Sep 2005 21:42:10 -0400 (EDT) Subject: [Paleopsych] Los Angeles Times: Men, women and Darwin Message-ID: Men, women and Darwin http://www.latimes.com/features/health/la-he-envtpsych29aug29,0,6625346,print.story?coll=la-home-health RULES OF ATTRACTION Men, women and Darwin Can evolutionary psychology take the mystery out of how we meet and mate? By Julia M. Klein Special to The Times August 29, 2005 THREE years ago, Robert Kurzban spotted an advertisement for a service called HurryDate, offering an evening of three-minute meetings with 25 potential dates. Kurzban was intrigued -- but not because he was looking for romance. As an evolutionary psychologist at the University of Pennsylvania, he thought speed dating could afford him a rare chance to study how people behave in real dating situations. With the agreement of the company, Kurzban and a colleague surveyed the HurryDaters about a range of topics including religious background and their desire for children. Their fundamental questions: Did participants select the people most like themselves? Or did most of them prize similar traits -- such as appearance or high income -- and try to get the best deal they could in the mating market? What the researchers discovered was that men and women chose their dates on the basis of "generally agreed upon mate values," the mating market hypothesis. Another finding: Both sexes relied mainly on physical attractiveness, largely disregarding factors such as income and social status. "HurryDate participants are given three minutes in which to make their judgments," the psychologists wrote in a paper published in the May issue of the science journal Evolution and Human Behavior, "but they mostly could be made in three seconds." The HurryDate research is one example of the everyday applications of evolutionary psychology, an interdisciplinary field that is influential and controversial. Other recent studies of human mating have explored issues such as the male preference for dating subordinates, why women have extramarital affairs and what trade-offs both sexes are willing to make in choosing partners. 
Evolutionary psychology sees the mind as a set of evolved psychological mechanisms, or adaptations, that have promoted survival and reproduction. One branch of evolutionary psychology focuses on the distinct mating preferences and strategies of men and women. For example, because our male ancestors were easily able to sire numerous children at little cost to their fitness, the theory says, they were inclined to short-term mating with multiple partners. In choosing mates, they gravitated toward youth and physical attractiveness -- markers of fertility and health. By contrast, females, for whom conception meant pregnancy and the need to care for a child, were more selective, searching for long-term commitments from males with the resources and willingness to invest in them and their offspring. * Theory's evolution Support for this theory came from a landmark study by psychologist David M. Buss and colleagues in the 1980s, involving 37 cultures and 10,047 individuals. Buss, now professor of psychology at the University of Texas in Austin, found marked similarities across cultures, including a female preference for men with resources and status that persisted even when women had considerable resources of their own. Overall, women valued financial resources in a mate twice as much as men did. "Up until that time, everyone believed that these things were very tethered to individual cultures and that cultures were infinitely variable," said Buss, whose more recent books have described the utility of jealousy and the universality of homicidal impulses. Buss' survey continues to influence research on human mating. But some scientists and social scientists remain skeptical, saying evolutionary psychologists tend to neglect the role of learning and culture and to overemphasize genetics. Melvin Ember, an anthropologist and president of the Human Relations Area Files, a Yale-affiliated research organization, says that "focusing on universals" fails to explain either individual or cultural variation. Jaak Panksepp, a neuroscientist at Falk Center for Molecular Therapeutics at Northwestern University, has chided evolutionary psychologists for ignoring recent neurological findings about human and mammalian brains. Despite the objections, the field of evolutionary psychology is growing. In recent years, Darwinian feminists and others have developed a more nuanced view of the complexities of female behavior. Women, it seems, aren't quite as monogamous as their partners might wish. They too sometimes pursue short-term mating strategies, though not everyone agrees on why. Randy Thornhill, professor of biology at the University of New Mexico, said he has discovered that women, in an unconscious bid for better genes, will choose "extra-pair copulation" -- that is, have affairs -- with men who are more attractive (though perhaps less likely to commit) than their long-term mates. Other research indicates that women make different choices at different points in their menstrual cycle, opting for better-looking, more symmetrical and more masculine-appearing men when they are at their most fertile. In short-term relationships, physical attractiveness is a priority for women, just as it is for men, according to a study by psychologists Norman P. Li and Douglas T. Kenrick that is slated to appear sometime next year in the journal of Personality and Social Psychology. 
Trying to draw a distinction between "luxuries" and "necessities," the researchers gave men and women varied "mating budgets" and, in a series of tests, asked them to construct their ideal mate, using such qualities as looks, social status, creativity, and kindness. For one-night stands and affair partners, both women and men sought physical attractiveness above all else. For long-term mates, the expected sex differences emerged: Men kept preferring attractiveness, and women opted for social status, as well as warmth and trustworthiness. But after their minimum requirements for these necessities were met, both sexes chose well-rounded partners over those with the very best looks or the highest status. In other words, "Men are not complete pigs, and women are not complete gold-diggers," said Li, assistant professor of psychology at the University of Texas at Austin. This makes good evolutionary sense, considering that to father a child, Li said, "you don't need the most beautiful woman in the world." At the same time, women "don't need the richest man in the world to guarantee reproductive success. You just need somebody who's not a bum, basically." * Trading up In practice, Li said, people's budgets in the mating market are determined by what they themselves have to offer. "So a guy who is extremely high status or very wealthy can trade up for a more physically attractive partner," he said. And "women trying to make themselves more physically attractive so they can get a higher quality mate are not completely misguided." It is also true, Li said, that very smart and successful women will have a harder time finding partners. "It seems that men want somebody intelligent enough so that they can recognize the man's brilliance," he said, "but not necessarily enough to challenge them -- or so smart that they find someone else more interesting." John Marshall Townsend, professor of anthropology at the Maxwell School at Syracuse University, says that women's status requirements often complicate their search for a mate. Townsend showed a group of female medical students, law students and professionals pictures of men dressed in different ways -- wearing, for instance, a fast-food uniform or a designer suit and Rolex watch. He also gave participants descriptions of each man's social status. The results were decisive. "Here's Mr. Hottie, but if he's in the wrong costume, and given the wrong status description, then she won't go out with him, much less go to bed with him or marry him," said Townsend. "You could put Cary Grant in a Burger King outfit, and he looks dorky." If women do occasionally date "down" in terms of social status, Townsend said, "that would be out of desperation." By contrast, he says, men are likely to date any physically attractive woman. When it comes to marriage, "guys are not completely insensitive to social class," but, he said, they're "not looking for socioeconomic gain." Another recent study, by Stephanie L. Brown of the University of Michigan's Institute for Social Research and Brian P. Lewis of Syracuse University in New York, suggested that men prefer long-term relationships with subordinates rather than co-workers or supervisors. By contrast, women showed no significant preference for socially dominant men. The reason for this result, Lewis hypothesized, is that men think they would "have more control over the behavior" of female subordinates, including being able to ensure female monogamy, and thus the paternity of any children. 
"Female infidelity is a severe reproductive threat to males only when investment is high," as it is in long-term relationships, the authors write. Some evolutionary psychologists think gender differences can be overstated. In "The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature," Geoffrey Miller suggests that the human mind evolved, much like the elaborate peacock's tail, primarily as a way of attracting partners of both sexes. His book argues that traits such as musical and artistic ability played no clear role in helping human beings survive, but instead enhanced their reproductive success. For Miller, assistant professor of psychology at the University of New Mexico, intellect and creativity are, well, sexy. "Guys are not picky about short-term mating, which is why we don't read about IQ scores in Penthouse magazine," he said. But when it comes to long-term relationships, he said, "There's good evidence that guys are as picky as women about the mental traits of partners." In the context of speed dating, where quick impressions count, HurryDate president Adele Testani says she was not surprised to learn that both sexes were most choosy about physical attractiveness. Although participants invariably ask each other about their careers, Testani said, "it really is all about that face-to-face chemistry and connection and attraction." She added: "You're certainly not going to find out if you're going to marry the person" in a few minutes. Kurzban said the "rich visual information" supplied by HurryDate encounters may help men and women get over the first hurdle of appearance, before other factors, such as social status, become relevant. In the end, said Li, men and women tend to strive for the best partner their own attributes can buy. "Falling in love," he said, "is basically a process where both sides feel they're getting a good deal." From checker at panix.com Sat Sep 17 01:42:18 2005 From: checker at panix.com (Premise Checker) Date: Fri, 16 Sep 2005 21:42:18 -0400 (EDT) Subject: [Paleopsych] NYT Op-Ed: Parental Supervision Required Message-ID: Parental Supervision Required New York Times Op-Ed, 5.9.7 http://www.nytimes.com/2005/09/07/opinion/07sittenfeld.html By CURTIS SITTENFELD Philadelphia IN 1989, when I was 13 and living in Cincinnati, I waged a one-girl campaign to persuade my mother and father to let me attend Groton School in Massachusetts. Despite my parents' ambivalence about boarding school, they ultimately acquiesced, I went, and I received a very good education - not all of it academic. More than a decade later, I couldn't resist setting my first novel at a boarding school. Now at readings, I'm asked if I'd send my own child away to school, and I say no. My naked hypocrisy isn't the only reason I feel apologetic in these moments. It's also because the person who asks the question usually is middle-aged and gives off a certain preppy whiff - perhaps he's wearing seersucker pants, or maybe her voice has that assured, WASP-y thickness - and it seems highly likely that my questioner already is or soon will be a boarding-school parent. But it turns out I'm not alone: an increasing number of parents are deciding against boarding school. Enrollment at private day schools has grown 15 percent in the past decade, while enrollment at boarding schools has grown only 2.7 percent. 
Overall boarding school enrollment dropped from about 42,000 in the late 1960's to 39,000 in the last school year - even though, according to the Census Bureau, the population of 14- to 17-year-olds was more than 1.5 million higher in 2004 than in 1968. Reporting on this, The Wall Street Journal attributed the shift away from boarding school to a trend of greater parental involvement, which translates into parents reluctant to be apart from their children. This is, evidently, the same reason some parents are now accompanying their teenagers to boarding school; these mothers and fathers literally move, sometimes cross-country, to be close to the campuses of the boarding schools their children attend. While the new breed of super-involved parent strikes me as slightly creepy (having worked as a private-school teacher, I've also seen parents whose idea of "involvement" is doing their children's homework for them), I don't think the conclusion they've come to is the wrong one. Among the reasons I wouldn't send my own child to boarding school is that being around one's adolescent peers 24 hours a day doesn't seem particularly healthy. It makes the things that already loom large in high school - grades, clothes, sports, heartache, acne - loom even larger. Going home at night provides physical distance from the relentlessness of all teenagers, all the time, and, ideally, parents provide perspective. Although they might be dorky, parents know an important lesson about everything from serious hazing to the embarrassment of dropping a lunch tray in a crowded cafeteria: This, too, shall pass. Certainly teachers provide an adult perspective at boarding schools, but it's a very unusual teacher who influences an adolescent as much as the average parent does. Furthermore, while many boarding school teachers knock themselves out on students' behalf not just by teaching but also by coaching and running dorms, they're undermined by lesser teachers who, rather than guiding students out of teenage pettiness, seem themselves to get sucked down into it. There is on every boarding school campus some variation on the doofus teacher who, if he's not actually buying beer to ingratiate himself with the popular senior guys, sure seems to wish he could. The self-containment of boarding schools can create terrariums of privilege in which students develop a skewed sense of money and have a hard time remembering that, in fact, it is not normal to go skiing in Switzerland just because it's March, or to receive an S.U.V. in celebration of one's 16th birthday. At, for example, Choate Rosemary Hall - one of many boarding schools starting classes this or next week - room, board and tuition for 2005-2006 is $35,360. If, as Choate's Web site explains, 27 percent of students receive financial aid, that means the other 73 percent come from families that are, by just about any standards except perhaps their own, very rich. Even when these schools hold chapel services espousing humility and service to others, it's the campus facilities - the gleaming multimillion-dollar gymnasium, say - that can send a louder message. It's hard not to wonder: in a world of horrifying inequities, at what point do these lavishly maintained campuses go from enriching and bucolic to just obscene? Can a student living on such a campus be blamed if, logically working backward, she starts to think her access to such bounty must exist because she deserves it? 
It is this line of thought, I suspect, that gives rise to the noxious attitude of entitlement and snobbishness that is simultaneously less common than pop-culture depictions of boarding school would have you believe and also not that hard to find. FOR me, the question isn't why parents wouldn't send a child to boarding school as much as why they would. Unless there are either severe problems at home or flat-out terrible local schools, I don't see the point. Even in the case of terrible schools, I'm not convinced that parents can't significantly augment their children's education. Among the advantages of boarding school are opportunities for independence, academic stimulation, small classes, peer companionship and the aforementioned campus beauty - but every single one of these opportunities is available at dozens of liberal arts colleges, so why not just wait a few years until the student will better appreciate such gifts and save $140,000? Besides, then there's no risk of college feeling anticlimactic, as it can for boarding school graduates. Of course, none of this is what I thought when I was 13. I thought then, and I still think, that boarding school seemed interesting. It's a place where bright, talented adolescents rub up against each other figuratively and literally. Our culture is fascinated by the rich and the young, and elite boarding schools are a place where the two intersect. That doesn't mean you'll automatically be better off if you attend one, but it does make it unsurprising that they've retained a hold in the popular imagination. It's not that I see boarding schools as evil. I just don't see them as necessary, and despite their often self-congratulatory rhetoric, I don't see them as noble - certainly no more so than public schools. At the same time, I recognize the hubris in declaring how I'll raise my as-yet-nonexistent children, and probably nothing makes it likelier that I will send them to boarding school than publicly vowing I won't. I'm not planning on it, but life is hard to predict and perhaps at a parents' weekend 20 years from now, standing on the sidelines of a verdant lacrosse field, I'll be the one wearing seersucker. Curtis Sittenfeld is the author of the novel "Prep." From checker at panix.com Sat Sep 17 01:42:31 2005 From: checker at panix.com (Premise Checker) Date: Fri, 16 Sep 2005 21:42:31 -0400 (EDT) Subject: [Paleopsych] NYT: From the Air, Scientists Comb a Ruined Coastline for Clues and Lessons Message-ID: From the Air, Scientists Comb a Ruined Coastline for Clues and Lessons New York Times, 5.9.2 By CORNELIA DEAN
The six-seater Cessna was flying low and slow along the Gulf Coast. Two coastal researchers, straining against their seatbelts, were leaning out a gaping hole where the port side door had been removed, photographing overwashed sand, piles of seagrass and new inlets, the marks left on the region's barrier islands by Hurricane Katrina.
But then the plane banked sharply, and the researchers aimed their cameras landward, toward Waveland, Miss., or what was left of it.
"My God. My God," said Robert S. Young as he caught his first sight of what would be miles of desolate landscape where houses, stores, churches, hotels and other structures used to be.
Neighborhood after neighborhood appeared to have been raked clear, with nothing but concrete slabs to say where buildings had stood. Far into the woods, boards and beams and other remnants of these buildings lay in piles, a wrack line of debris left where the storm had dropped it.
In Gulfport and Biloxi, wrecked casino barges lay across city streets or leaned against high-rise buildings, some tilted on their sides, all with windows blown out. Where railroad and highway bridges once ran, there were only platoons of pylons marching into the water. Here and there a truck or S.U.V. moved slowly down sand-choked roads, but there were few other signs of life.
"I have been on the scene of every major hurricane" since Hurricane Hugo, said Dr. Young, a geologist at Western Carolina University who has been studying coastal storms since 1989, when he was a graduate student assigned to measure that storm's impact. "This is the worst I have ever seen."
Since Hurricane Katrina struck, much of the nation's attention has been on New Orleans, where overtopped and breached levees stranded tens of thousands of residents and left much of the city under water.
But the scientists on the plane, Dr. Young and Andrew S. Coburn, associate director of the Duke University Program for the Study of Developed Shorelines, are looking instead at the Gulf Coast, where the storm came ashore.
They and other researchers are mapping changes in beach formations, patterns of damage, debris fields, wind and water flow, and inland and offshore topography for clues about why this storm was so destructive, and so deadly.
The work is part of scientists' continuing efforts to engage coastal officials and policy makers on how to make the coast safer -- and on whether some parts of the coast can ever be made safe enough.
"We have never learned that lesson," said Abby Sallenger, a scientist with the United States Geological Survey who has studied storm effects on the East and Gulf coasts for years. Dr. Sallenger does not advocate wholesale retreat from the coast, he said, but "the coastal research community should come together and come to some conclusions about where it is safer to go."
"What's happened before is you come back and you not only rebuild, you rebuild bigger," he continued, but "there are some places where you should think twice about putting up a pup tent."
Scientists who want this point to be heard, and who think Hurricane Katrina may be an ideal opportunity to make it, are eager to obtain as much good data as possible about the storm and its effects.
For example, Shea Penland, director of the Pontchartrain Institute for Environmental Studies at the University of New Orleans, said analysis of debris fields would be illuminating. "Debris tells you how high the water was, how fast it was moving, where it was going, where were the soft spots," he said.
Researchers from Florida International University and the University of Florida are studying data from 10 portable towers, fortified to survive winds of more than 200 miles per hour and equipped with instruments to measure wind speed and direction at ground level and at 15 and then 30 feet above the ground.
Much real-time data transmission was halted during the storm when cellphone systems failed, said Stephen P. Leatherman, who heads the International Hurricane Research Center at F.I.U. But instruments on the towers stored their data and researchers should be able to obtain detailed measurements, he said.
Scientists like Dr.
Sallenger are also trying to improve estimates of likely hurricane impact by going beyond barometric pressure and wind speeds, the basis of the Saffir-Simpson hurricane scale, to include factors like the elevation of the landscape and the shape of the beach where a storm might strike.
Advancing this goal was one of the reasons Dr. Young and Dr. Coburn took to the air. Their chartered plane took off early Friday afternoon from the Pensacola Aviation Center here, and they spent the next four hours flying as far west as Grand Island, La., 180 miles away.
There was very little air traffic, and that was good, said the pilot, Gene Giles, better known as Skip, because it was not clear that air traffic control was operating normally.
When the small plane crossed paths with military helicopters, as happened two or three times, the researchers held on as Mr. Giles wigwagged to signal he had seen them. At one point, the plane banked sharply to avoid a C-130 that was coming in over the Gulf, heading for a landing at Biloxi.
One of the first places the researchers photographed was Dauphin Island, a 15-mile strip of sand at the mouth of Mobile Bay. Dauphin Island is regularly breached by coastal storms, and its mainland bridge was knocked out in Hurricane Frederick in 1979. Many coastal researchers regard it as an outstanding example of unwise coastal development.
"We paid $38 million for a new bridge," after the hurricane, said Orrin H. Pilkey Jr., director of Dr. Coburn's program, who has been an ardent advocate of retreat from hazardous coastlines since his parents experienced Hurricane Camille on this coast in 1969.
"There was a brief period when they debated not rebuilding the bridge," he said in a telephone interview, "but they did, and development skyrocketed on."
Even on its east end, where elevations are relatively high and the island's dune system is relatively robust, there were clear signs of storm damage. A row of groins stood out in the water, the beach they were meant to protect having eroded away behind them. A sea wall of rock riprap was out to sea. The ocean had overtopped it.
But it was on the west end of Dauphin Island that devastating storm damage was most apparent.
This part of the island had long since been without protective dunes. Some houses were simply gone, leaving nothing but pilings.
In places, the island's one road was obliterated. And everywhere, there were huge fans of sand on the island's back side, sand that had washed over from its ocean beach, leaving stranded houses seemingly wading on pilings into the surf.
On a developed barrier, like Dauphin Island, this sand movement is a disaster. "It's as if the island is sliding out from under the houses," Dr. Young observed as the plane flew over them.
But on undeveloped islands, it is a normal and advantageous response to rising sea levels -- as sand washes toward the back of the island it is, in effect, retreating away from the ocean.
Sand that washes from the ocean sides of the islands is deposited in their marshy bayside fringes, where it is left in large fans. Eventually, this sand is colonized by beach grass and other plants.
New marshes form on its landward fringes. The island will look the same; it will just be farther landward.
Dr. Young and Dr. Coburn could see this process at work as the plane flew over Petit Bois, Horn and other uninhabited islands of Gulf Islands National Seashore -- more evidence, Dr.
Young said, that damage to barrier islands is largely a matter of interference by people, "a human tragedy," as he put it.
"It's amazing how much better natural shorelines look after a storm than developed shorelines," he said.
Once the plane reached Waveland, Miss., however, there was little to feel good about.
When a storm like this makes landfall, pushing a wall of water with it, nothing along the beachfront will survive the assault, even structures built to withstand high winds. "Nothing can withstand waves," Dr. Pilkey said.
Under assault by a storm surge, most structures simply fall apart, and their debris batters buildings behind them, "missile-ing," Dr. Pilkey called it.
"Stuff is thrown back into the next house and it doesn't have a chance," he said.
Eventually the debris piles up and forms a kind of dike, which breaks the action of storm water and offers some protection to structures behind it.
Dr. Coburn and Dr. Young have seen this kind of wrack line of debris before. But this time, they said, it was unusually far inland, testimony to the hurricane's strength.
When coastal scientists survey storm damage, they often speak cynically about people who have chosen to build their houses in harm's way. This time, for Dr. Coburn and Dr. Young, it was different.
"It's hard to be callous," Dr. Young said as Mr. Giles sent the plane on another sweep along what remained of the Gulfport, Miss., shoreline. "People lived here. People's lives are scattered around down there."
Dr. Sallenger, of the Geological Survey, said he hoped that as events unfolded in coastal Mississippi and nearby, the realization of what occurred there would bring scientists and policymakers together.
"There are a lot of smart people from the research guys to the engineers to the people who build these things," he said. As people consider how and what to rebuild, he said, "Let's just do it better." From checker at panix.com Sun Sep 18 01:24:04 2005 From: checker at panix.com (Premise Checker) Date: Sat, 17 Sep 2005 21:24:04 -0400 (EDT) Subject: [Paleopsych] NYT: High-Tech Flood Control, With Nature's Help Message-ID: High-Tech Flood Control, With Nature's Help New York Times, 5.9.6 By WILLIAM J. BROAD
On a cold winter night in 1953, the Netherlands suffered a terrifying blow as old dikes and seawalls gave way during a violent storm.
Flooding killed nearly 2,000 people and forced the evacuation of 70,000 others. Icy waters turned villages and farm districts into lakes dotted with dead cows.
Ultimately, the waters destroyed more than 4,000 buildings.
Afterward, the Dutch -- realizing that the disaster could have been much worse, since half the country, including Amsterdam and Rotterdam, lies below sea level -- vowed never again.
After all, as Tjalle de Haan, a Dutch public works official, put it in an interview last week, "Here, if something goes wrong, 10 million people can be threatened."
So at a cost of some $8 billion over a quarter century, the nation erected a futuristic system of coastal defenses that is admired around the world today as one of the best barriers against the sea's fury -- one that could withstand the kind of storm that happens only once in 10,000 years.
The Dutch case is one of many in which low-lying cities and countries with long histories of flooding have turned science, technology and raw determination into ways of forestalling disaster.
London has built floodgates on the Thames River.
Venice is doing the same on the Adriatic.
Japan is erecting superlevees. Even Bangladesh has built concrete shelters on stilts as emergency havens for flood victims.
Experts in the United States say the foreign projects are worth studying for inspiration about how to rebuild New Orleans once the deadly waters of Hurricane Katrina recede into history.
"They have something to teach us," said George Z. Voyiadjis, head of civil and environmental engineering at Louisiana State University. "We should capitalize on them for building the future here."
Innovations are happening in the United States as well. California is experimenting with "smart" levees wired with nervous systems of electronic sensors that sound alarms if a weakening levee threatens to open a breach, giving crews time to make emergency repairs.
"It's catching on," said William F. Kane, president of Kane GeoTech Inc., a company in Stockton, Calif., that wires levees and other large structures with failure sensors. "There's a lot of potential for this kind of thing."
While scientists hail the power of technology to thwart destructive forces, they note that flood control is a job for nature at least as much as for engineers. Long before anyone built levees and floodgates, barrier islands were serving to block dangerous storm surges. Of course, those islands often fall victim to coastal development.
"You'll never be able to control nature," said Rafael L. Bras, an environmental engineer at the Massachusetts Institute of Technology who consults on the Venetian project. "The best way is to understand how nature works and make it work in our favor."
In humanity's long struggle against the sea, the Dutch experience in 1953 was a grim milestone. The North Sea flood produced the kind of havoc that became all too familiar on the Gulf Coast last week. When a crippled dike threatened to give way and let floodwaters spill into Rotterdam, a boat captain -- like the brave little Dutch boy with the quick finger -- steered his vessel into the breach, sinking his ship and saving the city.
"We were all called upon to collect clothes and food for the disaster victims," recalled Jelle de Boer, a Dutch high school student at the time who is now an emeritus professor of geology at Wesleyan University. "Cows were swimming around. They'd stand when they could, shivering and dying. It was a terrible mess."
The reaction was intense and manifold. Linking offshore islands with dams, seawalls and other structures, the Dutch erected a kind of forward defensive shield, drastically reducing the amount of vulnerable coastline. Mr. de Haan, director of the water branch of the Road and Hydraulic Engineering Institute of the Dutch Ministry of Transport, Public Works and Water Management, said the project had the effect of shortening the coast by more than 400 miles.
For New Orleans, experts say, a similar forward defense would seal off Lake Pontchartrain from the Gulf of Mexico. That step would eliminate a major conduit by which hurricanes drive storm surges to the city's edge -- or, as in the case of Katrina, through the barriers.
The Dutch also increased the height of their dikes, which now loom as much as 40 feet above the churning sea. (In New Orleans, the tallest flood walls are about half that size.)
The government also erected vast complexes of floodgates that close when the weather turns violent but remain open at other times, so saltwater can flow into estuaries, preserving their ecosystems and the livelihoods that depend on them.
The Netherlands maintains large teams of inspectors and maintenance crews that safeguard the sprawling complex, which is known as Delta Works. The annual maintenance bill is about $500 million. "It's not cheap," Mr. de Haan said. "But it's not so much in relation to the gross national product. So it's a kind of insurance."
The 1953 storm also pounded Britain. Along the Thames, flooding killed more than 300 people, ruined farmland and frightened Londoners, whose central city narrowly escaped disaster.
The British responded with a plan to better regulate tidal surges sweeping up the Thames from the North Sea. Engineers designed an attractive barrier meant to minimize interference with the river's natural flow. It went into service in 1982 at Woolwich, about 10 miles east of central London.
Normally, its semicircular gates lie flush to the riverbed in concrete supporting sills, creating no obstacle to river traffic. When the need arises, the gates pivot up, rising as high as a five-story building to block rising waters. The authorities have raised the Thames barrier more than 80 times.
In Venice, the precipitating event was a 1966 flood that caused wide damage and economic loss. The upshot was an ambitious plan known as the Moses Project, named after the biblical parting of the Red Sea. Its 78 gargantuan gates would rest on the floor of the Adriatic Sea and rise when needed to block dangerous tidal surges.
Long debate over the project's merits repeatedly delayed the start of construction until May 2003. Opponents claim that the $4.5 billion effort will prove ineffective while threatening to kill the fragile lagoon in which Venice sits. In theory, the gates are to be completed by 2010.
"People fight doing things like this," said Dr. Bras, of M.I.T. "But when disaster strikes you realize how important it is to think ahead."
Planners did just that in Bangladesh after a 1991 hurricane created huge storm surges that killed more than 130,000 people. World charities helped build hundreds of concrete shelters on stilts, which in recent storms have saved thousands of lives.
In Japan, a continuous battle against flooding in dense urban areas has produced an effort to develop superlevees. Unlike the customary mounds of earth, sand and rock that hold back threatening waters, they are broad expanses of raised land meant to resist breaks and withstand overflows.
The approach being tried in California relies on a technology known as time-domain reflectometry. It works on the same principle as radar: a pulse of energy fired down a coaxial cable bounces back when it reaches the end or a distortion, like a bend or crimp.
Careful measurement of the echoes traveling back along the cable can disclose serious distortions and danger. Dr. Kane, of Kane GeoTech, has installed such a system in the Sacramento River delta, along a levee that is threatening to fail.
Could such a system have saved New Orleans? "It would have given them more information," said Charles H. Dowding, a top expert on the technology at Northwestern University. "The failure of a levee would have been detected." But experts say it is still unclear whether such a warning would have been enough to prevent the catastrophic breaches.
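The distance to a reflection point follows directly from the echo's round-trip travel time and the pulse's propagation speed in the cable. Below is a rough back-of-the-envelope sketch of that arithmetic in Python; the velocity factor and the example delay are assumed illustrative values, not figures reported for any actual levee installation.

# Toy time-domain reflectometry (TDR) distance estimate. The velocity factor
# (0.66) and the 1.5-microsecond echo delay are assumed values chosen only to
# make the arithmetic concrete; they are not from the article.

C = 299_792_458.0  # speed of light in vacuum, in metres per second

def distance_to_reflection(echo_delay_s, velocity_factor=0.66):
    """Return the one-way distance (in metres) to a distortion in the cable.

    The pulse travels out to the distortion and back, so the one-way distance
    is half of (propagation speed in the cable * round-trip delay).
    """
    propagation_speed = velocity_factor * C
    return propagation_speed * echo_delay_s / 2.0

if __name__ == "__main__":
    # An echo arriving 1.5 microseconds after the pulse was sent would place
    # the distortion roughly 148 metres down the cable.
    print(round(distance_to_reflection(1.5e-6), 1), "m")

In practice, a monitoring setup would presumably compare each new trace against a baseline recorded when the levee was sound, so that a newly appearing reflection, rather than a single absolute reading, is what raises the alarm.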
Dr. Bras says sensor technologies for detecting levee failure hold much promise. But he adds that less glamorous approaches, like regular maintenance, may be even more valuable, since prevention is always the best cure.
"We have to learn that things have to be reviewed, revised, maintained and repaired as needed," he said. "To see a city like New Orleans suffer such devastation -- some of that was preventable."
He added that no matter how ambitious the coastal engineering, no matter how innovative and well maintained, the systems of levees, seawalls and floodgates were likely to suffer sporadic failures.
"Nature will throw big things at us once in a while," he said. "There's always the possibility that nature will trump us." From checker at panix.com Mon Sep 19 01:14:46 2005 From: checker at panix.com (Premise Checker) Date: Sun, 18 Sep 2005 21:14:46 -0400 (EDT) Subject: [Paleopsych] Darwin Awards Newsletter, 6 September 2005 Message-ID: Darwin Awards Newsletter, 6 September 2005 [Thanks to Laird for this.] --------------------------------------------+---+-+---+-+-+-+-+ The Darwin Awards salute the improvement of the human genome by honoring those who contrive to remove themselves from it. This honor is generally bestowed posthumously. --------------------------------------------+---+-+---+-+-+-+-+ Darwin Award: Surprise Attack Surprise -- CONFIRMED 3 January 2005, St. Maurice, Switzerland It was the first week of a weapons refresher course, and Swiss Army Grenadier Detachment 20/5 had just finished training with live ammo. The shooting instructor ordered the soldiers to secure their weapons for a break. The 24-year-old second lieutenant, in charge of this detachment, decided this would be a good time to demonstrate a knife attack on a soldier. Wielding his bayonet, he leaped toward one of his men, achieving complete surprise. But earlier that week, the soldiers had been drilled to release the safety catch and ready their guns for firing in the shortest possible time. The surprised soldier, seeing his lieutenant leaping toward him with a knife, snapped off a shot to protect himself from the attack. The lesson could not have been more successful: the soldier had saved himself and protected the rest of the detachment from a surprise attack. The lieutenant might have wished to commend his soldier on his quick action and accurate marksmanship. Unfortunately, he had been killed with one shot. Reference: Blick --------------------------------------------+---+-+---+-+-+-+-+ Darwin Award: Damned if You Do... -- CONFIRMED 6 September 2004, Romania A Pitesti man with a metal ring stuck on his penis was being sought by doctors, after he fled the hospital consumed by panic. The unidentified 42-year-old claimed he had put the ring on his penis after losing a bet during a drinking game at a pub. He was subsequently unable to remove the ring. Embarrassment kept him from seeking immediate medical help, but after two days, unbearable pain overcame unbearable shame, and he took his smelly and discolored penis in for treatment. Doctors told him that gangrene had set in, and his life was in danger. The blood supply had been cut off for too long, and there was nothing they could do but remove his penis, so that the necrosis did not spread to the rest of his body. The manhunt was ongoing. "There is no way he can escape going under the knife," said a doctor. "He must come back to the hospital and accept this." The man's only consolation is a guaranteed Darwin Award, one way or the other!
Reference: Daily Record (UK), Ananova --------------------------------------------+---+-+---+-+-+-+-+ Darwin Award: Playing with Elephants -- CONFIRMED 28 January 2005, Pendang, Thailand It's no secret that elephants are big. Elephants eat hundreds of pounds of food a day just to maintain their weight. Indian elephants are nine feet tall at the shoulder, and the males have tusks that extend over three feet. They're so powerful that in Southeast Asia they are used to haul massive tree trunks with their tusks, work performed by heavy equipment in other countries. It's also no secret that teasing an animal can make it mad. Teasing a nine-foot-tall animal that can carry a tree with its three-foot tusks may not be a good idea. Yet that was the very idea that formed in Prawat's head, when he saw a herd of five performing elephants chained to trees outside a Buddhist temple. While the owner waited inside for an entertainment permit, Prawat, a 50-year-old rubber-tapper, offered sugar cane to one of the ever-hungry elephants... then pulled it away. Then he did it again. And again. And again. The game was great fun for Prawat, but the elephant quickly tired of it. The last time Prawat withdrew the treat, the elephant swung his massive tusks and gored him through the stomach. He died on the way to Alor Star Hospital. Reference: The Star (Kuala Lumpur) --------------------------------------------+---+-+---+-+-+-+-+ Honorable Mention: Oops, Did It Again -- CONFIRMED 31 July 2005, Darwin, Australia A 30-year-old resident of this aptly named town of about 60,000, nestled in the Northern Territories on the Sea of Timor, just wanted to go home. But he was thwarted by two circumstances. First, he lived in an upper-level unit in a high-rise apartment building, and second, he had locked his keys in the apartment. It was around 4 a.m. Some people do their best thinking in the wee hours of the morning, but our protagonist is not one of them. He concluded that his best course of action was to scale the outside of the building. He managed to climb a short distance before he fell. Luckily, a parked car was beneath him to cushion the blow with its roof. He pulled himself off the shattered windshield and, unwilling to give up after one small setback, again set out to scale the wall. This time he reached the third floor before falling. He was less fortunate than before, as he landed on his head, yet also more fortunate, as this knocked him unconscious and saved him from a third attempt. He survived the fall, and was taken to Royal Darwin Hospital for treatment. Lest outsiders get the wrong idea of Darwin, Australia, we include a comment from a sergeant on the Darwin Police force: "It doesn't happen every day," he said. Reference: The Australian, Gold Coast Bulletin --------------------------------------------+---+-+---+-+-+-+-+ Honorable Mention: Catching the Boat -- CONFIRMED 28 September 2003, Vancouver, Canada William, a 36-year-old carpenter, hoped to become a stunt man. He had a brilliant plan. During the Vancouver Film Festival, movie people jetted in from all over the world. He would bungee from the Lions Gate Bridge, gracefully descend to the deck of a passing cruise ship, and disengage from the bungee cable as smoothly as James Bond, to the awe of the ship's passengers. Producers would marvel at his work, and discuss over cocktails who would hire him for their next film. Stunt men have the advantage of working with stunt coordinators, who carefully plot out each acrobatic feat with unerring accuracy. 
But William was a do-it-yourself man. He planned for over two years, checking the height of the tides, boat schedules, and deck layouts. He even lined up sponsors and recruited assistants. But, as it turned out, he could have used a stunt coordinator. The stunt began perfectly. William took a swan dive off the bridge, trailing the bungee cord behind him. He felt it grow taut as it stretched and began to slow his descent. The tennis court of the cruise ship drew nearer. And nearer. And nearer... He slammed into the deck, hurtled into a volleyball net, bounced against a deck railing, and found himself flying once more into the air, watching the cruise ship sail away. Although he had failed to make his James Bond entrance, "people on the boat loved it," he told a reporter. "They were screaming, yelling, waving." A witness, however, described the reaction as "shrieks of horror." William dangled above the water for a few minutes, confirming that no bones were broken, and making a mental note to use a shorter bungee cord next time. A water taxi positioned itself beneath him, and he gracefully descended to its deck, and smoothly disengaged from the bungee cable. William is still waiting to hear from the movie producers. Reference: AP, cnn.com --------------------------------------------+---+-+---+-+-+-+-+ Personal Account: Watch Where You're Going I hired several laborers to prepare two garden areas for me. They needed some supplies, so I showed them the location of ice water and the bathroom, and left to obtain the supplies. Upon my return, I noticed an ambulance in front of my home, along with two police cars. The police informed me that the neighbor had called 911 to report a naked man screaming and running around the yard next door. As it turned out, one of the laborers had needed to answer the call of nature. Rather than use the bathroom I had shown him, he went into the woods behind our house, dropped his trousers, and squatted down -- right on top of a huge nest of hornets. He was released from the hospital after about a week, having learned a very painful and nearly fatal lesson: always watch where you're going. --------------------------------------------+---+-+---+-+-+-+-+ Personal Account: Brake Care Summer 2001, USA I am a keen mountain-biker, and was the proud owner of a fairly expensive mountain bike. My bike was fitted with 'V' brakes, which are extremely effective, though prone to squealing. My dear brother decided to have a ride on my bike one day, while I was out. He noticed the squealing as he cycled down the hill we live on, towards the invariably busy crossroads at the bottom. Being a helpful sort, he headed back home and proceeded to pour a generous amount of 3-in-1 oil onto the brakes, before once more setting off down the hill. The oil worked! The only reported squealing came from my brother, as he slammed into the side of a moving VW Beetle. To this day he sports an impressive scar running from his eye socket to just past his ear. And yes, the bike was totaled. From checker at panix.com Mon Sep 19 01:14:51 2005 From: checker at panix.com (Premise Checker) Date: Sun, 18 Sep 2005 21:14:51 -0400 (EDT) Subject: [Paleopsych] Independent: Parts of the United States are as poor as the Third World Message-ID: Parts of the United States are as poor as the Third World http://news.independent.co.uk/world/politics/article311066.ece Parts of the United States are as poor as the Third World, according to a shocking United Nations report on global inequality. 
Claims that the New Orleans floods have laid bare a growing racial and economic divide in the US have, until now, been rejected by the American political establishment as emotional rhetoric. But yesterday's UN report provides statistical proof that for many - well beyond those affected by the aftermath of Hurricane Katrina - the great American Dream is an ongoing nightmare. The document constitutes a stinging attack on US policies at home and abroad in a fightback against moves by Washington to undermine next week's UN 60th anniversary conference which will be the biggest gathering of world leaders in history. The annual Human Development Report normally concerns itself with the Third World, but the 2005 edition scrutinises inequalities in health provision inside the US as part of a survey of how inequality worldwide is retarding the eradication of poverty. It reveals that the infant mortality rate has been rising in the US for the past five years - and is now the same as Malaysia. America's black children are twice as likely as whites to die before their first birthday. The report is bound to incense the Bush administration as it provides ammunition for critics who have claimed that the fiasco following Hurricane Katrina shows that Washington does not care about poor black Americans. But the 370-page document is critical of American policies towards poverty abroad as well as at home. And, in unusually outspoken language, it accuses the US of having "an overdeveloped military strategy and an under-developed strategy for human security". "There is an urgent need to develop a collective security framework that goes beyond military responses to terrorism," it continues. " Poverty and social breakdown are core components of the global security threat." The document, which was written by Kevin Watkins, the former head of research at Oxfam, will be seen as round two in the battle between the UN and the US, which regards the world body as an unnecessary constraint on its strategic interests and actions. Last month John Bolton, the new US ambassador to the UN, submitted 750 amendments to the draft declaration for next week's summit to strengthen the UN and review progress towards its Millennium Development Goals to halve world poverty by 2015. The report launched yesterday is a clear challenge to Washington. The Bush administration wants to replace multilateral solutions to international problems with a world order in which the US does as it likes on a bilateral basis. "This is the UN coming out all guns firing," said one UN insider. "It means that, even if we have a lame duck secretary general after the Volcker report (on the oil-for-food scandal), the rest of the organisation is not going to accept the US bilateralist agenda." The clash on world poverty centres on the US policy of promoting growth and trade liberalisation on the assumption that this will trickle down to the poor. But this will not stop children dying, the UN says. Growth alone will not reduce poverty so long as the poor are denied full access to health, education and other social provision. Among the world's poor, infant mortality is falling at less than half of the world average. To tackle that means tackling inequality - a message towards which John Bolton and his fellow US neocons are deeply hostile. India and China, the UN says, have been very successful in wealth creation but have not enabled the poor to share in the process. A rapid decline in child mortality has therefore not materialised. 
Indeed, when it comes to reducing infant deaths, India has now been overtaken by Bangladesh, which is only growing a third as fast. Poverty could be halved in just 17 years in Kenya if the poorest people were enabled to double the amount of economic growth they can achieve at present. Inequality within countries is as stark as the gaps between countries, the UN says. Poverty is not the only issue here. The death rate for girls in India is now 50 per cent higher than for boys. Gender bias means girls are not given the same food as boys and are not taken to clinics as often when they are ill. Foetal scanning has also reduced the number of girls born. The only way to eradicate poverty, it says, is to target inequalities. Unless that is done the Millennium Development Goals will never be met. And 41 million children will die unnecessarily over the next 10 years.
Decline in health care
Child mortality is on the rise in the United States
For half a century the US has seen a sustained decline in the number of children who die before their fifth birthday. But since 2000 this trend has been reversed. Although the US leads the world in healthcare spending - per head of population it spends twice what other rich OECD nations spend on average, 13 per cent of its national income - this high level goes disproportionately on the care of white Americans. It has not been targeted to eradicate large disparities in infant death rates based on race, wealth and state of residence.
The infant mortality rate in the US is now the same as in Malaysia
High levels of spending on personal health care reflect America's cutting-edge medical technology and treatment. But the paradox at the heart of the US health system is that, because of inequalities in health financing, countries that spend substantially less than the US have, on average, a healthier population. A baby boy from one of the top 5 per cent richest families in America will live 25 per cent longer than a boy born in the bottom 5 per cent and the infant mortality rate in the US is the same as Malaysia, which has a quarter of America's income.
Blacks in Washington DC have a higher infant death rate than people in the Indian state of Kerala
The health of US citizens is influenced by differences in insurance, income, language and education. Black mothers are twice as likely as white mothers to give birth to a low birthweight baby. And their children are more likely to become ill. Throughout the US black children are twice as likely to die before their first birthday.
Hispanic Americans are more than twice as likely as white Americans to have no health cover
The US is the only wealthy country with no universal health insurance system. Its mix of employer-based private insurance and public coverage does not reach all Americans. More than one in six people of working age lack insurance. One in three families living below the poverty line are uninsured. Just 13 per cent of white Americans are uninsured, compared with 21 per cent of blacks and 34 per cent of Hispanic Americans. Being born into an uninsured household increases the probability of death before the age of one by about 50 per cent.
More than a third of the uninsured say that they went without medical care last year because of cost
Uninsured Americans are less likely to have regular outpatient care, so they are more likely to be admitted to hospital for avoidable health problems. More than 40 per cent of the uninsured do not have a regular place to receive medical treatment.
More than a third say that they or someone in their family went without needed medical care, including prescription drugs, in the past year because they lacked the money to pay. If the gap in health care between black and white Americans was eliminated it would save nearly 85,000 lives a year. Technological improvements in medicine save about 20,000 lives a year.
Child poverty rates in the United States are now more than 20 per cent
Child poverty is a particularly sensitive indicator for income poverty in rich countries. It is defined as living in a family with an income below 50 per cent of the national average. The US - with Mexico - has the dubious distinction of seeing its child poverty rates increase to more than 20 per cent. In the UK - which at the end of the 1990s had one of the highest child poverty rates in Europe - the rise in child poverty, by contrast, has been reversed through increases in tax credits and benefits. From checker at panix.com Mon Sep 19 01:15:10 2005 From: checker at panix.com (Premise Checker) Date: Sun, 18 Sep 2005 21:15:10 -0400 (EDT) Subject: [Paleopsych] AP: Genes Show Signs Brain Still Evolving Message-ID: Genes Show Signs Brain Still Evolving http://news.yahoo.com/s/ap/20050908/ap_on_sc/brain_evolution_2&printer=1;_ylt=ArCMI_HmG3UdlbdYqWrHtjpxieAA;_ylu=X3oDMTA3MXN1bHE0BHNlYwN0bWE- By LAURAN NEERGAARD, AP Medical Writer Thu Sep 8, 5:01 PM ET The human brain may still be evolving. So suggests new research that tracked changes in two genes thought to help regulate brain growth, changes that appeared well after the rise of modern humans 200,000 years ago. That the defining feature of humans -- our large brains -- continued to evolve as recently as 5,800 years ago, and may be doing so today, promises to surprise the average person, if not biologists. "We, including scientists, have considered ourselves as sort of the pinnacle of evolution," noted lead researcher Bruce Lahn, a University of Chicago geneticist whose studies appear in Friday's edition of the journal Science. "There's a sense we as humans have kind of peaked," agreed Greg Wray, director of Duke University's Center for Evolutionary Genomics. "A different way to look at is it's almost impossible for evolution not to happen." Still, the findings also are controversial, because it's far from clear what effect the genetic changes had or if they arose when Lahn's "molecular clock" suggests -- at roughly the same time period as some cultural achievements, including written language and the development of cities. Lahn and colleagues examined two genes, named microcephalin and ASPM, that are connected to brain size. If those genes don't work, babies are born with severely small brains, called microcephaly. Using DNA samples from ethnically diverse populations, they identified a collection of variations in each gene that occurred with unusually high frequency. In fact, the variations were so common they couldn't be accidental mutations but instead were probably due to natural selection, where genetic changes that are favorable to a species quickly gain a foothold and begin to spread, the researchers report. Lahn offers an analogy: Medieval monks would copy manuscripts and each copy would inevitably contain errors -- accidental mutations. Years later, a ruler declares one of those copies the definitive manuscript, and a rush is on to make many copies of that version -- so whatever changes from the original are in this presumed important copy become widely disseminated.
Scientists attempt to date genetic changes by tracing back to such spread, using a statistical model that assumes genes have a certain mutation rate over time. For the microcephalin gene, the variation arose about 37,000 years ago, about the time period when art, music and tool-making were emerging, Lahn said. For ASPM, the variation arose about 5,800 years ago, roughly correlating with the development of written language, spread of agriculture and development of cities, he said. "The genetic evolution of humans in the very recent past might in some ways be linked to the cultural evolution," he said. Other scientists urge great caution in interpreting the research. That the genetic changes have anything to do with brain size or intelligence "is totally unproven and potentially dangerous territory to get into with such sketchy data," stressed Dr. Francis Collins, director of the National Human Genome Research Institute. Aside from not knowing what the gene variants actually do, no one knows how precise the model Lahn used to date them is, Collins added. Lahn's own calculations acknowledge that the microcephalin variant could have arisen anywhere from 14,000 to 60,000 years ago, and that the uncertainty about the ASPM variant ranged from 500 to 14,000 years ago. Those criticisms are particularly important, Collins said, because Lahn's testing did find geographic differences in populations harboring the gene variants today. They were less common in sub-Saharan African populations, for example. That does not mean one population is smarter than another, Lahn and other scientists stressed, noting that numerous other genes are key to brain development. "There's just no correlation," said Duke's Wray, calling education and other environmental factors more important for intelligence than DNA anyway. The work was funded by the Howard Hughes Medical Institute. From checker at panix.com Mon Sep 19 01:14:57 2005 From: checker at panix.com (Premise Checker) Date: Sun, 18 Sep 2005 21:14:57 -0400 (EDT) Subject: [Paleopsych] Vanderbilt U.: Odd behavior and creativity may go hand-in-hand Message-ID: Odd behavior and creativity may go hand-in-hand http://www.exploration.vanderbilt.edu/news/news_schizotypes.htm 5.9.6 By Melanie Moran Often viewed as a hindrance, having a quirky or socially awkward approach to life may be the key to becoming a great artist, composer or inventor. New research on individuals with schizotypal personalities -- people characterized by odd behavior and language but who are not psychotic or schizophrenic -- offers the first neurological evidence that they are more creative than either normal or fully schizophrenic individuals, and rely more heavily on the right sides of their brains than the general population to access their creativity. The work by Vanderbilt psychologists Brad Folley and Sohee Park was published online last week by the journal Schizophrenia Research. "The idea that schizotypes have enhanced creativity has been out there for a long time but no one has investigated the behavioral manifestations and their neural correlates experimentally," Folley says. "Our paper is unique because we investigated the creative process experimentally and we also looked at the blood flow in the brain while research subjects were undergoing creative tasks." Folley and Park conducted two experiments to compare the creative thinking processes of schizotypes, schizophrenics and normal control subjects.
In the first experiment, the researchers showed research subjects a variety of household objects and asked them to make up new functions for them. The results showed that the schizotypes were better able to creatively suggest new uses for the objects, while the schizophrenics and average subjects performed similarly to one another. "Thought processes for individuals with schizophrenia are often very disorganized, almost to the point where they can't really be creative because they cannot get all of their thoughts coherent enough to do that," Folley observes. "Schizotypes, on the other hand, are free from the severe, debilitating symptoms surrounding schizophrenia and also have an enhanced creative ability." (Image courtesy of Park Lab.) As a way to measure their creativity, research subjects were shown a variety of everyday objects, such as a spool of thread and a fork, as well as more ambiguous objects, such as a cocktail jigger and cheese grater, and then were asked to make up new functions for them. In the second experiment, the three groups again were asked to identify new uses for everyday objects as well as to perform a basic control task while the activity in their prefrontal lobes was monitored using a brain-scanning technique called near-infrared optical spectroscopy. The brain scans showed that all groups used both brain hemispheres for creative tasks, but that the activation of the right hemispheres of the schizotypes was dramatically greater than that of the schizophrenic and average subjects, suggesting a positive benefit of schizotypy. "In the scientific community, the popular idea that creativity exists in the right side of the brain is thought to be ridiculous, because you need both hemispheres of your brain to make novel associations and to perform other creative tasks," Folley says. "We found that all three groups, schizotypes, schizophrenics and normal controls, did use both hemispheres when performing creative tasks. But the brain scans of the schizotypes showed a hugely increased activation of the right hemisphere compared to the schizophrenics and the normal controls." (Image courtesy of Park Lab.) This diagram outlines how the divergent thinking task was carried out. The subjects were first shown a target object. They were then asked to identify other objects, also shown on the screen, that were similar in color to the target by pressing the numbers on a keyboard that corresponded to the objects. The subjects were then asked to identify which of the other objects could be "used" with the target by pressing the appropriate keys. At the end of the sequence, the participants were asked to verbally explain their responses to the researcher to verify their decision making during the "uses" phase of the experiment. The color-matching task served as a control to help the researchers distinguish between times when the subjects were simply putting the objects in categories and when they were actually devising new uses for them. The researchers believe that the results offer support for the idea that schizotypes and other psychoses-prone populations draw on the left and right sides of their brains differently than the average population, and that this bilateral use of the brain for a variety of tasks may be related to their enhanced creativity.
In support of this theory, Folley points to research by Swiss neuroscientist Peter Brugger who found that everyday associations, such as recognizing the car key on your keychain, and verbal abilities are controlled by the left hemisphere while novel associations, such as finding a new use for an object or navigating a new place, are controlled by the right hemisphere. Brugger hypothesized that schizotypes should make novel associations faster because they are better at accessing both hemispheres - a prediction that was verified in a subsequent study. His theory can also explain research which shows that a disproportionate number of schizotypes and schizophrenics are neither right nor left hand dominant, but instead use both hands for a variety of tasks, suggesting that they recruit both sides of their brains more than the average person. "The lack of specialization for certain tasks in brain hemispheres could be seen as a liability, but the increased communication between the hemispheres actually could provide added creativity," Folley says. Folley, who is in the process of completing his dissertation at Vanderbilt, is currently pursuing a clinical internship and research at the University of California Los Angeles. Park is an associate professor of psychology and an investigator in the Vanderbilt Kennedy Center for Research on Human Development. The work was supported by grants from the National Institute of Mental Health and the National Institute of Child Health and Human Development. "Verbal creativity and schizotypal personality in relation to prefrontal hemispheric laterality: A behavioral and near-infrared optical imaging study" Schizophrenia Research http://www.sciencedirect.com/science?_ob=ArticleURL&_aset=V-WA-A-W-WA-MsSAYVA-UUA-U-AAWDDYDYBY-AAWCBZYZBY-BUZYVWUWE-WA-U&_rdoc=2&_fmt=summary&_udi=B6TC2-4GY8942-1&_coverDate=08%2F24%2F2005&_cdi=5158&_orig=search&_st=13&_sort=d&view=c&_acct=C000006878&_version=1&_urlVersion=0&_userid=86629&md5=609a0343032e679f4267cd15c48a60ea (subscription required) Sohee Park's home page http://www.psy.vanderbilt.edu/faculty/sohee/soheepersonalpage.htm Brad Folley's home page http://www.psy.vanderbilt.edu/faculty/sohee/brad.htm ----- End forwarded message ----- -- Eugen* Leitl leitl ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.leitl.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From checker at panix.com Mon Sep 19 01:15:45 2005 From: checker at panix.com (Premise Checker) Date: Sun, 18 Sep 2005 21:15:45 -0400 (EDT) Subject: [Paleopsych] SFBG: Censored!: Project Censored presents the 10 biggest stories the mainstream media ignored over the past year. Message-ID: Censored!: Project Censored presents the 10 biggest stories the mainstream media ignored over the past year. San Francisco Bay Guardian News http://www.sfbg.com/39/49/cover_censored.html By Camille T. Taiara JUST FOUR DAYS before the 2004 presidential election, a prestigious British medical journal published the results of a rigorous study by Dr. Les Roberts, a widely respected researcher. Roberts concluded that close to 100,000 people had died in the invasion and occupation of Iraq. Most were noncombatant civilians. Many were children. But that news didn't make the front pages of the major newspapers. It wasn't on the network news. So most voters knew little or nothing about the brutal civilian impact of President George W. Bush's war when they went to the polls.
That's just one of the big stories the mainstream news media ignored, blacked out, or underreported over the past year, according to Project Censored, a media watchdog group based at California's Sonoma State University. Every year project researchers scour the media looking for news that never really made the news, publishing the results in a book, this year titled Censored 2006. Of course, as Project Censored staffers painstakingly explain every year, their "censored" stories aren't literally censored, per se. Most can be found on the Internet, if you know where to look. And some have even received some ink in the mainstream press. "Censorship," explains project director Peter Phillips, "is any interference with the free flow of information in society." The stories highlighted by Project Censored simply haven't received the kind of attention they warrant, and therefore haven't made it into the greater public consciousness. "If there were a real democratic press, these are the kind of stories they would do," says Sut Jhally, professor of communications at the University of Massachusetts and executive director of the Media Education Foundation. The stories the researchers identify involve corporate misdeeds and governmental abuses that have been underreported if not altogether ignored, says Jhally, who helped judge Project Censored's top picks. For the most part, he adds, "stories that affect the powerful don't get reported by the corporate media." Can a story really be "censored" in the Internet age, when information from millions of sources whips around the world in a matter of seconds? When a single obscure journal article can be distributed and discussed on hundreds of blogs and Web sites? When partisans from all sides dissect the mainstream media on the Web every day? Absolutely, Jhally says. "The Internet is a great place to go if you already know that the mainstream media is heavily biased" and you actively search out sites on the outer limits of the Web, he notes. "Otherwise, it's just another place where they try to sell you stuff. The challenge for a democratic society is how to get vital information not only at the margins but at the center of our culture." Not every article or source Project Censored has cited over the years is completely credible; at least one this year is pretty shaky (see sidebar). But most of the stories that made the project's top 10 were published by more reliable sources and included only verifiable information. And Project Censored's overall findings provide valuable insights into the kinds of issues the mainstream media should be paying closer attention to. 1. Bush administration moves to eliminate open government While the Bush administration has expanded its ability to keep tabs on civilians, it's been working to make sure the public - and even Congress - can't find out what the government is doing. One year ago, Rep. Henry A. Waxman (D-Calif.) released an 81-page analysis of how the administration has administered the country's major open government laws. His report found that the feds consistently "narrowed the scope and application" of the Freedom of Information Act, the Presidential Records Act, and other key public information legislation, while expanding laws blocking access to certain records - even creating new categories of "protected" information and exempting entire departments from public scrutiny. 
When those methods haven't been enough, the Bush administration has simply refused to release records - even when the requester was a Congressional subcommittee or the Government Accountability Office, the study found. A few of the potentially incriminating documents Bush and Co. have refused to hand over to their colleagues on Capitol Hill include records of contacts between large energy companies and Vice President Dick Cheney's energy task force; White House memos pertaining to Saddam Hussein's, shall we say, "elusive" weapons of mass destruction; and reports describing torture at Abu Ghraib. The report's findings were so dramatic as to indicate "an unprecedented assault on the laws that make our government open and accountable," Waxman said at a Sept. 14, 2004, press conference announcing the report's release. Given the news media's intrinsic interest in safeguarding open government laws, one would think it would be plenty motivated to publicize such findings far and wide. However, most Americans remain oblivious to just how much more secretive - and autocratic - our leaders in the White House have become. Source: "New Report Details Bush Administration Secrecy" press release, Karen Lightfoot, Government Reform Minority Office, posted on www.commondreams.org, Sept. 14, 2004. 2. Media coverage fails on Iraq: Fallujah and the civilian death toll Decades from now, the civilized world may well look back on the assaults on Fallujah in April and November 2004 and point to them as examples of the United States' and Britain's utter disregard for the most basic wartime rules of engagement. Not long after the "coalition" had embarked on its second offensive, UN High Commissioner for Human Rights Louise Arbour called for an investigation into whether the Americans and their allies had engaged in "the deliberate targeting of civilians, indiscriminate and disproportionate attacks, the killing of injured persons, and the use of human shields," among other possible "grave breaches of the Geneva Conventions ... considered war crimes" under federal law. More than 83 percent of Fallujah's 300,000 residents fled the city, Mary Trotochaud and Rick McDowell, staffers with the American Friends Service Committee, reported in AFSC's Peacework magazine. Men between the ages of 15 and 45 were refused safe passage, and all who remained - about 50,000 - were treated as enemy combatants, according to the article. Numerous sources reported that coalition forces cut off water and electricity, seized the main hospital, shot at anyone who ventured out into the open, executed families waving white flags while trying to swim across the Euphrates or otherwise flee the city, shot at ambulances, raided homes and killed people who didn't understand English, rolled over injured people with tanks, and allowed corpses to rot in the streets and be eaten by dogs. Medical staff and others reported seeing people, dead and alive, with melted faces and limbs, injuries consistent with the use of phosphorous bombs. But you wouldn't know any of this unless you'd come across a rare report by one of an even rarer number of independent journalists - or known which obscure Web site to log onto for real information. Of course, the media blackout extends far beyond Fallujah. The US military's refusal to keep an Iraqi death count has been mirrored by the mainstream media, which systematically dodges the question of how many Iraqi civilians have been killed. 
Les Roberts, an investigator with the Johns Hopkins Bloomberg School of Public Health, conducted a rigorous inquiry into pre- and post-invasion mortality in Iraq, sneaking into Iraq by lying flat on the bed of an SUV and training observers on the scene. The results were published in the Lancet, a prestigious peer-reviewed British medical journal, on Oct. 29, 2004 - just four days prior to the US presidential elections. Roberts and his team (including researchers from Columbia University and from Al-Mustansiriya University, in Baghdad) concluded that "the death toll associated with the invasion and occupation of Iraq is probably about 100,000 people, and may be much higher." The vast majority of those deaths resulted from violence - particularly aerial bombardments - and more than half of the fatalities were women or children, they found. The State Department had relied heavily on studies by Roberts in the past. And when Roberts, using similar techniques, calculated in 2000 that about 1.7 million had died in the Congo as the result of almost two years of armed conflict, the news media picked up the story, the United Nations more than doubled its request for aid to the Congo, and the United States pledged an additional $10 million. This time, silence - interrupted only by the occasional critique dismissing Roberts's report. The major television news shows, Project Censored found, never mentioned it. Sources: "The Invasion of Fallujah: A Study in the Subversion of Truth," Mary Trotochaud and Rick McDowell, Peacework, Dec. 2004-Jan. 2005; "US Media Applauds Destruction of Fallujah," David Walsh, www.wsws.org (World Socialist Web site), Nov. 17, 2004; "Fallujah Refugees Tell of Life and Death in the Kill Zone," Dahr Jamail, New Standard, Dec. 3, 2004; "Mortality before and after the 2003 Invasion of Iraq," Les Roberts, Riyadh Lafta, Richard Garfield, Jamal Khudhairi, and Gilbert Burnham, Lancet, Oct. 29, 2004; "The War in Iraq: Civilian Casualties, Political Responsibilities," Richard Horton, Lancet, Oct. 29, 2004; "Lost Count," Lila Guterman, Chronicle of Higher Education, Feb. 4, 2005; "CNN to Al Jazeera: Why Report Civilian Deaths?" Fairness and Accuracy in Reporting, April 15, 2004, and Asheville Global Report, April 22-28, 2004. 3. Another year of distorted election coverage Last year Project Censored foretold the potential for electoral wrongdoing in the 2004 presidential campaign: The "sale of electoral politics" made number six in the list of 2003-04's most underreported stories. The mainstream media had largely ignored the evidence that electronic voting machines were susceptible to tampering, as well as political alliances between the machines' manufacturers and the Republican Party. Then came Nov. 2, 2004. Bush prevailed by 3 million votes - despite exit polls that clearly projected Kerry winning by a margin of 5 million. "Exit polls are highly accurate," Steve Freeman, professor at the University of Pennsylvania's Center for Organizational Dynamics, and Temple University statistician Josh Mitteldorf wrote in In These Times. "They remove most of the sources of potential polling error by identifying actual voters and asking them immediately afterward who they had voted for." The eight-million-vote discrepancy was well beyond the poll's recognized, less-than-1-percent margin of error. And when Freeman and Mitteldorf analyzed the data collected by the two companies that conducted the polls, they found concrete evidence of potential fraud in the official count. 
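The "less-than-1-percent margin of error" cited for exit polls is a statement about sampling error, which for a simple random sample follows the standard formula z * sqrt(p * (1 - p) / n). The sketch below is a back-of-envelope illustration of that arithmetic only, not part of Freeman and Mitteldorf's analysis; the sample sizes are hypothetical, and real exit polls add clustering and non-response effects that this calculation ignores.

```python
# Back-of-envelope sampling margin of error (95% confidence) for a share p
# estimated from n respondents, assuming simple random sampling. Real exit
# polls have design effects and non-sampling error that this ignores.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1.0 - p) / n)

# Sample sizes are illustrative, not taken from the 2004 exit polls.
for n in (1_000, 10_000, 70_000):
    print(f"n={n:>6}: +/- {margin_of_error(0.5, n):.2%}")
# Sampling error alone falls to about 1% at n = 10,000 and well below it
# as pooled samples reach the tens of thousands.
```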
"Only in precincts that used old-fashioned, hand-counted paper ballots did the official count and the exit polls fall within the normal sampling margin of error," they wrote. And "the discrepancy between the exit polls and the official count was considerably greater in the critical swing states." Inconsistencies were so much more marked in African American communities as to renew calls for racial equity in our voting system. "It is now time to make counting that vote a right, not just casting it, before Jim Crow rides again in the next election," wrote Rev. Jesse Jackson and Greg Palast in the Seattle Post-Intelligencer. Sources: "A Corrupt Election," Steve Freeman and Josh Mitteldorf, In These Times, Feb. 15, 2005; "Jim Crow Returns to the Voting Booth," Greg Palast and Rev. Jesse Jackson, Seattle Post-Intelligencer, Jan. 26, 2005; "How a Republican Election Supervisor Manipulated the 2004 Central Ohio Vote," Bob Fitrakis and Harvey Wasserman, www.freepress.org, Nov. 23, 2004. 4. Surveillance society quietly moves in It's a well-known dirty trick in the halls of government: If you want to pass unpopular legislation that you know won't stand up to scrutiny, just wait until the public isn't looking. That's precisely what the Bush administration did Dec. 13, 2003, the day American troops captured Saddam Hussein. Bush celebrated the occasion by privately signing into law the Intelligence Authorization Act - a controversial expansion of the PATRIOT Act that included items culled from the "Domestic Security Enhancement Act of 2003," a draft proposal that had been shelved due to public outcry after being leaked. Specifically, the IAA allows the government to obtain an individual's financial records without a court order. The law also makes it illegal for institutions to inform anyone that the government has requested those records, or that information has been shared with the authorities. "The law also broadens the definition of 'financial institution' to include insurance companies, travel and real-estate agencies, stockbrokers, the US Postal Service, jewelry stores, casinos, airlines, car dealerships, and any other business 'whose cash transactions have a high degree of usefulness in criminal, tax, or regulatory matters' " warned Nikki Swartz in the Information Management Journal. According to Swartz, the definition is now so broad that it could plausibly be used to access even school transcripts or medical records. "In one fell swoop, this act has decimated our rights to privacy, due process, and freedom of speech," Anna Samson Miranda wrote in an article for LiP magazine titled "Grave New World" that documented the ways in which the government already employs high-tech, private industry, and everyday citizens as part of a vast web of surveillance. Miranda warned, "If we are too busy, distracted, or apathetic to fight government and corporate surveillance and data collection, we will find ourselves unable to go anywhere - whether down the street for a cup of coffee or across the country for a protest - without being watched." Sources: "PATRIOT Act's Reach Expanded Despite Part Being Struck Down," Nikki Swartz, Information Management Journal, March/April 2004; "Grave New World," Anna Samson Miranda, LiP, Winter 2004; "Where Big Brother Snoops on Americans 24/7," Teresa Hampton and Doug Thompson, www.capitolhillblue.com, June 7, 2004. 5. 
US uses tsunami to military advantage in Southeast Asia The American people reacted to the tsunami that hit the Indian Ocean last December with an outpouring of compassion and private donations. Across the nation, neighbors got together to collect food, clothing, medicine, and financial contributions. Schoolchildren completed class projects to help the cause. Unfortunately, the US government didn't reflect the same level of altruism. President Bush initially offered an embarrassingly low $15 million in aid. More important, Project Censored found that the US government exploited the catastrophe to its own strategic advantage. Establishing a stronger military presence in the area could help the United States keep closer tabs on China - which, thanks to its burgeoning economic and military muscle, has emerged as one of this country's greatest potential rivals. It could also fortify an important military launching ground and help consolidate control over potentially lucrative trade routes. The United States currently operates a base out of Diego Garcia - a former British mandate in the Chagos Archipelago (about halfway between Africa and Indonesia), but the lease runs out in 2016. The isle is also "remote and Washington is desperate for an alternative," veteran Indian journalist Rahul Bedi wrote. "Consequently, in the name of relief, the US revived the Utapao military base in Thailand it had used during the Vietnam War [and] reactivated its military cooperation agreements with Thailand and the Visiting Forces Agreement with the Philippines," Bedi reported. Last February the State Department mended broken ties with the notoriously vicious and corrupt Indonesian military - although human rights observers charged the military with withholding "food and other relief from civilians suspected of supporting the secessionist insurgency, the Free Aceh Movement," Jim Lobe reported for the Inter Press Service. Sources: "US Turns Tsunami into Military Strategy," Jane's Foreign Report, Feb. 15, 2005; "US Has Used Tsunami to Boost Aims in Stricken Area," Rahul Bedi, Irish Times, Feb. 8, 2005; "Bush Uses Tsunami Aid to Regain Foothold in Indonesia," Jim Lobe, Inter Press Service, Jan. 18, 2005. 6. The real oil-for-food scam Last year, right-wingers in Congress began kicking up a fuss about how the United Nations had allegedly allowed Saddam Hussein to rake in $10 billion in illegal cash through the Oil for Food program. Headlines screamed scandal. New York Times columnist William Safire referred to the alleged UN con game as "the richest rip-off in world history." But those who knew how the program had been set up and run - and under whose watch - were not swayed. The initial accusations were based on a General Accounting Office report released in April 2004 and were later bolstered by a more detailed report commissioned by the CIA. According to the GAO, Hussein smuggled $6 billion worth of oil out of Iraq - most of it through the Persian Gulf. Yet the UN fleet charged with intercepting any such smugglers was under direct command of American officers, and consisted overwhelmingly of US Navy ships. In 2001, for example, 90 of its vessels belonged to the United States, while Britain contributed only 4, Joy Gordon wrote in a December 2004 article for Harper's magazine. Most of the oil that left Iraq by land did so through Jordan and Turkey - with the approval of the United States. 
The first Bush administration informally exempted Jordan from the ban on purchasing Iraqi oil - an arrangement that provided Hussein with $4.4 billion over 10 years, according to the CIA's own findings. The United States later allowed Iraq to leak another $710 million worth of oil through Turkey - "all while US planes enforcing no-fly zones flew overhead," Gordon wrote. Scott Ritter, a UN weapons inspector in Iraq during the first six years of economic sanctions against the country, unearthed yet another scam: The United States allegedly allowed an oil company run by Russian foreign minister Yevgeny Primakov's sister to purchase cheap oil from Iraq and resell it to US companies at market value - purportedly earning Hussein "hundreds of millions" more. "It has been estimated that 80 percent of the oil illegally smuggled out of Iraq under 'oil for food' ended up in the United States," Ritter wrote in the UK Independent. Sources: "The UN Is Us: Exposing Saddam Hussein's Silent Partner," Joy Gordon, Harper's, December 2004; "The Oil for Food 'Scandal' Is a Cynical Smokescreen," Scott Ritter, UK Independent, Dec. 12, 2004. 7. Journalists face unprecedented dangers to life and livelihood Last year was the deadliest year for reporters since the International Federation of Journalists began keeping tabs in 1984. A total of 129 media workers lost their lives, and 49 of them - more than a third - were killed in Iraq. In short, nonembedded journalists have now become familiar victims of US military actions abroad. "As far as anyone has yet proved, no commanding officer ever ordered a subordinate to fire on journalists as such," Weissman wrote in an update for Censored 2006. But what can be shown is a pattern of tacit complicity, side by side with a heavy-handed campaign to curb journalists' right to roam freely. The Pentagon has refused to implement basic safeguards to protect journalists who aren't embedded with coalition forces, despite repeated requests by Reuters and media advocacy organizations. The US military exonerated the army of any wrongdoing in its now-infamous attack on the Palestine Hotel - which, as the Pentagon knew, functioned as headquarters for about 100 media workers - when coalition forces rolled into Baghdad on April 8, 2003. To date, US authorities have not disciplined a single officer or soldier involved in the killing of a journalist, according to Project Censored. Meanwhile, the interim government the United States installed in Iraq raided and closed down Al-Jazeera's Baghdad offices almost as soon as it took power and banned the network from doing any reporting in the country. In November the interim government ordered news organizations to "stick to the government line on the US-led offensive in Fallujah or face legal action," in an official command sent out on interim prime minister Eyad Allawi's letterhead and quoted in a November report by independent reporter Dahr Jamail. And both American and interim government forces detained numerous journalists in and around Fallujah that month, holding them for days. Sources: "Dead Messengers: How the US Military Threatens Journalists," Steve Weissman, www.truthout.org, Feb. 28, 2005; "Media Repression in 'Liberated' Land," Dahr Jamail, Inter Press Service, Nov. 18, 2004. 8. Iraqi farmers threatened by Bremer's mandates Historians believe it was in the "fertile crescent" of Mesopotamia, where Iraq now lies, that humans first learned to farm. 
"It is here, in around 8500 or 8000 B.C., that mankind first domesticated wheat, here that agriculture was born," Jeremy Smith wrote in the Ecologist. This entire time, "Iraqi farmers have been naturally selecting wheat varieties that work best with their climate ... and cross-pollinated them with others with different strengths. "The US, however, has decided that, despite 10,000 years practice, Iraqis don't know what wheat works best in their own conditions." Smith was referring to Order 81, one of 100 directives penned by L. Paul Bremer III, the US administrator in Iraq, and left as a legacy by the American government when it transferred operations to interim Iraqi authorities. The regulation sets criteria for the patenting of seeds that can only be met by multinational companies like Monsanto or Syngenta, and it grants the patent holder exclusive rights over every aspect of all plant products yielded by those seeds. Because of naturally occurring cross-pollination, the new scheme effectively launches a process whereby Iraqi farmers will soon have to purchase their seeds rather than using seeds saved from their own crops or bought at the local market. Native varieties will be replaced by foreign - and genetically engineered - seeds, and Iraqi agriculture will become more vulnerable to disease as biological diversity is lost. Texas A&M University, which brags that its agriculture program is a "world leader" in the use of biotechnology, has already embarked on a $107 million project to "reeducate" Iraqi farmers to grow industrial-sized harvests, for export, using American seeds. And anyone who's ever paid attention to how this has worked elsewhere in the global South knows what comes next: Farmers will lose their lands, and the country will lose its ability to feed itself, engendering poverty and dependency. On TomPaine.com, Greg Palast identified Order 81 as one of several authored by Bremer that fit nicely into the outlines of a US "Economy Plan," a 101-page blueprint for the economic makeover of Iraq, formulated with ample help from corporate lobbyists. Palast reported that someone inside the State Department leaked the plan to him a month prior to the invasion. Smith put it simply: "The people whose forefathers first mastered the domestication of wheat will now have to pay for the privilege of growing it for someone else. And with that the world's oldest farming heritage will become just another subsidiary link in the vast American supply chain." Sources: "Iraq's New Patent Law: A Declaration of War Against Farmers," Focus on the Global South and Grain, Grain, October 2004; "Adventure Capitalism," Greg Palast, www.tompaine.com, Oct. 26, 2004; "US Seeking to Totally Re-engineer Iraqi Traditional Farming System into a US Style Corporate Agribusiness," Jeremy Smith, Ecologist, Feb. 4, 2005. 9. Iran's new oil trade system challenges US currency The Bush administration has been paying a lot more attention to Iran recently. Part of that interest is clearly Iran's nuclear program - but there may be more to the story. One bit of news that hasn't received the public vetting it merits is Iran's declared intent to open an international oil exchange market, or "bourse." Not only would the new entity compete against the New York Mercantile Exchange and London's International Petroleum Exchange (both owned by American corporations), but it would also ignite international oil trading in euros. 
"A shift away from US dollars to euros in the oil market would cause the demand for petrodollars to drop, perhaps causing the value of the dollar to plummet," Brian Miller and Celeste Vogler of Project Censored wrote in Censored 2006. "Russia, Venezuela, and some members of OPEC have expressed interest in moving towards a petroeuro system," he said. And it isn't entirely implausible that China, which is "the world's second largest holder of US currency reserves," might eventually follow suit. Although China, as a major exporter of goods to the United States, has a vested interest in helping shore up the American economy and has even linked its own currency, the yuan, to the dollar, it has also become increasingly dependent on Iranian oil and gas. "Barring a US attack, it appears imminent that Iran's euro-dominated oil bourse will open in March, 2006," Miller and Vogler continued. "Logically, the most appropriate US strategy is compromise with the EU and OPEC towards a dual-currency system for international oil trades." But you won't hear any discussion of that alternative on the six o'clock news. Source: "Iran Next US Target," William Clark, www.globalresearch.ca, Oct. 27, 2004. 10. Mountaintop removal threatens ecosystem and economy On Aug. 15 environmental activists created a human blockade by locking themselves to drilling equipment, obstructing the National Coal Corp.'s access to a strip mine in the Appalachian mountains 40 miles north of Knoxville. It was just the latest in a protracted campaign that environmentalists say has national implications but that's been ignored by the media outside the immediate area. Under contention is a technique wherein entire mountaintops are removed using explosives to access the coal underneath - a practice that is nothing short of devastating for the local ecosystem, but which could become much more widespread. As it stands, 93 new coal plants are in the works nationwide, according to Project Censored's findings. "Areas incredibly rich in biodiversity are being turned into the biological equivalent of parking lots," wrote John Conner of the Kat?ah branch of Earth First! - which has been throwing all its energies into direct action campaigns to block the project - in Censored 2006. "It is the final solution for 200-million-year-old mountains." Source: "See You in the Mountains: Kat?ah Earth First! Confronts Mountaintop Removal," John Conner, Earth First!, November-December, 2004. E-mail Camille T. Tiara at camille at sfbg.com. For the 15 runner-up stories, go to www.sfbg.com. From checker at panix.com Mon Sep 19 01:15:02 2005 From: checker at panix.com (Premise Checker) Date: Sun, 18 Sep 2005 21:15:02 -0400 (EDT) Subject: [Paleopsych] NYT: Disasters Waiting to Happen Message-ID: Disasters Waiting to Happen http://www.nytimes.com/2005/09/11/business/11disaster.html By LOUIS UCHITELLE MORE than a thousand miles of levees stretch east from San Francisco Bay. They protect the cities and the farmland in the Sacramento-San Joaquin delta of California and keep salt from the bay out of the drinking water of millions of people. But those levees are deteriorating, experts say, raising the odds of a Katrina-like disaster for the nation's most populous state. The delta and its maze of levees are high on the list of public infrastructure considered to be subpar. That list - which also includes highways, dams, ports and bridges - is growing as government outlays for repair lose out to budget cutting. 
The American Society of Civil Engineers, whose 137,000 members are involved in virtually every public works project undertaken in the United States, says that $1.6 trillion must be spent over the next five years to prevent further deterioration. Only $900 billion is now earmarked. Absent the additional spending, said Lawrence Roth, deputy executive director of the society, "every natural disaster is going to be more destructive than it needs to be." Infrastructure deteriorates in more than one way. There is lack of maintenance: roughly 13,000 highway fatalities each year, for example, are a result of inadequate maintenance of aging highways, the civil engineers say. And there is overuse: the levees in California, many of them built by farmers to convert marshes to farmland, now must be strengthened to prevent the destruction of newly built communities. "This is the fastest-growing region in California, and the bulk of that growth is taking place on the flood plains," said Jeffrey F. Mount, a geology professor and the director of the Watershed Center at the University of California, Davis. "What we are doing is creating our own New Orleans." To Professor Mount, there is too much deterioration of public infrastructure for the next disaster to be thought of as unheralded. Even before Hurricane Katrina hit the Gulf Coast, for example, there was wide public awareness of the potential crisis that New Orleans faced from its inadequate levees. Yet nothing happened. Why? Repairing or rebuilding infrastructure to protect the public is expensive, and the nation is spending less. Total government spending on public infrastructure - that is, spending by states and municipalities as well as by federal agencies - amounted to roughly 2.25 percent of the gross domestic product in the early 1950's. It rose to about 3 percent in the Eisenhower era of interstate highway construction and in the Kennedy and Johnson years. Through the 1980's and 1990's, however, infrastructure investment fell well below 2 percent, and is now just below that level. AS spending lost ground, standards changed. Current criteria rely more on a cost-versus-benefit formula, in contrast to the grander visions of those earlier years. Does the benefit from a public investment exceed the cost? Laced into that calculation is a gamble: benefits that rarely occur are not counted. A levee system in New Orleans capable of withstanding a Category 4 hurricane like Katrina is not counted because such storms are fairly rare. John Paul Woodley, the assistant secretary of the Army for civil works, says he struggles with this standard. He has jurisdiction over the Army Corps of Engineers, which takes the lead in building and maintaining public works along the nation's waterways, including thousands of miles of levees. For the first time, Mr. Woodley is pushing to postpone some projects so that more can be spent on others. "I have more works in progress than the budget can fund at an efficient level, and that is a problem," Mr. Woodley said in an interview. "I have two options then. I can rigorously prioritize and decide that some will be quickly finished and others postponed, or I can spread the money across all the projects, deferring the completion dates for all of them but postponing none. Our 2006 budget calls for suspension of a large number of projects in order to concentrate on a small number that will be completed efficiently." 
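The cost-versus-benefit gamble described above - benefits that rarely occur dropping out of the calculation - can be made concrete with a toy expected-value comparison. Every figure below (upgrade cost, storm probability, loss if the levees fail, analysis horizon) is invented for illustration; none of it comes from the Corps or from the article.

```python
# Toy expected-value comparison (all numbers invented): a rare benefit looks
# negligible on a short horizon but can dominate over the life of the structure.

def expected_avoided_loss(annual_probability: float, loss_if_event: float,
                          years: int) -> float:
    """Expected damage avoided over the horizon (simple, undiscounted)."""
    return annual_probability * loss_if_event * years

upgrade_cost = 2.0e9         # hypothetical levee upgrade: $2 billion
storm_probability = 1 / 100  # assume a 1-in-100-year storm
loss_if_breached = 100.0e9   # hypothetical damage if the levees fail

for horizon in (1, 30, 50):
    benefit = expected_avoided_loss(storm_probability, loss_if_breached, horizon)
    verdict = "exceeds" if benefit > upgrade_cost else "falls short of"
    print(f"{horizon:>2}-year horizon: expected avoided loss of "
          f"${benefit / 1e9:.0f}B {verdict} the ${upgrade_cost / 1e9:.0f}B cost")
```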
In California's delta, that approach translates into $20 million this year to improve the levees that protect Sacramento, the state capital, at the confluence of the American and Sacramento Rivers, and to strengthen the Folsom Dam, upstream from the city. But virtually no federal money is earmarked for other nearby levees that protect smaller communities, cropland and the conduits that bring drinking water to Southern California. California's state government struggles with that issue. The Schwarzenegger administration has proposed more spending on the levees than the Legislature has been willing to approve. Katrina raised awareness of the benefits to be reaped, but even before Katrina, a 1997 flood damaged or destroyed more than 30,000 homes and businesses, and a rupture in a levee last year resulted in $100 million worth of damage and repairs. "It is possible that we are not spending enough on levees, and it is also possible that the people moving in behind those levees are not being charged enough - for flood insurance, for example," said Joshua D. Angrist, an economist at the Massachusetts Institute of Technology. "If they were required to pay for flood insurance, they might not build there." Although the delta and its levees are gaining attention now, after Hurricane Katrina, there are near-disasters that never come to public attention. The Army Corps of Engineers, for example, shut down the Greenup Locks and Dam on the Ohio River, upstream from Louisville, Ky., two years ago for what the corps thought would be two weeks of routine maintenance. Closer examination, once the work started, revealed much more deterioration than anticipated and the locks remained closed for eight weeks, during which coal could not move up the river on barges to power plants that supply electricity to the Midwest. "We came very close to not having enough coal to power those plants," Mr. Woodley said. The coal did get through before stockpiles were exhausted, so in this case the public never noticed. But power failures would undoubtedly have produced plenty of public outcry. Locks on the Mississippi and Ohio Rivers shut down more often than Mr. Woodley would like. "If I had more money," he said, "I could reduce these shutdowns to a level that I might consider satisfactory." In this age of rationed public spending, the deterioration of vital public works is increasing, and with it the potential for disaster. The American Society of Civil Engineers, for example, reports that while federally owned dams are in good condition, more than 3,500 dams maintained by states and local governments are "unsafe" - and the number is rising. "There are a lot of these small dams, and they can do a lot of damage," said Douglas Holtz-Eakin, director of the Congressional Budget Office. "The Johnstown Flood is a famous example of a small dam failing and doing a lot of damage." (That flood, in Johnstown, Pa., in 1889, took more than 2,200 lives.) Something similar is happening with drinking water systems. Instead of an outlay of $11 billion annually, the amount the civil engineers consider necessary to replace aging pipelines and other facilities, federal spending is only $850 million. Highways and bridges also suffer from a shortfall. The highway bill signed by the president this summer stipulates an outlay of $286.4 billion over five years for both new construction and maintenance. Various Republicans, including Mr. Woodley, consider $318 billion as the minimum needed just for maintenance. States are filling some of the gap. 
Bridge tolls are rising in San Francisco, for example, to help finance a new eastern span for the San Francisco-Oakland Bay Bridge that is more earthquake resistant than the portion of the bridge damaged in the 1989 earthquake. "In these cases, people make a decision about the level of risk they want the infrastructure to withstand," said Ellen Hanak, a research fellow at the Public Policy Institute of California. And if the risk is considered great enough, they fund it. Assessing risks, the Army Corps of Engineers chose to spend $30 million this year to build or enlarge 57 miles of levee along the Mississippi River near New Orleans - but on the west bank, across from the city. Did the construction ameliorate the damage from Katrina? Mr. Woodley does not think so. "My initial impression," he said, "is that it protects an area not struck by the surge." From checker at panix.com Mon Sep 19 01:15:52 2005 From: checker at panix.com (Premise Checker) Date: Sun, 18 Sep 2005 21:15:52 -0400 (EDT) Subject: [Paleopsych] NS: Ant logic makes sense in space Message-ID: Ant logic makes sense in space http://www.newscientistspace.com/article.ns?id=mg18725165.200&print=true * 10 September 2005 A spacecraft skin is being developed that assesses the severity of any damage it suffers from space debris and other impacts. The project, which is inspired by the behaviour of ants, is seen as the first step towards a self-repairing craft. The team at CSIRO, Australia's national research organisation, is working with NASA on the project and has so far created a model skin made up of 192 separate cells. Behind each cell is an impact sensor and a processor equipped with algorithms that allow it to communicate only with its immediate neighbours. Just as ants secrete pheromones to help guide other ants to food, the CSIRO algorithms leave digital messages in cells around the system, indicating for instance the position of the boundary around a damaged region. The cell's processor can use this information to route data around the affected area. The team hopes to refine the system so it can distinguish between different types of damage, such as corrosion and sudden impacts, which might require a rapid repair job. Other groups are developing impact sensor systems controlled by a centralised processor. But such systems would fail if the area containing the processor were damaged. So a distributed system could be much more reliable, says Bill Prosser of NASA's Nondestructive Evaluation Sciences Branch in Langley, Virginia. NASA's ultimate aim is to create what it calls Ageless Aerospace Vehicles, which can detect, diagnose and fix damage (Robotics and Autonomous Systems, DOI: 10.1016/j.robot.2005.06.003). 
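The CSIRO algorithms themselves are not given in the article, but the neighbour-only "digital pheromone" idea it describes can be illustrated with a minimal sketch: each cell reads only its own impact sensor and its four immediate neighbours, and any undamaged cell bordering a damaged one marks itself as part of the boundary that routing can then steer around. The 192-cell grid matches the model skin; the sensor readings, threshold and damage pattern below are invented for illustration.

```python
# Minimal sketch (not the CSIRO/NASA code) of neighbour-only damage mapping:
# each cell deposits a "digital pheromone" marking the boundary of a damaged
# region so that data can be routed around it. No cell uses global information.
import random

ROWS, COLS = 12, 16            # 192 cells, as in the CSIRO model skin
DAMAGE_THRESHOLD = 0.7         # assumed impact-sensor threshold

# Fake impact-sensor readings; a real skin would read hardware sensors.
impact = [[random.random() * 0.2 for _ in range(COLS)] for _ in range(ROWS)]
for r, c in [(5, 7), (5, 8), (6, 7)]:      # pretend an impact hit these cells
    impact[r][c] = 0.95

def neighbours(r, c):
    """Immediate (4-connected) neighbours only - no global communication."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS:
            yield nr, nc

damaged = [[impact[r][c] > DAMAGE_THRESHOLD for c in range(COLS)]
           for r in range(ROWS)]

# Each undamaged cell asks only its neighbours whether they are damaged; if any
# are, it marks itself as boundary - the "pheromone" that other cells read when
# deciding where to route data.
boundary = [[False] * COLS for _ in range(ROWS)]
for r in range(ROWS):
    for c in range(COLS):
        if not damaged[r][c] and any(damaged[nr][nc] for nr, nc in neighbours(r, c)):
            boundary[r][c] = True

print(sum(row.count(True) for row in boundary), "cells mark the damage boundary")
```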
Weblinks: CSIRO http://www.csiro.au/ ; NASA http://www.nasa.gov/home/index.html?skipIntro=1 ; Nondestructive Evaluation Sciences, NASA http://nesb.larc.nasa.gov/ From checker at panix.com Sun Sep 18 01:26:33 2005 From: checker at panix.com (Premise Checker) Date: Sat, 17 Sep 2005 21:26:33 -0400 (EDT) Subject: [Paleopsych] BBC: Age prejudice 'ubiquitous in UK' Message-ID: Age prejudice 'ubiquitous in UK' http://news.bbc.co.uk/2/hi/science/nature/4220228.stm Get to 49 and it seems you could be "over the hill" - at least that's how many of us perceive it. Forty-nine is the age at which, on average, people in a Kent University study said that youth came to an end. The UK-wide survey has thrown up some interesting facts on how we view people who are older or younger than us, and just how far our prejudice extends. And it is clear that teenagers as well as pensioners can sometimes feel put down because of their age. "We shouldn't forget that one of the important targets of ageism is young people. They feel very aggrieved about the stereotypes that portray them as nasty yobs who are drunk all the time," Dominic Abrams, a professor of social psychology at Kent, said. His study was conducted for the charity Age Concern. Some of the results have been released here at the British Association's Festival of Science. They come from detailed interviews with 1,843 people over the age of 16, and they appear to show that age prejudice is ubiquitous in British society. More people (29%) reported suffering age discrimination than any other form of discrimination. "Ageism is the most pervasive form of prejudice in Britain today," Dominic Abrams said. "Ageism is the form of prejudice experienced most commonly by people in the UK and that seems to be true pretty much across gender, ethnicity, religion, disability - people of all types experience ageism, and indeed people of all ages experience ageism." The study reveals just how strongly perception of ageing is related to the age of the perceiver, and - to a degree - to the sex of the perceiver, too. For example, the arrival of old age recedes into the distance as one gets older. So, if you are a 24-year-old man, you think old age arrives at 55; but if you are a 62-year-old woman, you consider youth to end at 57. The fact that criteria "float" makes the task of detecting and tackling ageism particularly challenging, according to Professor Abrams, and has to be considered when developing strategies to undo the usual stereotypes - that old people are "doddery but dear" or that young people are "shallow and callous". Some other findings from the interviews: from age 55 onwards, people are nearly twice as likely to have experienced age prejudice as any other form of discrimination; nearly 30% of people believe there is more prejudice against the old than five years ago, and that this will continue to get worse; one third of people think that the demographic shift towards an older society will make life worse in terms of standards of living, security, health, jobs and education; and one in three respondents said they viewed the over-70s as incompetent and incapable.
One key point is that a half of all people under the age of 24 have no friends over 70, and vice versa. And the data shows that those without intergenerational friendships are also more likely to hold negative beliefs about the competence of people over 70. "Inter-group contact and positive relationships across the generations seem to be an important mechanism for combating ageist stereotypes," Professor Abrams said. From checker at panix.com Sun Sep 18 01:25:00 2005 From: checker at panix.com (Premise Checker) Date: Sat, 17 Sep 2005 21:25:00 -0400 (EDT) Subject: [Paleopsych] WSJ: Katrina, Juliana and Wilhelmina Message-ID: And here's some historical background. Katrina, Juliana and Wilhelmina http://www.opinionjournal.com/extra/?id=110007241 ACROSS THE POND Katrina, Juliana and Wilhelmina Lessons from the Dutch deluge of 1953. BY SIMON ROZENDAAL Sunday, September 11, 2005 12:01 a.m. EDT Many Dutchmen, shocked by the devastation caused in the U.S. by Hurricane Katrina, were reminded of what happened to our own country more than 50 years ago. On Feb. 1, 1953, the southwestern part of the Netherlands was struck by a flood of biblical proportions. The Dutch levee system collapsed in 500 places. There was nowhere to hide. More than 1,800 people drowned, together with tens of thousands of cattle and other animals. Some 4,000 houses were destroyed, and 40,000 were severely damaged. About 100,000 people had to evacuate, out of a population of around 12 million. The Dutch had suffered catastrophic floods before, but the deluge of 1953 was a different kind. Just consider that twice as many people were killed in the flood as during the infamous German bombing of Rotterdam in 1940. The nation was stunned. Older Dutchmen from the southwestern islands still get tears in their eyes when they talk about how they lost loved ones during what is simply called "the disaster." The Dutch reaction was: Never again. The government decided to give the southwestern and most vulnerable part of the country the best possible protection. Eleven massive dams, sea walls and sluices were created in waters that sometimes look more like a sea than a river. The hydraulic wall built in the vast Oosterschelde, for instance, is 5.6 miles long and rests on 65 concrete pillars about 43 yards tall. Its sluice-gate doors are usually open to protect the special habitat (partly seawater, partly freshwater) behind it, and are only closed when floods are imminent. Another wall, the Maeslantbarrier that completed the protection system, consists of two hollow doors--as long as the Eiffel Tower in Paris is tall, and four times as heavy--which are lying in docks on the banks of the Nieuwe Waterweg. In the event of extreme bad weather the docks are filled with water, and the gates float and are turned into the Nieuwe Waterweg where they seal off the river. In that way this barrier protects the city of Rotterdam and its surroundings, where about the same number of people live as did in greater New Orleans. This complex system of dams and barriers--called the Delta plan--is a technological achievement comparable maybe in its complexity and ambition to the American Apollo project that put a man on the moon. After all, the Delta plan was designed to protect the Netherlands from flood conditions that happen only once every 10,000 years! New Orleans, on the other hand, was protected only against hurricanes that occur every 50 years. 
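A quick calculation (an illustration, not a figure from the article) shows what those design standards imply. A "1-in-T-year" flood has an annual exceedance probability of roughly 1/T, so over a fixed horizon, and assuming independent years, the chance of at least one design-exceeding event differs enormously between the two standards.

```python
# Illustration (not from the article) of what the quoted return periods imply,
# assuming each year is independent with exceedance probability 1/T.

def prob_at_least_one(return_period_years: float, horizon_years: int) -> float:
    """P(at least one design-exceeding flood within the horizon)."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** horizon_years

for label, T in [("Dutch Delta plan, 1-in-10,000-year design", 10_000),
                 ("New Orleans levees, 1-in-50-year design", 50)]:
    print(f"{label}: {prob_at_least_one(T, 30):.1%} chance of exceedance in 30 years")
# Roughly 0.3% for the Delta plan versus about 45% for a 1-in-50-year design.
```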
The total cost of the Delta plan, which began in 1953 and was only completed a couple of years ago, amounted to $5 billion. There are other interesting similarities as well as striking differences between the Netherlands in 1953 and New Orleans in 2005. First, the geography. The Netherlands is an estuary. It was shaped by the sedimentation of three huge rivers--the Rhine, the Meuse and the Scheldt. Like New Orleans, two-thirds of the Netherlands is below sea level. The meteorological conditions were also similar, that is to say unique--a combination of spring tides, gale-force winds and deep depressions. At high tide, the sea level at Hoek van Holland usually is 31 inches above average; on the night of the disaster it was 150 inches above average. The storm also lasted unusually long: 33 hours without letup. As in the American Gulf states, the Dutch levee system had been neglected. It was not long after World War II; the Netherlands had just lost its colony, Indonesia; and the Cold War diverted money and attention. Yet the disaster was not unexpected. Experts had calculated that the sea could rise up to 13 feet, and six months before the storm, the well-respected Dutch engineer Johan van Veen warned that a terrible tragedy could happen. A major difference between 1953 and 2005, however, was the level of awareness. Even outside the U.S., people had information that Hurricane Katrina was headed for New Orleans. Most Dutch, rather poor in 1953, only had radio in those days. Telephones were rare, TV sets a curiosity. Two Dutch researchers, Uri Rosenthal and Geesje Saeijs, concluded in a 2003 study that the failure of the alarm system was the biggest fault of 1953. The Dutch meteorological service actually predicted the storm. There was a warning system, but only three of the more than 1,000 Dutch water boards, which for centuries used to take care of the dikes, had a subscription to it. The Netherlands was still rather religious in those days, and not much public activity was permitted on the seventh day of Creation. So the radio simply stopped broadcasting at midnight on Saturday, just before the storm gained strength. The population and local authorities were, therefore, utterly unprepared for what hit them on Sunday morning. Repopulating in some areas took up to two years. Several months after the flood, people still had to identify recently found corpses. The small holes in the dikes were fixed within weeks, but the big ones were still open half-a-year later and required special efforts. One hole near Rotterdam was closed by parking a ship in it, and several holes were repaired with the help of enormous Phoenix caissons, originally designed for the invasion of Normandy in 1944 and transported from Britain. Quite remarkable was the absence of naming and shaming. Nowadays, the Dutch Parliament routinely holds inquiries, but after the flood of 1953 all political parties (except for the Communists) stressed that it was not useful to look for "scapegoats," as one leading politician of those days called it. The disaster was, according to one member of the Dutch government, an "act of God." It took 30 to 40 years before researchers concluded that the big flood was also partly man-made. Dikes had been neglected, flood gates that should have been closed remained open, civil servants who were warned by the meteorological service slept through the storm--there was a general lack of both local and national leadership. A notable exception was the royal family. 
Within one day of the disaster, Queen Juliana and her mother, Queen Mother Wilhelmina, visited drowned areas, wading through the water with rubber boots. Many who had lost everything recounted in newspaper stories how important these symbolic gestures were. Even the left-wing newspaper Het Vrije Volk wrote: "The queen is everywhere these days. . . . Just by being there she gives hope." Mr. Rozendaal is the science writer for Elsevier, a Dutch weekly newsmagazine. From checker at panix.com Sun Sep 18 01:26:41 2005 From: checker at panix.com (Premise Checker) Date: Sat, 17 Sep 2005 21:26:41 -0400 (EDT) Subject: [Paleopsych] NYT: Brain May Still Be Evolving, Studies Hint Message-ID: Brain May Still Be Evolving, Studies Hint New York Times, 5.9.9 http://www.nytimes.com/2005/09/09/science/09brain.html [This is merely the first of such studies. Future ones will go into more taboo areas directly.] By [3]NICHOLAS WADE Two genes involved in determining the size of the human brain have undergone substantial evolution in the last 60,000 years, researchers say, leading to the surprising suggestion that the brain is still undergoing rapid evolution. The discovery adds weight to the view that human evolution is still a work in progress, since previous instances of recent genetic change have come to light in genes that defend against disease and confer the ability to digest milk in adulthood. It had been widely assumed until recently that human evolution more or less stopped 50,000 years ago. The new finding, reported in today's issue of Science by Bruce T. Lahn of the University of Chicago, and colleagues, could raise controversy because of the genes' role in determining brain size. New versions of the genes, or alleles as geneticists call them, appear to have spread because they enhanced brain function in some way, the report suggests, and they are more common in some populations than others. But several experts strongly criticized this aspect of the finding, saying it was far from clear that the new alleles conferred any cognitive advantage or had spread for that reason. Many genes have more than one role in the body, and the new alleles could have been favored for some other reason, these experts said, such as if they increased resistance to disease. Even if the new alleles should be shown to improve brain function, that would not necessarily mean that the populations where they are common have any brain-related advantage over those where they are rare. Different populations often take advantage of different alleles, which occur at random, to respond to the same evolutionary pressure, as has happened in the emergence of genetic defenses against malaria, which are somewhat different in Mediterranean and African populations. If the same is true of brain evolution, each population might have a different set of alleles for enhancing function, many of which remain to be discovered. The Chicago researchers began their study with two genes, known as microcephalin and ASPM, that came to light because they are disabled in a disease called microcephaly. People with the condition are born with a brain much smaller than usual, often with a substantial shrinkage of the cerebral cortex, that seems to be a throwback to when the human brain was a fraction of its present size. Last year, Dr. 
Lahn, one of a select group of researchers supported by the Howard Hughes Medical Institute, showed that a group of 20 brain-associated genes, including microcephalin and ASPM, had evolved faster in the great ape lineage than in mice and rats. He concluded that these genes might have had important roles in human evolution. As part of this study, he noticed that microcephalin and ASPM had an unusual pattern of alleles. With each gene, one allele was much more common than all the others. He and his colleagues have now studied the worldwide distribution of the alleles by decoding the DNA of the two genes in many different populations. They report that with microcephalin, a new allele arose about 37,000 years ago, although it could have appeared as early as 60,000 or as late as 14,000 years ago. About 70 percent of people in most European and East Asian populations carry this allele of the gene, but it is much rarer in most sub-Saharan Africans. With the other gene, ASPM, a new allele emerged 14,100 to 500 years ago, the researchers favoring a midway date of 5,800 years. The allele has attained a frequency of about 50 percent in populations of the Middle East and Europe, is less common in East Asia, and is found at low frequency in some sub-Saharan Africa peoples. The Chicago team suggests that the new microcephalin allele may have arisen in Eurasia or as the first modern humans emigrated from Africa some 50,000 years ago. They note that the ASPM allele emerged about the same time as the spread of agriculture in the Middle East 10,000 years ago and the emergence of the civilizations of the Middle East some 5,000 years ago, but say that any connection is not yet clear. Dr. Lahn said there might be a fair number of genes that affect the size of the brain, each making a small difference yet one that can be acted on by natural selection. "It's likely that different populations would have a different makeup of these genes, so it may all come out in the wash," he said. In other words, East Asians and Africans probably have other brain-enhancing alleles, not yet discovered, that have spread to high frequency in their populations. He said he expected that more such allele differences between populations would come to light, as have differences in patterns of genetic disease. "I do think this kind of study is a harbinger for what might become a rather controversial issue in human population research," Dr. Lahn said. But he said his data and other such findings "do not necessarily lead to prejudice for or against any particular population." A greater degree of concern was expressed by Francis S. Collins, director of the National Human Genome Research Institute. Dr. Collins said that even if the alleles were indeed under selection, it was still far from clear why they had risen to high frequency, and that "one should resist strongly the conclusion that it has to do with brain size, because the selection could be operating on any other not yet defined feature." He said he was worried about the way these papers will be interpreted. Sarah Tishkoff, a geneticist at the University of Maryland and a co-author of both studies, said the statistical signature of selection on the two genes was "one of the strongest that I've seen." But she, like Dr. Collins, said that "we don't know what these alleles are doing" and that specific tests were required to show that they in fact influenced brain development or were selected for that reason. Dr. 
Lahn acknowledges this point, writing in his article that "it remains formally possible that an unrecognized function of microcephalin outside of the brain is actually the substrate of selection." Another geneticist, David Goldstein of Duke University, said that the new study was "very well done," but that "it is a real stretch to argue for example that microcephalin is under selection and that that selection must be related to brain size or cognitive function." The gene could have risen to prominence through a random process known as genetic drift, Dr. Goldstein said. Richard Klein, an archaeologist who has proposed that modern human behavior first appeared in Africa because of some genetic change that promoted innovativeness, said the time of emergence of the microcephalin allele "sounds like it could support my idea." If the allele did support enhanced cognitive function, "it's hard to understand why it didn't get fixed at 100 percent nearly everywhere," he said. Dr. Klein suggested the allele might have spread for a different reason, that as people colonizing East Asia and Europe pushed north, they adapted to colder climates. Commenting on critics' suggestions that the alleles could have spread for reasons other than the effects on the brain, Dr. Lahn said he thought such objections were in part scientifically based and in part because of a reluctance to acknowledge that selection could affect a trait as controversial as brain function. The microcephalin and ASPM genes are known to be involved in determining brain size and so far have no other known function, he said. They are known to have been under strong selective pressure as brain size increased from monkeys to humans, and the chances seem "pretty good" the new alleles are continuing that, he said. Dr. Lahn said he had tested Dr. Goldstein's idea of alleles' spreading through drift and found it unlikely. From checker at panix.com Sun Sep 18 01:24:13 2005 From: checker at panix.com (Premise Checker) Date: Sat, 17 Sep 2005 21:24:13 -0400 (EDT) Subject: [Paleopsych] Independent: Ageism 'bigger problem than racism or sexism Message-ID: Ageism 'bigger problem than racism or sexism http://news.independent.co.uk/uk/this_britain/article310794.ece Ageism, affecting both the young and old, is the most widely experienced prejudice in Britain, according to the first major study into age discrimination. The survey found that ageism now eclipses racism, sexism and discrimination based on disability. The only group not to experience ageism are those people aged between 35 and 44 who are too old for negative youth stereotyping and too young for prejudice based on advancing years, the study found. Among the 43 per cent of the participants of the survey of 1,843 people who said they had experienced prejudice of some sort, 65 per cent said it included first-hand experience of age discrimination, said Dominic Abrams, professor of social psychology at the University of Kent. "Ageism is the form of prejudice that is experienced most commonly by people in the UK. It's the most pervasive form of prejudice; and that seems to be true pretty much across gender, across ethnicity, religion and disability - people of all types experience ageism," Professor Abrams told the Science Festival in Dublin. "Government legislation on equality and human rights needs to ensure that ageism is treated at least as seriously as all of the other forms of prejudice that it's tackling." 
The study found that both men and women suffer ageism and that their experience of it was greater even than the sexism experienced by women. Younger people also felt discriminated against because of their youth, although discrimination against elderly people was more widely felt, Professor Abrams said. "Age is in the eye of the beholder but age prejudice seems to be ubiquitous in British society. "More youthful is seen as more useful," he said. Those who took part in the study believed that older people are viewed as friendlier than younger people, while younger people were perceived as more competent and capable than older people. Professor Abrams said combating age discrimination would become increasingly important as the average age of the British population increased. From checker at panix.com Sun Sep 18 01:24:37 2005 From: checker at panix.com (Premise Checker) Date: Sat, 17 Sep 2005 21:24:37 -0400 (EDT) Subject: [Paleopsych] WP: Garreau: A Sad Truth: Cities Aren't Forever Message-ID: A Sad Truth: Cities Aren't Forever http://www.washingtonpost.com/wp-dyn/content/article/2005/09/09/AR2005090902448_pf.html A Sad Truth: Cities Aren't Forever Who, What, When, Where, Why? By Joel Garreau Sunday, September 11, 2005; B01 The city of New Orleans is not going to be rebuilt. The tourist neighborhoods? The ancient parts from the French Quarter to the Garden District on that slim crescent of relatively high ground near the river? Yes, they will be restored. The airport and the convention center? Yes, those, too. But the far larger swath -- the real New Orleans where the tourists don't go, the part that Katrina turned into a toxic soup bowl, its population of 400,000 scattered to the waves? Not so much. When Republican House Speaker Dennis Hastert said that it makes no sense to spend billions of federal dollars to rebuild a city that's below sea level, he added, "It looks like a lot of that place could be bulldozed." In the face of criticism, he hurried to "clarify" his remarks. But according to Washington lore, such a flap occurs when someone inadvertently tells the truth. New Orleans has had a good run for 287 years, but even before Katrina hit, the city was on the wane, as its steadily dropping population figures for decades have shown. All the brave rhetoric about the indomitable human spirit notwithstanding, we may want to consider some realities. As much as it causes heartache to those of us who love New Orleans -- the whole place, not just the one of myth and memory -- cities are not forever. Look at Babylon, Carthage, Pompeii. Certainly, as long as the Mississippi River stays within its manmade banks, there will be a need for the almost 200 miles of ports near its mouth. But ports no longer require legions of workers. In the 21st century, a thriving port is not the same thing as a thriving city, as demonstrated from Oakland to Norfolk. The city of New Orleans has for years resembled Venice -- a beloved tourist attraction but not a driver of global trade. Does the end of New Orleans as one of America's top 50 cities represent a dilemma of race and class in America? Of course. There are a lot of black and poor people who are not going to return to New Orleans any more than Okies did to the Dust Bowl. What the city of New Orleans is really up against, however, is the set of economic, historic, social, technological and geological forces that have shaped fixed settlements for 8,000 years. 
Its necessity is no longer obvious to many stakeholders with the money to rebuild it, from the oil industry, to the grain industry, to the commercial real estate industry, to the global insurance industry, to the politicians. If the impetus does not come from them, where will it come from? New Orleans, politically defined, is the 180.6 square miles making up Orleans Parish. (In Louisiana a "parish" is comparable to a county.) This place is roughly three times the size of the District of Columbia, though in 2004 it was less populated and its head count was dropping precipitously. The original reason for founding La Nouvelle-Orléans in 1718 was the thin crescent of ground French trappers found there. Hence the name "Crescent City." Elevated several feet above the Mississippi mud, it was the last semi-dry natural landing place before the open waters of the Gulf of Mexico. That crescent today is where you find all the stuff that attracts tourists, from the French Quarter, to the Central Business District (the "American Quarter") with the convention center and the Superdome, to the Garden District and Uptown. This area is roughly comparable to Washington from Adams Morgan through K Street to Georgetown and Foxhall Road. That tourist crescent is relatively intact. (Only two of the 1,500 animals at the Audubon Zoo died.) But it is only perhaps 10 percent of the city. The rest to the north of the river -- as distinct from the Algiers district on the south bank, which has always been something of an afterthought -- is under as much as 25 feet of water. For the last 90 years, this vast bulk of the city has required mammoth pumps to clear the streets every time it rains. This is where you'd find working folk -- cops, teachers and nurses -- with bathtub madonnas and colored Christmas tree lights. It's also where you would find areas of soul-destroying poverty, part of the shredding fabric of a city that had a poverty rate of 23 percent. Planners have warned for years that this area would be destroyed if the levees were ever breached. Yet, as novelist Anne Rice wrote of her native city a week ago: "The living was good there. The clock ticked more slowly; people laughed more easily; people kissed; people loved; there was joy. Which is why so many New Orleanians, black and white, never went north. They didn't want to leave a place where they felt at home in neighborhoods that dated back centuries . . . . They didn't want to leave a place that was theirs." Sentiment, however, won't guide the insurance industry. When it looks at the devastation here, it will evaluate the risk from toxicity that has leached into the soil, and has penetrated the frames of the buildings, before it decides to write new insurance -- without which nothing can be rebuilt. Distinct from Orleans Parish is the rest of metropolitan New Orleans, with a population of 850,000 -- twice that of the "city." These parishes, including Jefferson, St. Tammany, St. Bernard, St. Charles, St. John, Plaquemines and St. James, were hard hit. There was four feet of water in some expensive living rooms in Metairie. But they were not scenes of comparable devastation. Also distinct from the city are the region's ports, lining 172 miles of both banks of the Mississippi, as well as points on the Gulf. For example, the largest in the Western Hemisphere is the 54-mile stretch of the Port of South Louisiana. It is centered on La Place, 20 miles upriver from New Orleans. 
It moved 199 million tons of cargo in 2003, including the vast bulk of the river's grain. That is more than twice as much as the Port of New Orleans, according to the American Association of Port Authorities. The Port of Baton Rouge, almost as big as the Port of New Orleans, was not damaged. Also, downstream, there is the LOOP -- the Louisiana Offshore Oil Port out in the Gulf that handles supertankers requiring water depths of 85 feet. These ports are just a few of the biggest. Illustrating how different the Port of New Orleans is from the city, its landline phones were back in business a week ago, says Gary LaGrange, the port's president and CEO. "The river is working beautifully," he reports, and "the terminal's not that bad." Throughout the world, you see an increasing distinction between "port" and "city." As long as a port needed stevedores and recreational areas for sailors, cities like New Orleans -- or Baltimore or Rotterdam -- thrived. Today, however, the measure of a port is how quickly it can load or unload a ship and return it to sea. That process is measured in hours. It is the product of extremely sophisticated automation, which requires some very skilled people but does not create remotely enough jobs to support a city of half a million or so. The dazzling Offshore Oil Port, for example, employs only about 100 people. Even the specialized Port of New Orleans, which handles things like coffee, steel and cruise boats, only needs 2,500 people on an average day, LaGrange says. The Warehouse District was being turned into trendy condos. Compare that to the tourism industry, which employs about 25,000 people in the arts, entertainment, recreation, accommodation and food sectors -- some 5 percent of the city's former population, according to the census. New Orleans's economy is vividly illustrated by its supply of white-collar jobs. Its Central Business District has not added a new office building since 1989, according to Southeast Real Estate Business. It has 13.5 million square feet of leasable office space -- not much bigger than Bethesda/Chevy Chase, where rents are twice as high. The office vacancy rate in New Orleans is an unhealthy 16 percent and the only reason it isn't worse is that 3 million square feet have been remade as hotels, apartments and condominiums. There are no national corporations with their headquarters in New Orleans. There are regional headquarters of oil companies such as Chevron and ConocoPhillips, but their primary needs are an airport, a heliport and air conditioning. Not much tying them down. In the Central Business District you will also find the offices of the utilities you'd expect, such as the electricity company Entergy. But if you look for major employers in New Orleans, you quickly get down to the local operations of the casino Harrah's, and Popeye's Fried Chicken. Hardly a crying demand for a commercial entrepot. This is not the first time that harsh realities have reshaped cities along the Gulf of Mexico. The historic analogy for New Orleans is Galveston. For 60 years in the 1800s, that coastal city was the most advanced in Texas. It had the state's first post office, first naval base, first bakery, first gaslights, first opera house, first telephones, first electric lights and first medical school. Then came the hurricane of Sept. 8, 1900. As yet unsurpassed as the deadliest natural disaster in American history, it washed away at least 6,000 souls. 
Civic leaders responded with heroic determination, building a seawall seven miles long and 17 feet high. Homes were jacked up. Dredges poured four to six feet of sand under them. Galveston today is a charming tourist and entertainment destination, but it never returned to its old commercial glory. In part, that's because the leaders of Houston took one look at what the hurricane had wrought and concluded a barrier island might not be the best place to build the major metropolis that a growing east central Texas was going to need. They responded with an equally Lone-Star-scale project, the 50-mile-long Ship Channel. It made inland Houston a world port. In the wake of the Spindletop gusher that launched the Texas oil industry, Houston became the capital of the world petroleum industry. As the leaders of the "awl bidness" were fond of saying, "Don't matter if the oil is in Siberia or the South China Sea -- you buy your rig in Houston or dig for it with a silver spoon." Houston went on to become a finance, medical, university, biotech and now nanotech center. The first word from the surface of the moon was not "Galveston." It was "Houston?" What will New Orleans be known for in 100 years? How a city responds to disaster is shaped both by large outside forces and internal social cohesion. Chicago rebuilt to greater glory after the fire of 1871 destroyed its heart. San Franciscans so transformed their city after the earthquake and fire of 1906 that nine years later they proudly hosted the Panama-Pacific International Exposition to toast the Panama Canal and their own resurrection. Not long ago, I co-taught a team of George Mason University students in a semester-long scenario-planning course aimed at analyzing which global cities would be the winners and losers 100 years from now. The students were keenly aware of the impact that climate change might have on their calculations, among hundreds of other factors. Yet in the end they could not bring themselves to write off such water cities as New York and Tokyo. They simply wouldn't bet against the determination and imagination of New Yorkers and the Japanese. As someone put it at the time, "If it turned out New York needed dikes 200 feet high, you can just hear somebody saying, 'I know this guy in Jersey.' " Will such fortitude be found in New Orleans? In his 2000 book, "Bowling Alone," political scientist Robert Putnam measured social capital around the country -- the group cohesion that allows people to come together in times of great need to perform seemingly impossible feats together. He found some of the lowest levels in Louisiana. (More Louisianans agree with the statement "I do better than average in a fistfight" than people from almost anywhere else.) His data do not seem to be contradicted by New Orleans's murder rate, which is 10 times the national average. Not to mention the political candidates through the ages who, to little effect, have run on promises of cleaning up the corruption endemic to the government and police force. New Orleans is not called the Big Easy for nothing. This is the place whose most famous slogan is " Laissez les bons temps rouler" -- "Let the good times roll." I hope I'm wrong about the future of the city. But if the determination and resources to rebuild New Orleans to greater glory does not come from within, from where else will it come? Author's email : [2]garreauj at washpost.com Joel Garreau, a Post reporter and editor, is the author of "Edge City: Life on the New Frontier" (Doubleday). 
From checker at panix.com Sun Sep 18 01:24:45 2005 From: checker at panix.com (Premise Checker) Date: Sat, 17 Sep 2005 21:24:45 -0400 (EDT) Subject: [Paleopsych] Slate: Jack Shafer: Don't Refloat - The case against rebuilding the sunken city of New Orleans. Message-ID: Jack Shafer: Don't Refloat - The case against rebuilding the sunken city of New Orleans. http://www.slate.com/?id=2125810&nav=tap1/ What's to rebuild? Nobody can deny New Orleans' cultural primacy or its historical importance. But before we refloat the sunken city, before we think of spending billions of dollars rebuilding levees that may not hold back the next storm, before we contemplate reconstructing the thousands of homes now disintegrating in the toxic tang of the flood, let's investigate what sort of place Katrina destroyed. The city's romance is not the reality for most who live there. It's a [24]poor place, with about 27 percent of the population of 484,000 living under the poverty line, and it's a black place, where 67 percent are African-American. In 65 percent of families living in poverty, no husband is present. When you overlap this [25]New York Times map, which illustrates how the hurricane's floodwaters inundated 80 percent of the city, with this demographic [26]map from the Greater New Orleans Community Data Center, which shows where the black population lives, and [27]this one that shows where the poverty cases live, it's transparent whom Katrina hit the hardest. New Orleans' [28]public schools, which are 93 percent black, have failed their citizens. The state of Louisiana rates 47 percent of New Orleans schools as "Academically Unacceptable" and another 26 percent are under "Academic Warning." About [29]25 percent of adults have no high-school diploma. The police inspire so little trust that witnesses often [32]refuse to testify in court. University researchers enlisted the police in an [33]experiment last year, having them fire 700 blank gun rounds in a New Orleans neighborhood one afternoon. Nobody picked up the phone to report the shootings. Little wonder the city's homicide rate stands at 10 times the national average. This city counts 188,000 occupied dwellings, with about half occupied by renters and half by owners. The [34]housing stock is much older than the national average, with 43 percent built in 1949 or earlier (compared with 22 percent for the United States) and only 11 percent of them built since 1980 (compared with 35 for the United States). As we've observed, many of the flooded homes are modest to Spartan to ramshackle and will have to be demolished if toxic mold or fire don't take them first. New Orleans puts the "D" into dysfunctional. Only a sadist would insist on resurrecting this concentration of poverty, crime, and deplorable schools. Yet that's what New Orleans' cheerleaders--both natives and beignet-eating tourists--are advocating. They predict that once they drain the water and scrub the city clean, they'll restore New Orleans to its former "glory." Only one politician, Speaker of the House Dennis Hastert, dared question the wisdom of rebuilding New Orleans as it was, where it was. On Wednesday, Aug. 31, while meeting with the editorial board of the [35]Daily Herald of Arlington Heights, Ill., he cited the geographical insanity of rebuilding New Orleans. "That doesn't make sense to me. ... And it's a question that certainly we should ask." "It looks like a lot of that place could be bulldozed," Hastert added. For his candor and wisdom, Hastert was shouted down. Sen. Mary L. 
Landrieu, D-La., and others interpreted his remarks as evidence of the Republican appetite for destruction when it comes to disaster victims. But if you read the entire interview--reproduced [36]here courtesy of the Daily Herald--you might conclude that Hastert was speaking heresy, but he wasn't saying anything ugly or even Swiftian. Klaus Jacob seconded Hastert yesterday (Sept. 6) in a [37]Washington Post op-ed. A geophysicist by training, he noted that Katrina wasn't even a worst-case scenario. Had the storm passed a little west of New Orleans rather than a little east, the "city would have flooded faster, and the loss of life would have been greater." Nobody disputes the geographical and oceanographic odds against New Orleans: that the Gulf of Mexico is a perfect breeding ground for hurricanes; that re-engineering the Mississippi River to control flooding has made New Orleans more vulnerable by denying it the deposits of sediment it needs to keep its head above water; that the aggressive extraction of oil and gas from the area has undermined the stability of its land. "New Orleans naturally wants to be a lake," St. Louis University professor of earth and atmospheric sciences Timothy Kusky told [38]Time this week. "A city should never have been built there in the first place," he said to the [39]Atlanta Journal-Constitution. Why was it? Settlers built the original city on a curve of high flood land that the Mississippi River had deposited over eons, hence the nickname "Crescent City." But starting in the late 1800s and continuing into the early 20th century, developers began clearing and draining swamps behind the crescent, even dumping landfill into Lake Pontchartrain to extend the city. To chart the aggressive reclamation, compare this map from [40]1798 with this one from [41]1908. Many of New Orleans' lower-lying neighborhoods, such as Navarre, the Lower Ninth Ward, Lake Terrace, and Pontchartrain Park, were rescued from the low-lying muck. The Lower Ninth Ward, clobbered by Katrina, started out as a [42]cypress swamp, and by 1950 it was only half developed, according to the Greater New Orleans Community Data Center. Even such "high" land as [43]City Park suffered from flooding before the engineers intervened. By the historical standards of the 400-year-old city, many of the heavily flooded neighborhoods are fresh off the boat. The call to rebuild New Orleans' levee system may be mooted if its evacuated residents decide not to return. The federal government, which runs the flood-insurance business, sold only [44]85,000 residential and commercial policies--this in a city of 188,000 occupied dwellings. Coverage is limited to $250,000 for building property and $100,000 for personal property. Because the insured can use the money elsewhere, there is no guarantee they'll choose to rebuild in New Orleans, which will remain extra-vulnerable until the levees are rebuilt. Few uninsured landlords and poor home owners have the wherewithal to rebuild--or the desire. And how many of the city's well-off and wealthy workers--the folks who provide the city's tax base--will return? Will the doctors, lawyers, accountants, and professors have jobs to return to? According to the Wall Street Journal, many businesses are expected to relocate completely. Unless the federal government adopts New Orleans as its ward and pays all its bills for the next 20 years--an unlikely to absurd proposition--the place won't be rebuilt. 
Barbara Bush will be denounced as being insensitive and condescending for saying [45]yesterday that many of the evacuees she met in the Astrodome would prefer to stay in Texas. But she probably got it right. The destruction wrought by Katrina may turn out to be "creative destruction," to crib from Joseph Schumpeter, for many of New Orleans' displaced and dispossessed. Unless the government works mightily to reverse migration, a positive side-effect of the uprooting of thousands of lives will be to deconcentrate one of the worst pockets of ghetto poverty in the United States. Page One of today's [46]New York Times illustrates better than I can how the economic calculations of individuals battered by Katrina may contribute to the city's ultimate doom: In her 19 years, all spent living in downtown New Orleans, Chavon Allen had never ventured farther than her bus fare would allow, and that was one trip last year to Baton Rouge. But now that she has seen Houston, she is planning to stay. "This is a whole new beginning, a whole new start. I mean, why pass up a good opportunity, to go back to something that you know has problems?" asked Ms. Allen, who had been earning $5.15 an hour serving chicken in a Popeyes restaurant. New Orleans won't disappear overnight, of course. The French Quarter, the Garden District, West Riverside, Black Pearl, and other elevated parts of the city will survive until the ultimate storm takes them out--and maybe even thrive as tourist destinations and places to live the good life. But it would be a mistake to raise the American Atlantis. It's gone. ****** Apologies to Louis Armstrong, Fats Domino, Ernie K-Doe, Allen Toussaint, Tipitina's, Dr. John, Clarence "Frogman" Henry, Jelly Roll Morton, Jessie Hill, Lee Dorsey, the Meters, Robert Parker, Alvin Robinson, Joe "King" Oliver, Kid Stormy Weather, Huey "Piano" Smith, Aaron Neville and his brothers (falsetto is the highest expression of male emotion), Frankie Ford, Chris Kenner, Professor Longhair, Wynton Marsalis and family, Sidney Bechet, and Marshall Faulk. I await your hate mail at . (E-mail may be quoted unless the writer stipulates otherwise.) Related in Slate: Click [47]here for a full roster of Slate's Katrina coverage. Find out how New Orleans ended up below sea level in the first place by reading this [48]Explainer. Earlier this year, Timothy Noah looked at the history of Dennis Hastert's attempts to [49]claim his independence. For those looking for a new place to live, David Plotz weighed the pros and cons of [50]online city guides in 1997. And who can forget when Hastert speculated that George Soros got his money from drug dealers, as Jack Shafer [51]wrote up in "Press Box." Jack Shafer is Slate's editor at large. Remarks from the Fray: Despite all arguments, however reasonable, for not rebuilding New Orleans, it will be rebuilt. Why? Because it is not an arrangement, an idea, a rational proposal for housing its inhabitants. It is a three-hundred-year-old American city, a place whose long, haunting, and instructive history is borne into the future in the personal lives of the people who live there currently, people for many of whom the deep associations of place constitute meaning. And besides, the renaissance of New Orleans will be not just a boondoggle, but the Mother of All Boondoggles. 
Despite visionary notions already being floated -- New Orleans as an estuarine green space, a lovely green park at the mouth of the Mississippi, a restored and imaginatively engineered coastal wetlands, bulwark against the fury of Gulf storms -- nothing of the kind will happen. What will happen has already been rehearsed in lower Manhattan. If you followed closely the intricate waltz of the players in the reconstruction of the World Trade Center, the brutal back-room knife fights, the posturing and politics and deal-making and, in the end, the inevitable triumph of the dollar and its result -- the preposterous mediocrity well on its way toward construction -- well, now imagine the hustlers down south as the Really Big Dig gets under way. And who will win? I'm just guessing -- the oil guys, the petrochemical guys, the lumber guys, the developers and their protectors within the political Big Top. For these folks it really is The Big Easy. Y'all book early and come on down. --Longleaf Perhaps those who return to New Orleans, a smaller, sparser New Orleans to be sure, can re-envision our city. Re-development should be encouraged in the topographically highest points, rubble from the destruction should be "greened" and then pulverized for landfill to help raise lots. Neighborhoods should be re-platted to encourage mixed use, mixed SES, pedestrian and streetcar friendly "old-fashioned" neighborhoods anchored by green spaces and shiny new public schools. Builders should be encouraged to incorporate salvaged architectural elements and traditional "Creole" design--like raised cottages with dormer windows and functional window shutters--that will help the city retain its unique visual charm, and withstand potential future hurricanes better than more modern styles (like slab foundation ranch homes) which are hopelessly ill suited to the climate. Restoration of the coastal wetlands is an imperative project, the continued neglect of which will have ongoing catastrophic repercussions for the entire nation. The successful restoration of this frail and gorgeous eco-system could make the city of New Orleans and the entire Gulf region safer in the future, and fuel the renaissance of tourism. Eco-tourism in Louisiana? Who'dve thunk it? ... --shellybell68 ...You point out that New Orleans is uniquely vulnerable to hurricane damage. That is undisputed. But are not also any other cities on the Gulf Coast? Biloxi is not situated in a bowl, but it has been destroyed nonetheless. Do you advocate abandoning Biloxi? Would you advocate against rebuilding any city within striking distance of the hurricane breeding grounds? It is also undisputed that much of coastal California is uniquely vulnerable to earthquake damage. Would you advocate against rebuilding San Francisco when the next Big One hits? I don't think you would. The inevitability of another large earthquake in the San Francisco Bay Area is what spurs people to retrofit existing structures to withstand earthquakes. Similarly, if another big hurricane hit on New Orleans is inevitable, then this means not that New Orleans should be abandoned but that serious attention ought to be given to levee protection and coastal restoration - the issues that local officials have been screaming about for years. --James Why would anyone not want a newer home, with better infrastructure, in an area with better schools, away from assumptions and expectations that encourage failure? 
This is an opportunity for a great social experiment--allow people to choose where they want to go, free of cultural and economic restraints, and see how they make out. It's also an opportunity to restore the Mississippi delta by letting the river loose. The environmental benefits would be enormous. Why would the Left find this an offensive idea? ... --CDouglas (9/8)
References
24. http://www.gnocdc.org/orleans/income.html
25. http://www.nytimes.com/packages/html/national/2005_HURRICANEKATRINA_GRAPHIC/
26. http://www.gnocdc.org/maps/race/maps.cfm?nbhd_code=n22&nbhd_encoded=Lower%20Ninth%20Ward
27. http://www.gnocdc.org/mapping/docs/Poverty.pdf
28. http://www.gnocdc.org/orleans/education.html
29. http://www.gnocdc.org/orleans/edattainment.html
32. http://www.nola.com/speced/cycleofdeath/index.ssf?/speced/cycleofdeath/silent.html
33. http://www.post-gazette.com/pg/05233/556827.stm
34. http://www.gnocdc.org/orleans/housing.html
35. http://www.dailyherald.com/search/searchstory.asp?id=89670
36. http://www.slate.com/id/2125810/sidebar/2125827/
37. http://www.washingtonpost.com/wp-dyn/content/article/2005/09/05/AR2005090501034.html
38. http://www.time.com/time/magazine/article/0,9171,1101565,00.html
39. http://www.ajc.com/news/content/news/stories/0805/31natkatlevees.html
40. http://www.lib.utexas.edu/maps/historical/new_orleans_1798.jpg
41. http://www.lib.utexas.edu/maps/united_states/new_orleans_1908.jpg
42. http://www.gnocdc.org/orleans/8/22/snapshot.html
43. http://gnocdc.org/orleans/5/44/snapshot.html
44. http://sfgate.com/cgi-bin/article.cgi?file=/c/a/2005/09/02/MNGA9EHBL01.DTL&type=printable
45. http://www.nytimes.com/2005/09/07/national/nationalspecial/07barbara.html
46. http://www.nytimes.com/2005/09/07/national/nationalspecial/07home.html?ex=1283745600&en=e4fe54095332a66e&ei=5090&partner=rssuserland&emc=rss
47. http://www.slate.com/id/2125448/
48. http://www.slate.com/id/2125229/nav/tap2/
49. http://www.slate.com/id/2111723/
50. http://www.slate.com/id/3331/
51. http://www.slate.com/id/2106176/

From checker at panix.com Sun Sep 18 01:24:52 2005 From: checker at panix.com (Premise Checker) Date: Sat, 17 Sep 2005 21:24:52 -0400 (EDT) Subject: [Paleopsych] LAT: Joel Kotkin: A NEW New Orleans Message-ID: Joel Kotkin: A NEW New Orleans http://www.latimes.com/news/opinion/sunday/editorials/la-op-flood4sep04,0,3357784,print.story?coll=la-home-sunday-opinion 5.9.4 Forget crawfish étouffée -- look to ugly Houston for a vibrant economic model. Joel Kotkin, an Irvine Senior Fellow at the New America Foundation, is the author of "The City: A Global History" (Modern Library, 2005) BECAUSE THE OLD New Orleans is no more, it could resurrect itself as the great new American city of the 21st century. Or as an impoverished tourist trap. Founded by the French in 1718, site of the first U.S. mint in the Western United States, this one-time pride of the South, this one-time queen of the Gulf Coast, had been declining for decades, slowly becoming an antiquated museum. Now New Orleans must decide how to be reborn. Its choices could foretell the future of urbanism. The sheer human tragedy -- and the fact that the Gulf Coast is critical to the nation's economy as well as the Republican Party's base -- guarantee that there will be money to start the project. Private corporations, churches and nonprofits will pitch in with the government. 
But what kind of city will the builders create on the sodden ruins? The wrong approach would be to preserve a chimera of the past, producing a touristic faux New Orleans, a Cajun Disneyland. Sadly, even before Hurricane Katrina's devastation, local leaders seemed convinced that being a "port of cool" should be the city's policy. Adopting a page from Richard Florida's "creative class" theory, city leaders held a conference just a month before the disaster promoting a cultural strategy as the primary way to bring in high-end industry. This would be the easy, bankable way to go now: Reconstruct the French Quarter, Garden District and other historic areas while sprucing up the convention center and other tourist facilities. This, however, would squander a greater opportunity. A tourism-based economy is no way to generate a broadly successful economy. For decades before this latest hurricane, public life, including the police force, were battered by corruption and eroded by inefficiency. Now Katrina has brought into public view the once-invisible masses of desperately poor people whom New Orleans' tourist economy and political system have so clearly failed. Although the number of hotel rooms in the city has grown by about 50% over the last few years, tourism produces relatively few high-wage jobs. It encourages people to learn extraordinary slide trombone technique, develop 100 exquisite recipes for crawfish and keep swarms of conventioneers happy -- none of which are easy or unimportant tasks. But this economy does little to nurture the array of skills that sustain a large and diverse workforce. Contrary to Florida's precepts, having a strong gay community, lively street culture, great food, tremendous music and lively arts have not been enough to lure the "creative class" to New Orleans. The city has been at best a marginal player in the evolving tech and information economy. Meanwhile, the tourism/entertainment industry is constantly under pressure from competitors. Once, being the Big Easy in the Bible Belt gave New Orleans a trademark advantage. But the spread of gambling along the Gulf has eroded that semi-sinful allure. Mississippi's flattened casinos, with their massive private investment, will almost certainly rise years ahead of New Orleans' touristic icons. For all these reasons, New Orleans should take its destruction as an opportunity to change course. There is no law that says a Southern city must be forever undereducated, impoverished, corrupt and regressive. Instead of trying to refashion what wasn't working, New Orleans should craft a future for itself as a better, more progressive metropolis. Look a few hundred miles to the west, at Houston -- a well-run city with a widely diversified economy. Without much in the way of old culture, charm or tradition, it has far outshone New Orleans as a beacon for enterprising migrants from other countries as well as other parts of the United States -- including New Orleans. Houston has succeeded by sticking to the basics, by focusing on the practical aspects of urbanism rather than the glamorous. Under the inspired leadership of former Mayor Bob Lanier and the current chief executive, Bill White, the city has invested heavily in port facilities, drainage, sanitation, freeways and other infrastructure. At least in part as a result of this investment, this superficially less-than-lovely city has managed to siphon industries -- including energy and international trade -- from New Orleans. 
With its massive Texas Medical Center, it has emerged as the primary healthcare center in the Caribbean basin -- something New Orleans, with Tulane University's well-regarded medical school, should have been able to pull off. Attention to fundamentals has always been important to cities. Hellenistic Alexandria was built in brick to reduce fire dangers that terrified ancient urbanites, and it lived off its huge new man-made harbor. Rome built stupendous, elaborate water systems and port facilities to support its huge population. Amsterdam and the Netherlands provide particularly relevant examples, as they offer great urban culture at or below sea level. For centuries the Dutch have coped with rising water levels with ingenious engineering. In this century, the most notable example was the determined response to the devastating 1953 North Sea storm, which killed more than 1,800 people. Responding with traditional efficiency, the Dutch built a massive system of dikes, completed in 1998, which has helped them to remain among the most economically and culturally vibrant regions in Europe. Giving priority to basic infrastructure may not appeal to those who would prefer to patch the structural problems and spend money on rebuilding New Orleans as a museum, or by adding splashy concert halls, art museums and other iconic cultural structures. Ultimately, the people of the New Orleans region will have to decide whether to focus on resuscitating the Big Easy zeitgeist -- which includes a wink-and-nod attitude toward corruption -- or to begin drawing upon inner resources of discipline, rigor and ingenuity. Some may argue that such a shift would diminish New Orleans' status in cultural folklore as a corrupt but charming waif. Yet that old ghost is probably already gone. Even a rebuilt, reconfigured Latin Quarter would no doubt seem more Anaheim than antebellum. In contrast, a new New Orleans -- a city with a thriving economy, a city of aspiration as well as memory -- would in time create its own cultural efflorescence, this time linked as much to the future as the past. This should be the goal of the great rebuilding process about to begin.

From checker at panix.com Mon Sep 19 19:34:53 2005 From: checker at panix.com (Premise Checker) Date: Mon, 19 Sep 2005 15:34:53 -0400 (EDT) Subject: [Paleopsych] Walter H. Bradley: Why I am not a Mormon Message-ID: Walter H. Bradley: Why I am not a Mormon http://acts413.org/religions/mormon.htm "NOW WHEN THEY SAW THE BOLDNESS OF PETER AND JOHN, AND PERCEIVED THAT THEY WERE UNLEARNED AND IGNORANT MEN, THEY MARVELLED; AND THEY TOOK KNOWLEDGE OF THEM, THAT THEY HAD BEEN WITH JESUS." CHALLENGING PEOPLE TO THINK FOR THEMSELVES ABOUT LIFE'S MOST IMPORTANT QUESTIONS. WHY I AM NOT MORMON The day that Christ came into my heart, I covenanted with Him to accept the Bible as a guide in all things. I take for granted that my reader, if a Christian, has done the same. It was from the standard of the Word that I investigated Mormonism, seeking further light from God. I have no tirade to direct against Mormonism as a national menace, as do some. I am not jealous of her growing political power. I have no quarrel with her vast wealth and mercantile pursuits. I care not to hale forth the skeletons that may linger in the closets of her past. To the Christian, the question is not, how faulty are the lives of poor erring men? But, how closely are they trying to follow the Bible? 
Be their lives as faulty as they may, if they love the truth of God's Word, that truth will eventually sanctify their lives, and make them like the divine pattern. (John17:17). Judas was among the twelve, but that does not condemn Christianity. Satan and his host are transformed into ministers of righteousness (2 Corinthians. 11:14, 15), still his teaching is false. The Christian's sole question must be, "What saith the Scripture?" If they speak not according to this word, it is because there is no light in them." Isaiah 8:20. It is the Scripture alone that is "profitable for doctrine." 2 Timothy 3:15-17. In mentioning a few of the things that the Bible contradicts and condemns in Mormonism, I trust I may help some, and offend none but the son of perdition. GOD IS SUPERHUMAN The Bible is plain in portraying the super humanity of God. When the Almighty spoke to Moses from the burning bush, He proclaimed Himself to be Jehovah, the Self-existent One. This fundamental fact He reiterated and emphasized in many Scriptures. "Before Me there was no God formed, neither shall there be after Me. I, even I, am the Lord: and beside Me there is no Saviour." Isaiah 43:10, 11. "I am the first, and I am the last; and beside me there is no God. . . Is there a God beside me? Yea, there is no God; I know not any." Isaiah 44:6, 8. Then, as God, the Deity stands alone. Apart from the Godhead, no one ever was divine. God is the Creator, man the created, absolutely dependent upon God for every phase of his being. Any attributes of divinity man might ever have would only be imputed to him by God. Man did not have the power of creation (Psalm 100:3), and his very existence was conditional (Genesis 2:17). By sinning, Adam and all his family became mortal, subject to death, (Romans 5:12). The same as you and I, Adam had to experience a spiritual rebirth, and receive from Christ all he might ever hope to have or to be (1 Corinthians 15:22); and Adam's Creator said, "Beside me there is no Saviour." In contradiction to this, I find that Mormonism's highest authority makes God a mere man, the man Adam; and makes Adam the creator, the father of his own Saviour. I submit as proof of these startling teachings of the Latter-day Saints, only what may be found in the writings of the first presidency and the twelve apostles of the Mormon Church. That this is to the Mormon the highest of authority, witness the following: "It would seem altogether gratuitous and uncalled for on our part, to write a commendatory preface to the discourses of the First Presidency and The Twelve Apostles of this church. To the Saints their words are the words of God, their teaching fraught with heavenly wisdom, and their directions leading to the salvation and eternal lives. . . . The choicest fruit that can be culled from the tree of knowledge, suited to the taste of all who can appreciate such delicious food."--Preface to "Journal of Discourses," volumes 2, 4. Then surely I can gain a correct view of the doctrines of Mormonism by comparing the writings of these men with the Bible, which to me is the Word of God. SAYS ADAM IS GOD Place before you the Bible and the "Journal of Discourses." Open the Bible and read prayerfully the scriptures I have given. Now open the "Journal of Discourses," volume 1, page 50. Compare with the Bible what follows: "Now hear it, O inhabitant of earth, Jew and gentile, saint and sinner! When our father Adam came into the Garden of Eden, he came into it with a celestial body and brought Eve, one of his wives with him. 
He helped to make and organize this world. He is Michael, the archangel, the Ancient of Days about whom holy men have written and spoken. He is our Father and our God, and the only God with whom we have to do. Every man upon the earth, professing Christian or non-professing, must hear it, and will know it sooner or later." (Italics and capitals as in text). There can be no mistake here. Whereas God declares His superhuman divinity, Brigham Young declares the human divinity of the man Adam as the sole God of the human race. Not only does this teaching exalt a sinful man into the seat of God, but it hurls the Bible Jehovah from His heavenly throne. "God Himself was once as we are now and is an exalted man, and sits enthroned in yonder heavens! This is the great secret."--"Compendium," page 190, or 175. This comparison of Mormonism and the Bible is disastrous. No one can deny that the God of Mormonism is a man, a created man, a man who sinned! And this is one reason why I am not a Mormon. ADAM JESUS' FATHER That Adam is the father of his own Saviour must follow logically from such teaching, and is unblushingly preached by the same prophet, seer, and revelator. Denying the Bible statement that "that which is conceived (margin "begotten") in her is of the Holy Ghost. And she shall bring forth a son, and thou shalt call His name Jesus" (Matthew 1:20, 21; Luke 1:35), President Young teaches: "When the Virgin Mary conceived the child Jesus, the Father had begotten him in His own likeness. He was not begotten by the Holy Ghost. And who is the Father? He is the first of the human family. Now remember from this time forth and forever that Jesus Christ was not begotten by the Holy Ghost." And that he, in his position as prophet, seer, and revelator, and first president of the Mormon Church, considered this teaching as of utmost importance, Brigham Young continues, "Now let all who may hear these doctrines pause long before they make light of them, or treat them with indifference, for they will prove their salvation or damnation."--"Journal of Discourses." Volume 1, pages 50, 51. God forbid that I should treat this with indifference, or pass it by lightly; for not only does it make the man Adam the creator of his own Saviour, but without question, it teaches that Jesus was not the pure and Holy Son of God, but was the illegitimate offspring of an adulterous liaison between the man Adam, the first of the human race, and Mary, the lawful wedded wife of Joseph. This is another reason why I am not a Mormon. Not content with so degrading our blessed Lord in His birth, this system we are studying makes Him a lawbreaker in His life. The law that Jesus gave Moses, and which He kept on earth (Leviticus 18:18; John 1:1-3; 15:10), forbade a man's having two sisters as wives at the same time. President Hyde, of the Mormon church, asks himself the following question: "Then you really mean to hold to the doctrine that the Saviour of this world was married? Do you mean to be understood so? And if so, do you mean to be understood that He had more than one wife?" His answer follows on page 82: "We say it was Jesus Christ who was married (at Cana) to be brought into the relation whereby He could see His seed before He was crucified. . .I do not despise to be called a son of Abraham if he had a dozen wives; or to be called a brother, a son, a child of the Saviour if He had Mary, Martha, and several others as wives." "Journal of Discourses," Volume 2, pages 81, 82. 
THE DOCTRINE OF THE HOLY PRIESTHOOD Even more derogatory to the glory of Jesus, and yet of paramount importance to the Mormon system, is the doctrine of the holy priesthood. It is to them what the sun is to light. Without it, the gospel has no virtue, the dead cannot be resurrected, the living baptized, or men and women be married in the sight of God. Witness the words of Charles W. Penrose, first counselor to Heber J. Grant, first president, prophet, seer, and revelator of Mormonism today: "The ordinances of the gospel referred to in the previous tracts of this series, cannot be effectually administered without divine authority. . . .Baptism, even if solemnized according to the form and pattern followed by the Saviour and His appointed servants, will be of no avail and will not bring remission of sins, unless the officiating minister has received authority from the Deity to act in the name of the Father and of the Son and of the Holy Ghost. . . . This Divine Authority was called the Holy Priesthood. . .and was established in the Christian Church by the Saviour Himself."--"Rays of Living Light," page 17. "There are in the church two Priesthoods, namely the Melchisedec and the Aaronic. . . All other authorities or offices in the Church are appendages to this Priesthood. . . .The Latter Day Saints have this Priesthood, with its authority, ordinances, and blessing. How they have obtained it is a very important question."--"Compendium," page 64, 65. Even if their own estimate of the absolute necessity of this priesthood to their system is accepted, yet that priesthood is found to be strictly anti-biblical and anti-christian. I shall deal with three aspects of the question--the Aaronic priesthood, the Melchisedec priesthood, and the authority of both. The Aaronic priesthood, beginning with the calling of Aaron himself, was distinctly limited as to both duration and place. "Thou and thy sons with thee shall keep your priest's office for everything of the altar, and within the veil." Numbers 18:7. There they were to minister. They were not anointed priests until after the sanctuary had been anointed. As long as God had on earth a temple or a sanctuary, they were to be priests there, and no one else could have any right to priesthood. Any other than a son of Aaron coming near would be put to death. In this sanctuary, they were to offer the sacrifices of "bulls and goats" (Hebrews 9:1-4), for "every high priest is ordained to offer gifts and sacrifices (Hebrews 8:3)." Without these sacrifices, they could not be priests. If the time should ever come when God should remove His presence from the Aaronic temple and sanctuary, and the blood of bulls and goats should cease to have sacrificial value, then Aaron, with no ministry, no temple, and no sacrifice, would no longer be a priest. The Bible surely declares that time to have been at the first advent of Jesus. The Aaronic sanctuary "was a figure for the time then present," "imposed on them until the time of reformation" (Hebrews 9:9, 10), "till the seed should come to whom the promise was made" (Galatians 3:19). Christ is the Seed. (Galatians 3:16) The Seed came. (Galatians 4:4). The true sacrifice was offered. (Hebrews 9:14) No longer had the blood of bulls and goats any value. (Hebrews 10:8-14). The veil that had hidden Aaron's sacred priesthood from common eyes was rent in twain. (Matthew 27:51). The time of reformation had come, "Christ being come an High Priest." (Hebrews 9:11). 
"When that which is perfect is come, then that which is in part shall be done away." 1 Corinthians 13:10. The limits, as to both time and place, that the Bible had prescribed for Aaron's work, were reached in Christ, and Aaron's priesthood had to cease by limitation. Thus to claim now to have the Aaronic priesthood, is to deny the coming, the death, the sacrifice, and the priesthood of Christ. Hence I cannot be a Mormon. AARON'S PRIESTHOOD ONLY AN OBJECT LESSON This work of Aaron's, which ended when the priesthood of Jesus began, was simply an object lesson to teach the Jews of Christ's priesthood (Hebrews 8:1-5), and Aaron's priesthood had to cease before Christ's could even begin (Hebrews 9:8). "We have a great high priest, that is passed into the heavens, Jesus the Son of God." Hebrews 4:14. "If therefore perfection were by the Leviticus priesthood. . . .what further need was there that another priest should rise after the order of Melchisedec, and not be called after the order of Aaron." Hebrews 7:11. "The priesthood being changed." Hebrews 7:12. It was not continued subordinate, not perpetuated on earth in a lesser degree, but changed; changed because we must have a priest, and Aaron was a priest no longer. Christ, our new priest succeeding Aaron, is the "one mediator between God and men, and not priests. As there could, therefore, be but one Melchisedec priest, and as Christ, on the oath of God, was that one forever (Hebrews 7:21), it follows with inexorable logic "this man, because He continueth ever, had a priesthood that passeth not from one to another." Hebrews 7:24, margin. Then in their claim to have the Melchisedec priesthood on earth today, Mormonism tacitly declares that Christ has again died, and is still dead, and that His priesthood has passed to them! Do you wonder why I cannot be a Mormon? JOHN THE BAPTIST DEAD Now on the third aspect, the "authority," as Counselor Penrose says, "How they have obtained it is a very important question." It is indeed a very great question. I will grant their premise that the holy priesthood is all in all to Mormonism, and that without it, there could be no Mormonism. I will grant that if they have not the priesthood they claim, the whole movement is an imposture. I will even grant that if they do have this priesthood from God (which I have already proved cannot be), they are the true church of God, and all others are false. Surely I will; for the Bible clearly teaches that by no possibility could the authority, even of if did exist, have come to them in the way they claim. "Joseph Smith received a visitation from John the Baptist, who held authority in ancient times to preach and administer baptism for the remission of sins. He came as a ministering angel, and ordained Joseph Smith and his companion Oliver Cowdery, to that Priesthood and authority." Thus endowed, these young men baptized each other, and at a later date were ministered to by the Apostles Peter, James, and John, who ordained them to the apostleship with the authority to lay hands on, baptize believers and confer the gift of the Holy Ghost, also to build up and organize the Church of Christ according to the original pattern."--"Rays of Living Light," page 27. In the face of this assertion, the Bible tells me that John the Baptist, Peter, James, and John were dead. (Matthew 14:3-11; John 21:19, 23; Acts 12:2). For a complete Bible study of the condition of man in death, I refer you to another link on this web site. I cannot take space here to cover such a subject. 
I shall, however, quote enough Scripture to show that the Word of God teaches that dead men, good or bad, cannot come back to earth. "The dead know not anything." Ecclesiastes 9:5, 6, 10. "In death there is no remembrance." Psalm 6:5. "The dead praise not the Lord." Psalm 115:17. "His sons come to honor, and he knoweth it not." Job 14:21. "Man lieth down, and riseth not: till the heavens be no more, and they shall not awake." Job 14:12. "If I wait, the grave is my house." Job 17:13. Until our Lord Jesus Christ calls forth the dead in the resurrection, no one can be made alive. "In Christ shall all be made alive. But every man in his own order; Christ the firstfruits; afterward they that are Christ's at His coming." 1 Corinthians 15:21-23. The resurrection has not yet taken place. (1 Thessalonians 4:16, 17). John the Baptist, Peter, James, and John are dead. They know not anything. They have no remembrance. They are in the grave awaiting the great day of the resurrection. Till then, they cannot come back. They did not appear to Joseph Smith. WHO INSPIRED JOSEPH SMITH? Then who did? For I believe Joseph Smith to have been inspired by some superhuman power. Would good angels in lying deceit have impersonated these dead apostles? The Christian says No; so does the Bible. Who did come to him and deceive him? There can, to the just, be but one conclusion. Paul says that "Satan himself is transformed into an angel of light. Therefore it is no great thing if his ministers also be transformed as the ministers of righteousness." 2 Corinthians 11:13-15. As God could not deceive, nor would the holy angels do so, doubtless some of the evil angels transformed themselves into ministers of righteousness, and imposed on the Mormon prophet Joseph Smith. Counselor Penrose said, in the extract quoted, that they came as ministering angels--just as Paul said the devil would come. We might well expect this; for Jesus warned us that in these very times, "There shall arise false Christs, and false prophets, and shall show great signs and wonders; insomuch that, if it were possible, they shall deceive the very elect. . . . Wherefore if they shall say unto you, Behold, He is in the desert; go not forth: Behold, He is in the secret chambers; believe it not." Matthew 24:23-27. This same Bible truth of life only in Christ at the resurrection, also brands their baptism for the dead as spurious. A careful study of 1 Corinthians 15:12-29 will convince the truth seeker that it is the dead, resurrected, and ascended Christ for whom, or into whom (see lexicon or concordance), all believers are baptized. Any theory of a second probation, the Bible condemns as a Satanic potion that lulls the unwary past probationary repentance, into the vortex of the second death. The Bible also clearly shows God to be the Creator and not the begetter of mankind. Jesus is the only begotten. And it shows that Adam at creation, and his sons at birth, had their first existence; and that the Mormon doctrine of pre-existence is contrary to Scriptural teaching. Such evidence, you see, prevents my being a Mormon. But it does not prevent my pitying and loving the Mormon people. Can you not, Christian reader, love them and pray for them? Pray that "the faith which was once for all delivered unto the saints" (Jude 3, A.R.V.) may dawn upon the minds of these poor children of Adam, and that they may embrace "the Lamb of God, that taketh away the sin of the world," our High Priest, Jesus, the Son of God. 
"Great peace have they which love thy law" [Psalms 119:165] Contact us at: [11]saints at acts413.org From checker at panix.com Mon Sep 19 19:42:47 2005 From: checker at panix.com (Premise Checker) Date: Mon, 19 Sep 2005 15:42:47 -0400 (EDT) Subject: [Paleopsych] BBC: Brain chemistry link to anorexia Message-ID: Brain chemistry link to anorexia http://news.bbc.co.uk/2/hi/health/4215298.stm Scientists have produced evidence that the eating disorder anorexia nervosa is linked to disrupted brain chemistry. They have shown a form of the disorder is associated with an alteration of the activity of serotonin - a chemical linked to mood and anxiety. The University of Pittsburgh team hope their work could lead to the development of new drugs and psychological treatments. The study is published in Archives of General Psychiatry. The main symptom of anorexia nervosa is the relentless pursuit of thinness through self-starvation, driven by an obsessive fear of being fat. There are two sub-types. One simply involves restricting food intake, the other involves periods of restrictive eating alternated with episodes of binge eating and /or purging, rather like bulimia. The Pittsburgh team compared serotonin activity in women who had recovered from both sub-types of the disorder, with that in women who had never developed an eating disorder. Using sophisticated brain scans, they showed significantly higher serotonin activity in several parts of the brains of women who had recovered from the bulimia-type form of the disorder. Serotonin levels were also heightened in the group who had recovered from restricting-type anorexia, but not significantly so. However, the highest levels in this group were found among those women who showed most signs of anxiety. The researchers say their work suggests that persistent disruption of serotonin levels may lead to increased anxiety, which may trigger anorexia. However, they could not rule out the possibility that serotonin levels were altered by the malnutrition associated with the disorder. New treatments The researchers, led by Dr Ursula Bailer, said: "There are no proven treatments for anorexia nervosa, and this illness has the highest mortality of any psychiatric disorder. "These data offer the promise of a new understanding of the pathogenesis of anorexia nervosa and new drug and psychological treatment targets." Professor Janet Treasure, an expert in eating disorders at King's College London said other research had suggested eating disorders were linked to disrupted serotonin levels. She told the BBC News website: "The addition of drugs to psychotherapy for anorexia nervosa may be of help, especially in an outpatient setting, but adherence could be a problem as people with anorexia nervosa often are worried about taking drugs." The King's team work with the Pittsburg team in a large International study to seek answers to the questions about what can cause or cure eating disorders. They are currently seeking families in which more than one person has an eating disorder in order to define risk factors in the genes, in development and in the environment. From checker at panix.com Mon Sep 19 19:42:54 2005 From: checker at panix.com (Premise Checker) Date: Mon, 19 Sep 2005 15:42:54 -0400 (EDT) Subject: [Paleopsych] Psychological Science: Sexual Arousal Patterns of Bisexual Men Message-ID: Sexual Arousal Patterns of Bisexual Men Gerulf Rieger1, Meredith L. Chivers2 and J. 
Michael Bailey1 Psychological Science Volume 16 Issue 8 Page 579 - August 2005 doi:10.1111/j.1467-9280.2005.01578.x [I can supply the PDF.] First, the summary from the "Magazine and Journal Reader" feature of the daily bulletin from the Chronicle of Higher Education, 5.9.9: Bisexual men generally have a physiological response to one sex or the other, but not both, even when they report that they are psychologically aroused by both sexes, three researchers write. The research was conducted by Gerulf Rieger, a doctoral student in psychology at Northwestern University, and Meredith L. Chivers, a postdoctoral fellow in law and mental health at the Centre for Addiction and Mental Health in Toronto, under the supervision of J. Michael Bailey, a Northwestern professor of psychology whose research has often been controversial in the past (The Chronicle, December 1, 2004). The researchers showed images of two men having sex, or of two women, to 38 self-declared homosexual men, 33 bisexual men, and 30 heterosexual men. They found that the men who described themselves as bisexual generally had strong genital arousal to either the male or female sexual images, but not to both, even though the bisexual men reported that they had felt sexually aroused -- psychologically -- by both sets of images. "Male bisexuality," the authors write, "appears primarily to represent a style of interpreting or reporting sexual arousal rather than a distinct pattern of genital sexual arousal." So, while "in terms of behavior and identity, bisexual men clearly exist," skepticism about male bisexuality must "concern claims about bisexual feelings, that is, strong sexual attraction and arousal to both sexes." The authors say they undertook the research because, since the beginnings of the discipline of psychology, controversy has persisted about "whether bisexual men are substantially sexually aroused by both sexes." Clearly, they note, bisexual behavior in men exists, because many men certainly have had sex with both men and women, and, in a 1994 national survey, about 0.8 percent of American men described themselves as bisexual. But, the researchers say, their findings provide a new, more complicated picture of the subject. At the least, they say, "it is reasonable to ask whether male bisexual behavior reflects sexual arousal to both sexes." The divergence between genital and reported arousal is "intriguing because measures of genital and subjective arousal tend to be highly correlated in men," they say. They also note that earlier research suggests that bisexual men exaggerate their subjective arousal. The authors conclude: "With respect to sexual arousal and attraction, it remains to be shown that male bisexuality exists. Thus, future research should also explore nonsexual reasons why some men might prefer a bisexual identity to a homosexual or heterosexual identity." ---------------- Abstract There has long been controversy about whether bisexual men are substantially sexually aroused by both sexes. We investigated genital and self-reported sexual arousal to male and female sexual stimuli in 30 heterosexual, 33 bisexual, and 38 homosexual men. In general, bisexual men did not have strong genital arousal to both male and female sexual stimuli. Rather, most bisexual men appeared homosexual with respect to genital arousal, although some appeared heterosexual. In contrast, their subjective sexual arousal did conform to a bisexual pattern. 
Male bisexuality appears primarily to represent a style of interpreting or reporting sexual arousal rather than a distinct pattern of genital sexual arousal. Although bisexual behavior is not uncommon in men, there has long been skepticism that it is motivated by strong sexual arousal and attraction to both sexes. For example, the case studies of Krafft-Ebing (1886) suggest that most men with bisexual activity have sex with women because of social pressure but have sexual attraction exclusively or almost exclusively to men (Cases 127, 128, 135 [-] 153, and 167). Hirschfeld (1914/2001, pp. 197 [-] 215) speculated that most self-identified bisexual men are either heterosexual or homosexual and that men with substantial bisexual attractions are rare. Freund, who was a pioneer in measuring male genital arousal, wrote that, after assessing genital arousal in hundreds of men, he never found convincing evidence that bisexual arousal patterns exist (1974, p. 39). The existence of male bisexual attraction and arousal remains controversial and poorly understood (Fox, 2000; MacDonald, 2000; Zinik, 2000). BISEXUALITY: BEHAVIOR, IDENTITY, AND AROUSAL Sexual orientation refers to the degree of sexual attraction, fantasy, and arousal that one experiences for members of the opposite sex, the same sex, or both. Men's self-reported sexual orientation tends to be bimodal, with the large majority reporting exclusive sexual attraction to women and a minority reporting exclusive or near-exclusive attraction to men; the number of men who report substantial sexual attraction to both men and women is even smaller (Bailey, Dunne, & Martin, 2000; Diamond, 1993; Laumann, Gagnon, Michael, & Michaels, 1994). Patterns of sexual behavior (i.e., sexual contact with men or women) are certainly influenced by sexual orientation, but may diverge from it for various reasons, including limitations in opportunity (e.g., imprisoned men without access to women), stigmatization (typically against homosexuality), or material reasons, as in the case of prostitution (Gagnon, Greenblat, & Kimmel, 1999 ). Unquestionably, during the course of their lives, some men have sex with both men and women. One survey of homosexual men found that about 69% had also been sexually active with women (Bell, Weinberg, & Hammersmith, 1981). Furthermore, some imprisoned men say that they are heterosexual even though they engage in homosexual sex (Kirkham, 2000 ). Given these discrepancies between reported sexual orientation and sexual behavior in some men, it is reasonable to ask whether male bisexual behavior reflects sexual arousal to both sexes. Sexual identity refers to labels, including "homosexual,""heterosexual," or "bisexual," that individuals often give themselves (Sell, 1997). In a national survey, 0.8% of American men identified as bisexual (Laumann et al., 1994 ). There may be varied reasons why some men adopt a bisexual identity. For example, they may have intense sexual attraction to both men and women, or they might have sex partners of both sexes. Furthermore, men who adopt a homosexual identity might go through a stage in which they consider themselves bisexual. In one study, up to 40% of homosexual men defined themselves as bisexual before adopting a gay identity (Lever, 1994). In another study, most bisexual men shifted over time toward homosexuality; however, a small number shifted toward heterosexuality (Stokes, Damon, & McKirnan, 1997 ). 
This suggests that some bisexually identified men might have homosexual feelings (i.e., substantial attraction and arousal only to men), whereas others might have heterosexual feelings (i.e., substantial attraction and arousal only to women). In terms of behavior and identity, bisexual men clearly exist. Skepticism about male bisexuality must therefore concern claims about bisexual feelings, that is, strong sexual attraction and arousal to both sexes. The primary methodological challenge for investigating this issue is to employ a measure of sexual feelings that does not depend on self-report. At present, this is possible only for genital sexual arousal. MEASURING MALE SEXUAL AROUSAL Male genital arousal can be measured using a circumferential strain gauge that reflects the changes in penile girth during erection (Janssen, 2002 ). Homosexual men show substantially more genital arousal to sexual stimuli depicting men (male sexual stimuli) than to those depicting women (female sexual stimuli); heterosexual men have the opposite pattern ( Chivers, Rieger, Latty, & Bailey, 2004; Freund, 1963; Freund, Watson, & Rienzo, 1989; Sakheim, Barlow, Beck, & Abrahamson, 1985). Subjective sexual arousal is measured by self-report and is typically highly correlated with genital arousal in men (Sakheim et al., 1985 ). However, when self-report is suspect, genital arousal may provide a more valid measure. For example, genital arousal to stimuli depicting children is an effective method of assessing pedophilia, even among men who deny attraction to children (Blanchard, Klassen, Dickey, Kuban, & Blak, 2001). Few studies have investigated genital arousal among bisexual men. One study (McConaghy & Blaszczynski, 1991 ) measured genital sexual arousal to slides of nude men and women in 20 men with problematic sexual preferences (e.g., pedophilia, exhibitionism, bondage, and fetishism). The authors reported that the bisexual-identified men among their sample showed bisexual arousal. However, because of the heterogeneous study sample, and because the authors did not use rigorous statistical analyses to distinguish bisexual arousal from heterosexual or homosexual arousal, the study does not definitively demonstrate that bisexual men have bisexual arousal. Another study compared the genital arousal to male and female stimuli of 10 heterosexual, 10 bisexual, and 10 homosexual men (Tollison, Adams, & Tollison, 1979 ). Bisexual-identified men were indistinguishable from homosexual-identified men in their patterns of genital arousal. However, the group sizes in this study were relatively small, and thus the study may have lacked power to detect differences between the two groups. THE CURRENT STUDY We recruited self-identified heterosexual, bisexual, and homosexual men and assessed their genital and self-reported sexual arousal to male and female sexual stimuli. Our analyses investigated three hypotheses: Bisexual men are substantially aroused by both male and female stimuli. Bisexual men, like homosexual men, are much more aroused by male than by female stimuli. Bisexual men show a mixture of homosexual and heterosexual patterns of sexual arousal, with some having much more arousal to male stimuli and others having much more arousal to female stimuli. Note that these hypotheses are not mutually exclusive. METHOD Participants We advertised in gay-oriented magazines and an alternative newspaper in Chicago for "heterosexual,""bisexual," and "gay" men for a paid study of sexual arousal. 
Men who called the lab were asked about their sexual attraction toward men and women, so that their sexual orientation could be determined (see Measures and Procedure). Participants included 30 heterosexual men, 33 bisexual men, and 38 homosexual men, categorized on the basis of their answers to those questions. We also asked men to describe their sexual identity as straight, bisexual, or gay. Sexual attraction and sexual identity (converted to a numeric 3-point scale) were highly correlated, r= .95. Mean ages (standard deviations in parentheses) were 31.6 (5.9), 31.2 (5.4), and 30.6 (5.8), for the heterosexual, bisexual, and homosexual men, respectively. The percentage of Caucasian participants was 49.3, and this percentage did not vary significantly across groups. The heterosexual and homosexual participants were included in an earlier study (Chivers et al., 2004). Measures and Procedure The measures and procedure of this study were identical to those of our earlier study (Chivers et al., 2004), and the report of that study provides more detail. Sexual Orientation Sexual orientation was assessed via self-report using the Kinsey Sexual Attraction Scale (Kinsey, Pomeroy, & Martin, 1948 ). Participants provided separate Kinsey ratings for their sexual orientation during the past year and during adulthood. The mean of these two ratings was used in all analyses. "Heterosexual men" were defined as men with Kinsey Attraction scores less than or equal to 1, "bisexual men" had Kinsey Attraction scores greater than 1 and less than 5, and "homosexual men" had Kinsey scores greater than or equal to 5. Stimuli Participants viewed an 11-min, neutral, relaxing film (e.g., landscapes), followed by several 2-min sexual films, and another neutral film. Two of the sexual films depicted two men having sex with each other, and two of the films depicted two women having sex with each other. Sexual Arousal Genital arousal was assessed using a penile mercury-in-rubber gauge measuring circumference changes during erection (Janssen, 2002 ). Participants indicated subjective arousal by moving a lever forward to indicate increasing arousal and backward to indicate decreasing arousal. Data Analyses Because not all men become sufficiently sexually aroused for valid assessment, it is important to exclude nonresponders (Seto et al., 2001 ). We excluded participants whose genital response to any sexual stimuli was less than a 2-mm increase in penile circumference and whose subjective response was less than 5%, compared with response to neutral stimuli. The final sample contained 21 heterosexual, 22 bisexual, and 25 homosexual men with sufficient genital arousal for analyses and 24 heterosexual, 24 bisexual, and 31 homosexual men with sufficient subjective arousal. For each combination of participant and film clip, we computed mean genital and subjective arousal. Next, for each participant, we standardized genital and subjective arousal across film clips. Finally, we averaged the standardized genital arousal across female sexual stimuli in order to compute mean genital arousal to female stimuli; analogous calculations yielded mean genital arousal to male sexual stimuli and to the neutral stimulus, and mean subjective arousal to female and to male sexual stimuli, and to the neutral stimulus. Whenever arousal to sexual stimuli was used in analyses, we first subtracted arousal to the neutral stimulus. RESULTS Our first analyses examined whether men who report bisexual feelings have a bisexual arousal pattern. 
Men with strong bisexual arousal need not have precisely the same degree of arousal to both male and female stimuli. However, on average, their arousal to both male and female stimuli should be substantial. Furthermore, their arousal to male stimuli should exceed that of heterosexual men, and their arousal to female stimuli should exceed that of homosexual men. The hypothesis that bisexual men have bisexual arousal patterns thus implies a negative quadratic relation between self-reported sexual-attraction score (Kinsey score) and sexual arousal to the less arousing sex (Fig. 1a). Figure 1b shows that the predicted curvilinear relation did not occur for genital arousal. The quadratic model was nonsignificant, p = .68, β = -.05, ΔR² = .00. In contrast, bisexual men's subjective arousal did show the predicted curvilinearity; the negative quadratic relation was significant, p < .0001, β = -.56, ΔR² = .29 (Fig. 1c). Thus, we found no indication of a distinctly bisexual pattern of genital sexual arousal among bisexual men, although they did report a distinctly bisexual pattern of subjective sexual arousal. Notably, on average all men, regardless of their sexual orientation, showed significantly more genital arousal to their less arousing sex than they did to neutral stimuli; the 95% confidence interval for the curve in Figure 1b is above zero. However, the figure also shows that arousal to the less arousing sex was markedly lower than arousal to the more arousing sex. Our next analyses examined whether bisexual men tend to have homosexual arousal patterns, with greater arousal to male than to female stimuli. We computed a male-female contrast by subtracting each participant's arousal to female stimuli from his arousal to male stimuli; thus, higher scores indicate more arousal to men. If most bisexual men are primarily aroused by male stimuli, then there should be a negative quadratic relation between the participants' Kinsey scores and their arousal difference scores (Fig. 2a). With respect to genital arousal, the quadratic relation was significant (Fig. 2b; see Footnote 1); bisexual men were more aroused by male stimuli than by female stimuli, p < .01, β = -.21, ΔR² = .04. The analogous quadratic relation for subjective arousal was also significant (Fig. 2c), p < .01, β = -.16, ΔR² = .02; bisexual men reported greater arousal to male than female stimuli. Although these analyses suggest that bisexual men tend to show more arousal to male than to female sexual stimuli, inspection of Figure 2b suggests that not all do. Several men with Kinsey Attraction scores in the bisexual range tended to show most genital arousal to female sexual stimuli (i.e., their arousal contrast scores were negative). To investigate the hypothesis that bisexual men include a mixture of men with either homosexual or heterosexual arousal patterns, we computed the absolute residuals from the regressions shown in Figure 2. If this hypothesis is correct, then the residuals should be largest within the bisexual range of the Kinsey scale, and the relation between these residuals and Kinsey scores should be negative quadratic (Fig. 3a). This quadratic relation was significant for both genital arousal (Fig. 3b), p < .05, β = -.25, ΔR² = .04, and subjective arousal (Fig. 3c), p < .01, β = -.33, ΔR² = .10. 
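As a rough illustration of the analytic logic just described, the following minimal Python sketch fits the same kind of quadratic models. It is not the authors' code: the data are simulated, the variable names are invented for the example, and the statsmodels library is assumed for the regressions.

# Minimal sketch of the quadratic-regression logic (illustrative, not the authors' code).
# Hypothetical inputs: one row per participant, a Kinsey attraction score (0-6),
# and standardized, neutral-subtracted arousal to male and to female stimuli.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated stand-in data (90 hypothetical participants).
n = 90
kinsey = rng.uniform(0, 6, n)                     # 0 = exclusively heterosexual, 6 = exclusively homosexual
arousal_male = rng.normal(kinsey / 6, 0.3)        # toy pattern: arousal to male stimuli rises with Kinsey score
arousal_female = rng.normal(1 - kinsey / 6, 0.3)

less_arousing = np.minimum(arousal_male, arousal_female)   # quantity behind Fig. 1
contrast = arousal_male - arousal_female                   # male-female contrast behind Fig. 2

def quadratic_term(y, k):
    """Fit y = b0 + b1*k + b2*k**2 and return (estimate, p-value) for b2.
    A significant negative b2 means the outcome peaks in the middle
    (bisexual) range of the Kinsey scale."""
    X = sm.add_constant(np.column_stack([k, k ** 2]))
    fit = sm.OLS(y, X).fit()
    return fit.params[2], fit.pvalues[2]

# Hypothesis 1: arousal to the less arousing sex peaks for bisexual men.
print("less arousing sex:", quadratic_term(less_arousing, kinsey))

# Hypothesis 2: the male-female contrast is curvilinear across the Kinsey range.
print("male-female contrast:", quadratic_term(contrast, kinsey))

# Hypothesis 3: the spread (absolute residuals) of the contrast is largest mid-scale.
resid = contrast - np.poly1d(np.polyfit(kinsey, contrast, 2))(kinsey)
print("absolute residuals:", quadratic_term(np.abs(resid), kinsey))

In each test, a reliably negative coefficient on the squared Kinsey term corresponds to the inverted-U shapes in Figures 1a, 2a, and 3a: the quantity in question peaks in the bisexual range of the scale rather than changing monotonically across it.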
These results suggest that the bisexual men whose arousal patterns were least similar to those of homosexual men tended to have arousal patterns similar to those of heterosexual men. DISCUSSION Men who reported bisexual feelings did not show any evidence of a distinctively bisexual pattern of genital arousal. One must be cautious, of course, in drawing conclusions from negative results. However, the crucial analysis of arousal to the less arousing sex did not provide even a hint of the expected effect. On average, both homosexual and heterosexual men had much higher arousal to one sex than to the other, and this was equally true of bisexual men. To be sure, most men were more genitally aroused to stimuli depicting their less arousing sex than to neutral stimuli. This finding contradicts some prior research in which men's arousal to their less preferred sex was comparable to their response to a neutral stimulus (Freund, 1974; Freund, Langevin, Cibiri, & Zajac, 1973 ). This suggests that most men may possess a certain capacity for bisexual arousal, although the magnitude of this arousal is quite modest. In contrast to bisexual men's genital arousal, their subjective arousal did show the expected pattern. The divergence between results for genital and subjective arousal is intriguing, because measures of genital and subjective arousal tend to be highly correlated in men (Sakheim et al., 1985 ). For example, across all our participants, the correlation between the genital and subjective male-female contrasts was .85. These results suggest that with respect to their less preferred sex, either bisexual men's subjective arousal has been exaggerated or their genital arousal has been suppressed. An earlier study suggests that the former explanation is more likely. In this study, bisexual men, compared with heterosexual and homosexual men, had greater discrepancies between their objectively measured and subjectively estimated genital arousal, and this was primarily due to an overestimation of their erections to female stimuli (Tollison et al., 1979 ). This issue may be clarified by studies using emerging technology identifying brain activation patterns associated with sexual arousal (Barch et al., 2003; Hamann, Herman, Nolan, & Wallen, 2004 ). In principle, such activation patterns could have higher validity than penile erection or self-reported arousal as a measure of sexual arousal. In any case, our results suggest that male bisexuality is not simply the sum of, or the intermediate between, heterosexual and homosexual orientation. Indeed, with respect to sexual arousal and attraction, it remains to be shown that male bisexuality exists. Thus, future research should also explore nonsexual reasons why some men might prefer a bisexual identity to a homosexual or heterosexual identity. References Bailey, J.M., Dunne, M.P., & Martin, N.G. (2000). Genetic and environmental influences on sexual orientation and its correlates in an Australian twin sample. Journal of Personality and Social Psychology, 78, 524 [-] 536. Barch, B.E., Reber, P.J., Levitt, M.R., Gitelman, D.R., Parrish, T.B., Mesulam, M., & Bailey, J.M. (2003, November). Neural correlates of sexual arousal in heterosexual and homosexual men. Paper presented at the annual meeting of the Society for Neuroscience, New Orleans, LA. Bell, A.P., Weinberg, M.S., & Hammersmith, S.K. (1981). Sexual preference, its development in men and women. Bloomington: Indiana University Press. Blanchard, R., Klassen, P., Dickey, R., Kuban, M.E., & Blak, T. (2001). 
Sensitivity and specificity of the phallometric test for pedophilia in nonadmitting sex offenders. Psychological Assessment, 13, 118 [-] 126. Chivers, M.L., Rieger, G., Latty, E., & Bailey, J.M. (2004). A sex difference in the specificity of sexual arousal. Psychological Science, 15, 736 [-] 744. Diamond, M. (1993). Homosexuality and bisexuality in different populations. Archives of Sexual Behavior, 22, 291 [-] 310. Fox, R.C. (2000). Bisexuality in perspective: A review of theory and research. In B. Greene & G.L. Croom (Eds.), Education, research, and practice in lesbian, gay, bisexual, and transgendered psychology: A resource manual (pp. 161 [-] 206). Thousand Oaks, CA: Sage Publications. Freund, K. (1963). A laboratory method for diagnosing predominance of homo- or hetero-erotic interest in the male. Behaviour Research and Therapy, 1, 85 [-] 93. Freund, K., Langevin, R., Cibiri, S., & Zajac, Y. (1973). Heterosexual aversion in homosexual males. British Journal of Psychiatry, 122, 163 [-] 169. Freund, K., Watson, R., & Rienzo, D. (1989). Heterosexuality, homosexuality, and erotic age preference. Journal of Sex Research, 26, 107 [-] 117. Freund, K.W. (1974). Male homosexuality: An analysis of the pattern. In J.A. Loraine (Ed.), Understanding homosexuality: Its biological and psychological bases (pp. 25 [-] 81). New York: Elsevier. Gagnon, J.H., Greenblat, C.S., & Kimmel, M. (1999). Bisexuality: A sociological perspective. In E.J. Haeberle & R. Gindorf (Eds.), Bisexualities: The ideology and practice of sexual contact with both men and women (pp. 81 [-] 106). New York: Continuum. Hamann, S., Herman, R.A., Nolan, C.L., & Wallen, K. (2004). Men and women differ in amygdala response to visual sexual stimuli. Nature Neuroscience, 7, 411 [-] 416. Hirschfeld, M. (2001). Die Homosexualität des Mannes und des Weibes. Berlin: Walter de Gruyter. (Original work published 1914) Janssen, E. (2002). Psychophysiological measurement of sexual arousal. In M.W. Wiederman & B.E. Whitley, Jr. (Eds.), Handbook for conducting research on human sexuality (pp. 139 [-] 171). Mahwah, NJ: Erlbaum. Kinsey, A.C., Pomeroy, W.B., & Martin, C.E. (1948). Sexual behavior in the human male. Philadelphia: Saunders. Kirkham, G.L. (2000). Homosexuality in prison. In P.C.R. Rust (Ed.), Bisexuality in the United States (pp. 250 [-] 267). New York: Columbia University Press. Krafft-Ebing, R.V. (1886). Psychopathia sexualis. Stuttgart, Germany: Enke. Laumann, E.O., Gagnon, J.H., Michael, R.T., & Michaels, S. (1994). The social organization of sexuality: Sexual practices in the United States. Chicago: University of Chicago Press. Lever, J. (1994, August 23). Sexual revelations: The 1994 Advocate survey of sexuality and relationships: The men. The Advocate, pp. 18 [-] 24. MacDonald, A.P., Jr. (2000). A little bit of lavender goes a long way: A critique of research on sexual orientation. In P.C.R. Rust (Ed.), Bisexuality in the United States (pp. 24 [-] 30). New York: Columbia University Press. McConaghy, N., & Blaszczynski, A. (1991). Initial stages of validation by penile volume assessment that sexual orientation is distributed dimensionally. Comprehensive Psychiatry, 32, 52 [-] 58. Sakheim, D.K., Barlow, D.H., Beck, J.G., & Abrahamson, D.J. (1985). A comparison of male heterosexual and male homosexual patterns of sexual arousal. Journal of Sex Research, 21, 183 [-] 198. Sell, R.L. (1997). Defining and measuring sexual orientation: A review. Archives of Sexual Behavior, 26, 643 [-] 658. 
Seto, M.C., Adkerson, D.L., Hindman, J., Jensen, S.H., Peters, J.M., & Peterson, K.D. (2001). Practice standards and guidelines for members of the Association for the Treatment of Sexual Abusers (3rd ed.). Beaverton, OR: Association for the Treatment of Sexual Abusers. Stokes, J.P., Damon, W., & McKirnan, D.J. (1997). Predictors of movement toward homosexuality: A longitudinal study of bisexual men. Journal of Sex Research, 34, 304 [-] 312. Tollison, C.D., Adams, H.E., & Tollison, J.W. (1979). Cognitive and physiological indices of sexual arousal in homosexual, bisexual, and heterosexual males. Journal of Behavioral Assessment, 1, 305 [-] 314. Zinik, G. (2000). Identity conflict or adaptive flexibility? Bisexuality reconsidered. In P.C.R. Rust (Ed.), Bisexuality in the United States (pp. 55 [-] 60). New York: Columbia University Press. Footnote 1: The curvilinear effect remained even after excluding 3 homosexual men who showed more arousal to women than to men, p = .01, β = -.17, ΔR² = .03. (Received 8/27/04; Revision accepted 11/10/04) Affiliations: 1 Northwestern University and 2 Centre for Addiction and Mental Health, Toronto, Ontario, Canada. Correspondence: Address correspondence to Gerulf Rieger, Psychology Department, Northwestern University, 2029 Sheridan Rd., Swift Hall #102, Evanston, IL 60208; e-mail: gerulf at northwestern.edu. [Figure captions: Fig. 1, Sexual arousal as a function of sexual orientation; Fig. 2, Male-female arousal contrast (arousal to male sexual stimuli minus arousal to female sexual stimuli); Fig. 3, Absolute residuals from the regressions of Figure 2 as a function of sexual orientation.] To cite this article: Rieger, Gerulf, Chivers, Meredith L., & Bailey, J. Michael (2005). Sexual Arousal Patterns of Bisexual Men. Psychological Science, 16(8), 579-584. doi: 10.1111/j.1467-9280.2005.01578.x From checker at panix.com Mon Sep 19 19:42:59 2005 From: checker at panix.com (Premise Checker) Date: Mon, 19 Sep 2005 15:42:59 -0400 (EDT) Subject: [Paleopsych] Were Modern Humans Neighbors to Neanderthals? Message-ID: Were Modern Humans Neighbors to Neanderthals? http://www.washingtonpost.com/wp-dyn/content/article/2005/09/11/AR2005091100851_pf.html Dating of Modern-Style Artifacts in Famed Neanderthal Cave in France Refuels Debate About Possible Coexistence By Guy Gugliotta Washington Post Staff Writer Monday, September 12, 2005; A07 Sometime between 30,000 and 40,000 years ago, the Neanderthals abruptly disappeared after a run of perhaps 200 millennia in the Near East, west Asia and, most notably, in the ice age caves of Europe. On that score, there is no dispute. How this happened, and why, is another matter. For years, paleontologists have argued about whether anatomically modern humans invading from the east either wiped out the Neanderthals or out-innovated them; or, alternatively, whether Neanderthals and the invaders simply interbred to create today's Homo sapiens. This debate has taken on new virulence amid an accumulation of new, but still inconclusive, evidence. DNA analysis to date suggests that Neanderthals and modern humans are quite probably unrelated -- that Neanderthals were a distinct species altogether. However, archaeologists have shown in the past few years that modern human remains thought to be associated with human-made artifacts from the late Neanderthal era actually date from much more recent times. No one has found modern human remains buried with artifacts older than perhaps 32,000 years. 
The argument now is about whether Neanderthals were comic book characters -- not-quite-bright, club-carrying, knuckle-draggers who couldn't keep up with the invaders -- or, instead, simply a different people who somehow got sideswiped into extinction for some other reason. This mystery, central to the study of human culture during the Stone Age, is nowhere near resolution. "A lot of this discussion is about how we see our own relationship to these creatures," said Princeton University anthropologist Alan E. Mann. "I worry these discussions are becoming much less about science." Early this month, researchers poured more gasoline on the fire, reporting in the journal Nature on the results of new studies of a famous Neanderthal site at Chatelperron, in France. They said the new analysis of materials from old excavations showed that Neanderthals and modern humans coexisted in western Europe during the Neanderthals' waning days, and thus had "potential demographic and cultural interactions." Co-author Paul A. Mellars, a University of Cambridge archaeologist and leading proponent of the view that modern humans shoved aside the Neanderthals and eventually replaced them, said in a telephone interview that he knew "there would be screaming" after publication of the Nature paper. And there was. "It's hogwash," said Erik Trinkaus, an anthropologist at Washington University in St. Louis who is an advocate both of Neanderthal-modern human interbreeding and Neanderthals' ability to adapt and "modernize." The evidence is not convincing, Trinkaus said, and Mellars "is grasping at straws." The cave at Chatelperron, in central France, was first discovered in the 1840s during railroad construction, and it was excavated periodically through the rest of the 19th century. Today it has become archaeology's prototype late Neanderthal site. In the early 1950s, the famous French archaeologist Henri Delporte revisited the site and meticulously documented five levels of Neanderthal-era occupation. The most recent top three layers and the bottom-most layer had distinctively Neanderthal artifacts. But the fourth layer had modern human, or "Aurignacian," material -- including the "split-based point" of a weapon made from an antler and two ornaments crafted from perforated animal teeth -- typical of the artifacts attributed to early modern humans who spread across Europe perhaps a little more than 40,000 years ago. Mellars said this "interstratification" provided solid evidence that Neanderthals and modern humans had co-existed in Europe. Delporte, now dead, published two obscure papers on the findings, "but didn't make as much of it as he should have," Mellars said. "There was a deep-rooted conviction" that overlap between Neanderthals and modern humans had not occurred. Mellars joined Cambridge graduate student Brad Gravina, lead author of the Nature article, in reexamining the Chatelperron materials after Gravina found among the Delporte artifacts animal bones that could be dated by modern methods. Radiocarbon analysis of the bones showed the bottom Neanderthal level to be between 42,000 and 43,000 years old. The overlying Aurignacian level was between 41,000 and 42,000 years old, while the Neanderthal level on top of that was between 40,000 and 41,000 years old. Mellars suggested that the back-and-forth shift may have occurred because modern humans were better prepared to cope with a western European cold snap about 41,500 years ago. 
"When it got cold, the Neanderthals moved out and the modern humans moved in," Mellars said. "Anatomically Neanderthals were cold-adapted," he added, acknowledging that Neanderthals survived most of Europe's ice age, but modern humans probably had "better clothing and shelter, better fire control and better technological adaptation." Trinkaus, speaking in a telephone interview, disputed both the integrity of the site and the accuracy of the interpretation. He said Chatelperron was a "heavily damaged, classic site" that had been picked over for 150 years. He also noted that the Gravina-Mellars team had not dated the ornamental teeth or the antler, and "a butchered animal bone doesn't tell you anything." More important, he added, Chatelperron, like other contemporary European sites, had no modern human skeletal remains with the artifacts. "You cannot argue that these things were made by modern humans just because modern humans made that type of tool," Trinkaus said. "The implication is that Neanderthals were too stupid to do it themselves." Mellars noted Delporte's undisputed credentials as an excavator. Also, he said, "I have been looking at these stone tools for 45 years," and they have always been associated with modern humans. The teeth and antler point were not dated because they would have been damaged in the process. However, he acknowledged Trinkaus made "a fair point" about the lack of modern human skeletal remains at Chatelperron. Indeed, this has been the Achilles' heel of those who propound the contact-and-replacement theory. There are sites in Europe about 35,000 years old that do have bones of modern humans but no artifacts with them, and there are several sites, such as Chatelperron, that have artifacts purportedly made by modern humans -- but no bones. All the sites thought to have both have turned out to have remains of much more recent humans, almost certainly the result of later burials that were dug into older archaeological deposits. Either the oldest of Europe's modern humans didn't bury their dead, or archaeologists haven't yet found the bones. Or maybe modern humans weren't there, after all, and the early modern human artifacts were made by Neanderthals. From checker at panix.com Mon Sep 19 19:43:08 2005 From: checker at panix.com (Premise Checker) Date: Mon, 19 Sep 2005 15:43:08 -0400 (EDT) Subject: [Paleopsych] Policy Studies J.: Religion, policy, and secrecy: the Latter Day Saints and masons. Message-ID: Religion, policy, and secrecy: the Latter Day Saints and masons. Policy Studies Journal, Nov 2003 v31 i4 p669(10). Paul Rich and David Merchant AB On a vanished secret order: "Had politics, as in Masonry, been its main object, it would have held on with tenacity to its principles, as to the threads of life, and, disregarding its departure from sound morals, or patriotism, would still have contended, with the infatuation of a Mormonite, for the enjoyment, in secret, of that which in the eye of the public would overwhelm its members in confusions." "A Traveller in the United States", A Ritual and Illustrations of Free-Masonry and the Orange and Odd Fellows' Societies, Accompanied by Numerous Engravings, and a Key to the Phi Beta Kappa, S. Thrne, Devon (Shebbear, near Hatherleigh, England), 1835, 251. ------------- The spectacular growth of the Mormon denomination has been accompanied by church-state policy disputes, ranging from the rights of pro choice groups in the central square in Salt Lake City to the views of the Mormon hierarchy on gays in the military. 
For academics, how to handle the interface of policy issues with organized religions to which their students or colleagues may belong is a challenge. Secrecy is a particularly troublesome question. The Church of Jesus Christ of Latter Day Saints, the Mormons, has a long history of conflict with other groups in America over public policy issues. The most celebrated was the nineteenth-century confrontation over polygamy. Of all the issues that have arisen, perhaps the least discussed is the fact that Mormonism is in some respects a secret society and was strongly influenced by Freemasonry. The founding prophet of Mormonism, Joseph Smith, wrote in his History of the Church, "The secret of Masonry is to keep a secret." (Vol. 6, p. 59) (1) Even in this new millennium, both the Mormons and the Masons feel uncomfortable when their secrets are discussed. (2) But their views of the sanctity of secrecy are much further apart than they were a century ago. (3) (For the purposes of this discussion, the Salt Lake City branch of the Mormon church, which is the principal one, is being focused upon. However, there are a number of other Mormon sects.) (4) These days, American Masons are less secretive than they once were. For the Mormons, the secrecy of their temple activities remains one of the ingredients of what is a highly successful and rapidly growing movement, while for the Masons, secrecy is regarded increasingly as an albatross that helps account for a steep membership decline. That the subject of secrecy with regard to the Church of Jesus Christ of Latter Day Saints and with Freemasonry is still contentious is remarkable--it would seem an issue which would belong to a much earlier era which enjoyed covert ritualism. (5) For one thing, secrecy today is a public relations problem for any organization that wishes to grow in a twenty-first century environment of full disclosure and exposure. Moreover, the disenchanted have filled many library shelves with their exposés of both Masonry and Mormonism. (6) Neither movement really has secrets if its ceremonies are being discussed. Attitudes towards secrecy as practiced by a religion or lodge are ambiguous. What should public policy be towards secretive groups? Of course, we believe in policies favorable to freedom of association, and yet we are not quite sure that such freedom should include the existence of societies that may make what appear to be extreme demands on their members and that conduct their affairs in a covert fashion. (7) The confusing civil libertarian sentiments involved in tolerating such groups, which is the price we must pay for preserving a pluralistic and open society, are treated by Joseph Bensman and Robert Lilienfeld in their study Between Public and Private: The Lost Boundaries of the Self: "The double nature of voluntary associations is apparent in the fact that, while Masonry and all voluntary associations emphasize the freedom of their members from both narrower (the family) and wider (the state, the church, and the firm) institutions, they make demands for peer loyalties upon their members and subordinate them to a rich hierarchy of ritual and leadership within their own organization sphere." (8) The same observation might be made about Mormonism. (9) Secrecy then is an aspect of human behavior and especially of political behavior which we feel perpetually unsure about. It receives limited attention from policy journals, although "Secrecy touches our lives more than we generally like to admit. 
Most of our thoughts do not immediately get uttered. Some are simply forgotten, but others we purposely suppress. These are the secrets--perceptions, gossip, memories, dreams--that we keep back until time and audience are right. Some of these secrets contribute to a positive sense of self and to the harmonious continuation of our communities. Others create debilitating suspicions and uncertainties in self and society." (10) In considering policies towards secrecy, there is a danger of being a voyeur, inviting titillations over Mormon temple undergarments and Masonic aprons. We do well to keep in mind that it is not just Mormons and Masons who have secrets, for secrecy would seem to be part of everyone's personality, and in fact secrecy may be an essential part of a balanced personality. Kittredge Cherry remarks, "We all keep secrets. From neighborhood gossip to government scandal, the power of secrets is deeply woven into our culture. Some people disguise their age, hide their poverty or their wealth; others cover up illness, or the fact that they are in therapy. Deciding the best ways to share knowledge--when to hide and when to speak--is everybody's lifelong challenge." (11) The fact is that policies supporting secrecy are not always a bad thing. The confidentiality of the confession, or of discussions with a doctor or lawyer, is guarded legally in many societies, as are exchanges between a married couple--and indeed, clinical psychologists have sought to extend such protection to their discussions with their clients. Psychologically, the experience of secrets is part of maturing. A pro-secrecy authority remarks, "Children take a big step toward psychological maturity and identity when they first learn to keep a secret ... We also express our identity as much in what we hide as in what we reveal about ourselves." (12) Still, it is considered axiomatically a character defect to be too private and circumspect an individual; in extreme cases such as the billionaire Howard Hughes or the actress Greta Garbo (famous for her remark "I want to be alone."), the desire for personal secrecy slips over a line into what some regard as pathological behavior. Where do we get our attitudes that ultimately influence our support of policies towards secrecy and learn its uses? For some individuals, secrecy is largely learned in the family unit. But for others, their experience with secrecy is partly learned by joining a group such as the Mormons or the Masons, where solemn vows of secrecy are given. The group maintains its identity by sharing secrets amongst its members, but perhaps a neglected aspect is that it also cultivates the individual's sense of secrecy as a value and as a means of empowerment. (13) So possibly Mormonism and Freemasonry have been criticised too much for their secrecy. We can remember the delight in childhood of having secrets. The fact is that most of us still do enjoy the mystery of concealment. It still is an ingredient of our society, though abused, and much of the thoughtful psychological literature on the topic suggests that having secrets is part of a healthy personality. While not always appropriate, it is not the terrible thing that it has been made out to be, and we ought to think twice about treating it like a wayward relative. Ironically, there has been considerable contention between the Mormons and the Masons that is centered around the groups' secret rituals. Both movements have a history of exacting oaths of secrecy from their followers. 
Few if any organizations have been more adept at reconstruction in the face of adversity, and at creating another world and a magical empire, than have the Mormons and the Freemasons--and to an outsider their tempestuous relationship might seem, more than has been understood, that of rival siblings rather than deadly rivals. Like the Black Muslims, Mormonism is a religion which owes a great deal in its rites to Masonry. The qualification should be that it is not the Masonry of London coffee houses which became intertwined with Mormonism, but the lush ritualism of nineteenth-century American Freemasonry, which is another matter. Consider the ethos of nineteenth-century America when Mormonism made its appearance. While a number of recent studies call attention to the historical contributions of fraternal orders in the United States, including orders that did exist then, such as the Odd Fellows and Red Men, it is Masonry which was the pre-eminent secret society. At the very start of the nineteenth century, most Masonic lodges were relatively simple affairs, both in architecture and in ritual. It might be argued that, until that time, lodges had been more concerned with the world of ideas than with that of ceremony. Authorities such as Margaret Jacob, in studying their influence, have shown that the Masonic lodges of that earlier seventeenth- and eighteenth-century period were indeed intellectual and in some ways the progenitors of civil society despite their secrecy and their gender and social bias. However, the nineteenth century that gave birth to Mormonism also saw the creation of an enormous number of additional Masonic degrees and side organizations promising to confer more exalted honors and communicate ever more esoteric secrets. Much of this was deliberately arcane. For example, the Masons often turned to Egypt for a leitmotif, both for their dramas and for their buildings. (14) This is at variance with the British experience, and the British actually imported from America a number of the rites such as the Cryptic and Scottish. So it was in 1823, during a period of considerable interest in the cabalistic aspect of Masonry, and particularly in the Royal Arch degrees, that the brother of the Prophet Joseph Smith, Hyrum Smith, was initiated in Mount Moriah Lodge in Palmyra, only two miles from the Smith farm--the same year that the Prophet was first visited by an angel. The events surrounding the disappearance of Captain William Morgan, who had threatened to publish the secrets of the Masonic degrees, took place in 1826 in Batavia and Canandaigua, just fifteen miles from the farm. Henry Dana Ward published his exposé of Freemasonry in 1828, some months before the appearance of the Book of Mormon. The book was reviewed in the Wayne Sentinel, the newspaper published in Palmyra. In nearby Canandaigua, William Wines Phelps was publishing an anti-Masonic paper, the Ontario Phoenix. (15) The numerous exposés of the period would have made the stories in the Masonic degrees, such as the discovery of the golden plates in the vault under the temple, general knowledge. The use by the Masons of cyphers would be familiar, as would the idea of revelations hidden and then discovered. If one is looking for a secular origin of the story of Smith's discovery of golden plates, the Royal Arch degree of Masonry is suggestive. While Mormon temples are closed to non-Mormons and the activities that take place are protected by a veil of secrecy, in the Book of Mormon there is a great deal of negative comment about secrecy. 
Secret groups are seen as an affront to the Lord. They are oath-bound and of the devil. They can be interpreted as anti-Masonic, but since Joseph Smith later became a Mason, the evidence of his early knowledge of Freemasonry is contradictory. John Thompson comments, "While it would be folly to make entirely too much out of this, it must be said that if the early Mormons were viewed by their contemporaries as overwhelmingly antimasonic, the language of the Book of Mormon and other early writings of the Prophet must have tended to substantiate the charge. At the same time, we must not forget that, even in the very early stages of the church's life, there were men like Hyrum Smith and Heber C. Kimball in places of real authority and ministry in the Church, who were still, as far as anyone can tell, in good standing with the Craft. Without ever renouncing or seceding (in an age famous for both), these men were attracted to a Church that apparently had an antimasonic slant and yet were accepted for what they had to offer." (16) Scholars who are Mormons are restrained by the teachings of their denomination in the areas which they can question. In contrast, while there was a great deal of bogus Masonic history created in the nineteenth century that sought to legitimatize the ceremonies, eventually a more scientific spirit competed with the folkloric views, and some Masons, though certainly not all, became sharp critics of their own past. Writes Dr. Brent Morris, "The pathetic irony is that only one group today believes the tall tales ...--not the Grand Lodges, not the Scottish Rite, but the antimasons." (17) One must compare that view with the view that much Masonic writing still is of low value and accepts a great deal of crude mythology about the origins of the fraternity and the necessity for secrecy. As late as the 1960s, a Masonic authority would warn against open discussion of the similarity of Mormon and Masonic secret rituals: "Any discussion, in print, of esoteric Masonic material is prohibited by the Grand Lodge of Washington. Any detailed discussion of differences and/or similarities between the so-called secret ceremonies of the Mormon Church and those of Masonry must be within the tiled recesses of the Lodge." (18) Still, in contemporary Freemasonry, the questioning of origins and the study of ritual change is now quite permissible. The results may upset some Masons, but it would be unthinkable for a Mason to be suspended or dropped from membership for investigating Masonic degrees and believing that they had relatively modern origins. The same is not the case with Mormonism, where a member must believe in the divine origins of the scriptures discovered by Joseph Smith and in the secret rituals which he mandated for the Mormon temples. A history professor at Brigham Young University writes, "My name is David C. Wright ... I am NOT the David P. Wright who believes the Book of Mormon is a novel. I believe in and proclaim the historicity of the Book of Mormon. I believe there really was a Nephi, a Moroni, and yes, I believe that physical gold plates really existed and that Joseph Smith translated them by the gift and power of God." (19) The idea of rooms closed to the public where secret rites are practiced is part of both Masonry and Mormonism. Recently Masonic temples have been opened to the public. In an effort to improve public relations, there are open house functions when non-Masons can tour and be given an explanation of what goes on. 
Mormon temples, on the other hand, unlike local Mormon churches, are not open to the public. The popularity of Freemasonry was due partly to the fact that, in nineteenth-century America, fraternalism was a way by which men escaped from the profanum vulgus: once the lodge door was guarded, another world opened. This was a serious world. There is considerable difference in sophistication between the ritualistic activities of Victorian lodges and Hollywood depictions of fraternity initiations. The lodge in the nineteenth century was not a place for comic hijinks, but for the enactment of serious dramas teaching moral precepts. In 1842, the Prophet had led his flock to Nauvoo in Illinois and announced that a number of secret rites would be practiced in the newly-constructed temple. These included washings and anointings, the giving of a new and secret name to the candidate, the conferring of specially blessed undergarments, oaths of nondisclosure, and ritual dramas. (20) At the same time he and many Mormons became members of the Masonic lodge. The iconography of Mormonism began to include numerous Masonic symbols: "It is ironic that the Craft which did not obtain the beehive as a symbol from ecclesiastical sources, should have given it to a church--'given' only in the sense that the Craft venerated the beehive long before the Church of Jesus Christ of Latter Day Saints (Mormon) adopted it as one of their principal symbols. The Mormon history includes the Nauvoo, Illinois, episode, in which the Grand Lodge of that State first granted and then revoked the charters of lodges composed of Mormons. Doubtless that experience was at the root of Mormonism taking so much from Freemasonry, basing its Temple ceremonies upon the degrees and embracing so many Masonic symbols, including the beehive." (21) Joseph Smith had moved over the years from obliquely criticizing Freemasonry to using it as a source for Mormon rituals. An example of the Masonic affinities of the temple rites is the following, based on the revised 1990 ceremonies now used in the temples: "PETER: By our giving unto you the token and sign you received in the garden of Eden. ADAM: (grasping Peter by the right hand.) What is that? PETER: The First Token of the Aaronic Priesthood. ADAM: Has it a name? PETER: It has. ADAM: Will you give it to me? PETER: I cannot, for it is the New Name, and I have made a covenant not to disclose it, but this is the sign (Peter raises his right arm to square.) ADAM: Now I know that you are true messengers sent down from Father. (Adam turns and looks into the camera, addressing the patrons, while Eve stands smiling at his side). These are true messengers, I exhort you to give strict heed to their counsel and teaching, and they will lead you in the way of life and salvation." (22) Temple activities were and are, even as revised, strikingly similar to Masonic ones. John Brooke remarks in his important study of the origins of Mormonism, The Refiner's Fire: "Throughout the temple rituals themselves there were striking similarities with Masonic symbolism, especially those of the York rite, which was established at the Nauvoo Lodge. The temple garments, very similar to Masonic ceremonial garb, included an apron with the Masonic compass and square, which was also among the emblems of the temple veil. 
The language of the tokens and penalties of the Mormon priesthoods had exact parallels in freemasonry, progressing from parallels with the first three degrees of Entered Apprentice, Fellow Craft, and Master Mason to parallels with the Royal Arch and the higher degrees. Among these parallels to the first three degrees were the signs of the "five points of fellowship," the penalties for disclosure of secrets, and priestly handgrips and bodily signing ... ". (23) This has led to extensive discussion among Mormons as to whether there were common origins, i.e., that perhaps the Masons had a position similar to that of the Catholics and the Mormons were a group that had restored the pure ancient traditions. A Mormon Internet discussion group participant asks, "I have heard speculation that the Masons' ceremony has been passed down since the time of King Solomon's Temple. Any thoughts on this?" The reply is, "There continues to be speculation to this end within LDS circles. In part due to there being some evidence that Joseph Smith made similar statements. For example, there are personal letters and journal entries from Heber C. Kimball and Benjamin Johnson claiming that Joseph taught them that "Masonry was taken from the Priesthood but has become degenerated. But menny [sic.] things are perfect"--and "freemasonry was the apostate endowment as sectarian religion was the apostate religion.... Shortly after Joseph's death, being a practicing Mason became grounds for excommunication ... and continued for many decades. Likewise Mormons were forbidden from membership of many Masonic Lodges." (24) Another way in which Masonry has diverged from Mormonism in its practices is its inclusiveness. Masonic growth was accompanied by a growing disposition among Masons toward religious universalism, a heretical stance that was taken seriously by orthodox Christian critics and which again placed the Craft in opposition to the Mormon church, with its assertions about being the reformed and reorganized true version of Christianity. The suggestion has been repeatedly made that Masonry offered a more satisfactory spiritual experience for some men than did conventional religion, and enabled them to be religious while asserting their masculinity. That open acknowledgment of this at the time would have been disastrous to Masonry is an argument of Professor Mark C. Carnes of Columbia University in his book Secret Ritual and Manhood in Victorian America. Universalism continues to be a problem between Masonry and a variety of religions, including Catholicism and Mormonism. In summary, in the United States, Masonic policies have evolved and gradually moved away from the secrecy and pseudo-historical claims that once characterized the society. Secrecy was not abandoned, but it became less important. On the other hand, for members of the Church of Latter-day Saints, both the genuine historical truth of the revelations received by the Prophet and his successors, and the secrecy and validity of the temples, have remained cornerstones of faith. It is no wonder that devout Mormons continue to have considerable difficulties when confronted by Masonry, because it touches on precious theological precepts. A Masonic critic writes, "... the Church cannot tolerate any tarnishing of its supernal aura by a shadow cast by such a common, ordinary, mortal social entity of the world as freemasonry. Hence, its goal is to expunge any trace of the Ancient Order from every page of its history."
(25) There are many examples of the problems that arise when religious organizations surround their activities with a wall of sanctity. The right to secrecy is one issue at stake; so is the right of the academy to investigate religion in a scholarly way. The case of the Masons and the Mormons is a good illustration of just how complex these matters are. Notes (1.) Cf. Mervin B. Hogan, "Utah Masons Among the Mormons", Southern California Research Lodge, F. & A.M., n.d., 1. (2.) Rich has on two occasions had Mormon academics come up after a paper and suggest that it was not appropriate to discuss Mormon temple rites in public. On one occasion, a lady who was a devout church member added that certainly Masonry had originated in the time of Solomon and that there was ample evidence. Her position was strikingly like that of a Roman Catholic defending a Protestant churchman's position about the Divinity of Christ, while disagreeing with most other statements that the Protestant had made. On the other occasion, two Mormon academics who attended a panel came up afterwards to state quite definitely that there were things that should not be talked about because they were intensely personal, and among those things was the endowment service in the temple. (3.) This group used to be called the *Reorganized* Church of Jesus Christ of Latter-day Saints. I am not sure if they called themselves reformed. But it is true that they have moved a good deal towards a sort of "generic" Protestant flavor. They were originally founded, after Joseph Smith Jr.'s murder, by his widow (first wife) Emma Smith and her son Joseph III, and for a very long time--still?--the presidency of the Church was held by a lineal descendant of Smith (a practice based upon the text of a blessing he had given Joseph III promising him the mantle of prophecy, or words to that effect). But in 1995 there was no male heir, and Wallace B. Smith named W. Grant McMurray as his successor (there had been some talk, I think, of naming a female successor, but I may misremember). A similar crisis of descent happened in the LDS Church when the office of Church Patriarch was done away with because there was no lineal male descendant of Hyrum Smith, Joseph Jr.'s brother and fellow martyr, whose line had traditionally held the office. The RLDS eschewed polygamy (and do not officially commit themselves on the historical question of whether it was practiced). They built no temple until the 1980s, though, as was mentioned in another response, they do own the site of the original LDS Temple in Kirtland. This Temple, however, was somewhat different from every later Temple, beginning with the one in Nauvoo, Illinois, which later burned down. I believe it was in Nauvoo that the first Endowments were performed, and, coincidentally or not (I think not), it was also Nauvoo where Smith and others ascended the ranks of Scottish rite freemasonry, which contributed a bit of ceremonialism to the Temple service. The RLDS never went for these aspects of Smith's esotericism and seem to have quietly dropped them, and their Temple now is more or less a large meeting hall--I do not believe it has any particular liturgical function. However, they do continue to consider the Book of Mormon as scripture, as well as a number of revelations received by Joseph Smith Jr. and later RLDS presidents. I hadn't known the Strangites even still existed, much less that a rapprochement had occurred between them and the RLDS. There were a number of other splinter groups, and still are.
Some died out or were reabsorbed; but to this day there are indeed various sects of polygamous Mormons who live in a sort of detente with the rest of Utah's LDS population. (4.) Contrary to the impression the press might give, they are not all deranged practitioners of "blood atonement". They tend to aspire to a kind of communal living dictated by the "Law of Consecration," after the example of Acts 2. (5.) "Young Protestant middle-class men [in nineteenth-century America] sought their rituals not only in the fraternal and beneficiary lodges, but also in scores of voluntary associations with primarily religious, reform, political, or economic objectives. College fraternities are an obvious example, but they involved few men and their initiations were brief and underdeveloped. Fraternal initiation was more important in Mormonism, temperance societies, the Know-Nothings and the Knights of the Golden Circle, the Grange, labor and veterans' organizations, and the life insurance industry. Historians of each of these subjects have commented on the peculiar role of initiation, which they generally have attributed to practical purposes: initiation served to shield members from blacklisting, and fraternal life insurance firms used ritual to remind members to pay premiums. What is less appreciated is the extent to which founders and members regarded ritual as important in and of itself." Mark C. Carnes, Secret Ritual and Manhood in Victorian America, Yale University Press, New Haven and London, 1989, 6. See also Lynn Dumenil, Freemasonry and American Culture, 1880-1930, Princeton University Press, Princeton (New Jersey), 1984, 221. (6.) A point-by-point comparison of Mormon and Masonic rituals can be found at the "helping Mormons" website at http://www.helpingmormons.org/masendow.htm (7.) See Fritz Steele, The Open Organization: The Impact of Secrecy and Disclosure on People and Organizations, Addison-Wesley, Reading (Massachusetts), 1975, passim. (8.) Joseph Bensman and Robert Lilienfeld, Between Public and Private: The Lost Boundaries of the Self, The Free Press, New York, 1979, 86. (9.) Of course, I am aware that terms such as Mormon and Mormonism are informal, but they are used here for convenience. (10.) William W. E. Slights, Ben Jonson and the Art of Secrecy, University of Toronto Press, Toronto, 1994, 4. (11.) Kittredge Cherry, Hide and Seek, HarperSanFrancisco, 1991, 19. (12.) Cherry, 23. (13.) "We have a tremendous sense of belonging when we share hidden knowledge with another person or a select group ... Even the most mundane organizations may have secret handshakes and other codes so members can identify who falls within the boundaries of the group." Cherry, 25. (14.) While the inspiration for this has been credited to the presence of Masons on the Egyptian campaign of Napoleon Bonaparte and the discovery of the Rosetta Stone, as well as to the influence of Judaism--witness the synagogue designed by the Mason William Strickland for Philadelphia in 1825--it also is sometimes attributed to the enthusiasm aroused by the creation early in the century of two new Masonic organizations, the Rite of Mizraim (Mizraim simply being the plural of Egyptian) and the Rite of Memphis. Mizraim was organized in Milan in 1805 and moved to Paris by 1812.
It has ninety degrees, and although it claims to perpetuate the lost tradition of Egyptian hermetics, it is a confusing collection of rituals partly based on the kabbalah and alchemy as well as the so-called Scottish Rite. Mizraim and Memphis were merged in Italy in 1881 by the famous Italian patriot Giuseppe Garibaldi, but are now practiced both by separate and unified organizations. (Acetates of Misraim and of Memphis.) In England, John Yarker (1833-1913), who served Garibaldi as the grand chancellor in his confederation of Masonic degrees, introduced or re-introduced the Rite of Mizraim as well as other degrees with Egyptian themes, like Garibaldi attempting to combine Mizraim with Memphis. Other sources include Mozart's Magic Flute (1791), which in the production by Karl Friedrich Schinkel in Berlin in 1815 was given a fully Egyptian staging that would influence subsequent productions. One could go further back in history and assert that the interest in Egypt was prompted by Biblical studies or by the classical tradition and by claims that Greece and Rome were primarily indebted to Egypt. (15.) John E. Thompson, The Masons, the Mormons and The Morgan Incident, Iowa Research Lodge No. 2, Ames (Iowa), c.1982, 1-12. (16.) Thompson, 16-17. (17.) S. Brent Morris, "The Letter 'G'," The Plumbline, Scottish Rite Research Society, Vol. 1 No. 3, September 1992, 2. (18.) Harry L. Steinberg, "Mormonism and Masonry", Masonic Papers, Walter F. Meier Lodge of Research, Seattle (Washington), Vol. IV No. 8, August 1969, 101. (19.) Re: LA TIMES ARTICLE at IN%"MORMONL%BYUVM.BITNET at CMSA.BERKELEY.EDU" "(MORMON LIST)" 20-JUN-1993 (20.) Occultic and Masonic Influence in Early Mormonism, n.a., Institute for Religious Research, 1996, 2. (21.) "Symbol of Industry", n.a., The Short Talk Bulletin, Vol. XXXV No. 12, December 1957, The Masonic Service Association of the United States, 5. (22.) From "Temple ceremonies" at http://pages.ripco.net/~mattl/TempleCeremonies.htm (23.) John L. Brooke, The Refiner's Fire: The Making of Mormon Cosmology, 1644-1844, Cambridge University Press, 1994, 249. (24.) Re: temple text, copyright, Masons, etc. IN%"bunner at macc.wisc.edu" 6-JUL-1993 (25.) Hogan, 6. Paul Rich The Hoover Institution and University of the Americas-Puebla David Merchant Archer Huntington Fellow, Library of Congress From checker at panix.com Mon Sep 19 19:43:22 2005 From: checker at panix.com (Premise Checker) Date: Mon, 19 Sep 2005 15:43:22 -0400 (EDT) Subject: [Paleopsych] SW: On Life-History Invariance across Animal Species Message-ID: Evolution: On Life-History Invariance across Animal Species http://scienceweek.com/2005/sw050916-1.htm The following points are made by Gerdien de Jong (Science 2005 309:1193): 1) There is obvious variation in the way different animals live their lives -- in their life span, in their age and size at maturity, and in their size as full-grown adults, to name a few attributes. But are there fundamental similarities in the life history strategies that different animals use? Charnov [1] argued that there are: He proposed fundamental similarities -- "life history invariants" -- to be a major explanatory ingredient of life history evolution. Life history invariants generalize a life history model over species boundaries and over a wide range of animal sizes, leading to an understanding of universal life history strategies.
New work [2] calls into question the principal method to detect life history invariants: the authors have determined that the approach is misleading, throwing the very existence of the concept into doubt. 2) Life history invariants are dimensionless ratios of two life history traits -- for instance, age at maturity and average length of life. Such a ratio is used to answer questions such as "At what relative age do animals first reproduce?" Whether we talk about rabbits or whales, we hope the ratio will enable us to forget about differences in life span, size, environment, and taxonomy. Thus, life history invariants point to common properties of organisms not immediately clear from direct observation. As such, they are potentially very useful for understanding and modeling life history evolution: The models are meant to be general, doing away with the need to model each species separately. The existence of life history invariants is a major argument for one general theory of life history evolution, rather than a theory as a set of recipes for how to make species-specific models. 3) Life history invariants are canonically identified from a log-log plot of two life history traits involved in a dimensionless ratio. In such a plot, the slope is expected to equal 1. Consider two life history traits, a and b, and ask whether their dimensionless ratio a/b is a life history invariant. If their ratio is constant (c), a log-log plot with ln(b) on the x axis and ln(a) on the y axis would show points on a line defined by ln(a) = ln(c) + ln(b), with a slope of 1 and intercept ln(c) (see the top figure). The line is the regression line and the intercept can be used to estimate the life history invariant, c. A log-log plot of two traits involved in a life history invariant leads to a slope of 1 with all variation in the dependent variable on the y axis explained by the variable on the x axis (that is, R2 = 1 for an ideal invariant where R2 of the regression is the proportion of the variation in the dependent variable a explained by the variation in the independent variable b). An empirically determined slope of 1 at high explained variance R2 has therefore been taken to indicate a life history invariant. This is common experimental logic, but treacherous, as it disregards the potential existence of other ways to arrive at the predicted slope of 1 and very high R2. If a life history invariant is the only way to arrive at a slope of 1 and very high R2, then one can conclude from an empirical slope of 1 and very high R2 that a life history invariant exists. 4) Many such log-log plots of traits that indicate potential life history invariants exist. Allsop and West [3] presented data on relative body size at sex change for animals ranging from a 2-mm shrimp to a 1.5-m fish. The log-log plot of body size at sex change versus maximum body size showed a slope of 1.05, and all the points were near the regression line, with R2 = 0.98. The life invariant "relative body size at sex change" was perfectly present. Buston et al. [4] then threw a spanner in the works. Commenting on Allsop and West's data, Buston et al [4] pointed out that random distributions of both total body size and size at sex change lead to identical properties in a log-log plot as a life history invariant: a slope of 1 and an R2 of > 0.95. More null models followed [5], using different random distributions of traits. But Nee et al. 
[2] describe the general rationale of how slopes of 1 at high R2 arise in log-log plots, independent of the distributions of the traits. The culprit is a variable on the y axis that is a fraction of the x-variable: The plot is of y = cx, with c < 1. In a log-log plot of cx versus x, a slope of 1 follows automatically. A wide range on the x axis -- from rabbit to whale -- guarantees a high R2. The evidence for life history invariants vanishes as the method of finding them evaporates. References (abridged): 1. E. L. Charnov, Life History Invariants: Some Explorations of Symmetry in Evolutionary Ecology (Oxford Univ. Press, Oxford, 1993) 2. S. Nee, N. Colegrave, S. A. West, A. Grafen, Science 309, 1236 (2005) 3. D. J. Allsop, S. A. West, Nature 425, 783 (2003) 4. P. M. Buston, P. L. Munday, R. R. Warner, Nature 428, 10.1038/nature02512 (2004) 5. A. Gardner, D. J. Allsop, E. L. Charnov, S. A. West, Am. Nat. 165, 551 (2005) Science http://www.sciencemag.org -------------------------------- Related Material: DINOSAURS, DRAGONS, AND DWARFS: THE EVOLUTION OF MAXIMAL BODY SIZE The following points are made by G.P. Burness et al (Proc. Nat. Acad. Sci. 2001 98:14518): 1) The size and taxonomic affiliation of the largest locally present species ("top species") of terrestrial vertebrate vary greatly among faunas, raising many unsolved questions. Why are the top species on continents bigger than those on even the largest islands, bigger in turn than those on small islands? Why are the top mammals marsupials on Australia but placentals on the other continents? Why is the world's largest extant lizard (the Komodo dragon) native to a modest-sized Indonesian island, of all unlikely places? Why is the top herbivore larger than the top carnivore at most sites? Why were the largest dinosaurs bigger than any modern terrestrial species? 2) A useful starting point is the observation of Marquet and Taper (1998), based on three data sets (Great Basin mountaintops, Sea of Cortez islands, and the continents), that the size of a landmass's top mammal increases with the landmass's area. To explain this pattern, they noted that populations numbering less than some minimum number of individuals are at high risk of extinction, but larger individuals require more food and hence larger home ranges, thus only large landmasses can support at least the necessary minimum number of individuals of larger-bodied species. If this reasoning were correct, one might expect body size of the top species also to depend on other correlates of food requirements and population densities, such as trophic level and metabolic rate. Hence the authors assembled a data set consisting of the top terrestrial herbivores and carnivores on 25 oceanic islands and the 5 continents to test 3 quantitative predictions: a) Within a trophic level, body mass of the top species will increase with land area, with a slope predictable from the slope of the relation between body mass and home range area. b) For a given land area, the top herbivore will be larger than the top carnivore by a factor predictable from the greater amounts of food available to herbivores than to carnivores. c) Within a trophic level and for a given area of landmass, top species that are ectotherms will be larger than ones that are endotherms, by a factor predictable from ectotherms' lower food requirements. 
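[An aside on point 2 and prediction (a) above: the Marquet and Taper argument can be made explicit with a rough back-of-the-envelope derivation. The notation below is mine, not the authors': N_min stands for a minimum viable population, and k for the exponent relating an individual's home-range area H to its body mass M.]

\[
H(M) \propto M^{k}, \qquad
A \;\gtrsim\; N_{\min}\, H(M_{\mathrm{top}})
\;\Longrightarrow\;
M_{\mathrm{top}} \;\propto\; \left(\frac{A}{N_{\min}}\right)^{1/k}
\]

[So on a log-log plot of top-species body mass against landmass area A, the expected slope is 1/k, fixed by the slope k of the mass-versus-home-range relation -- which is what prediction (a) asserts.]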
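[A second aside, on the life-history item earlier in this posting: the point credited to Nee et al. -- that a log-log slope of 1 with very high R^2 arises automatically whenever the y variable is merely some fraction c of the x variable -- is easy to reproduce numerically. The sketch below is my own illustration, not code from any of the papers cited; the sample size, the range of body sizes, and the distribution of c are arbitrary choices.]

import numpy as np

rng = np.random.default_rng(0)
n = 500

# "Maximum body size" spread over roughly five orders of magnitude --
# the shrimp-to-whale range that guarantees a wide x axis.
x = 10 ** rng.uniform(0, 5, size=n)

# "Size at sex change" is just a random fraction c of maximum size;
# no invariant ratio is built in.
c = rng.uniform(0.05, 0.95, size=n)
y = c * x

# Ordinary least-squares fit on the log-log scale.
ln_x, ln_y = np.log(x), np.log(y)
slope, intercept = np.polyfit(ln_x, ln_y, 1)
r2 = np.corrcoef(ln_x, ln_y)[0, 1] ** 2

print(f"slope = {slope:.2f}, R^2 = {r2:.3f}")
# Typically prints a slope close to 1 and an R^2 above 0.9 -- the same
# signature that had been read as evidence for a life-history invariant.

[Changing the distribution of c, as in the Buston et al. and later null models, alters the scatter somewhat but not the slope-of-1 signature.]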
3) The authors point out that on reflection, one can think of other factors likely to perturb these predictions, such as environmental productivity, over-water dispersal, evolutionary times required for body size changes, and changing landmass area with geological time. Indeed, the database of the authors does suggest effects of these other factors. The authors point out they propose their three predictions not because they expect them always to be correct, but because they expect them to describe broad patterns that must be understood in order to be able to detect and interpret deviations from those patterns. Proc. Nat. Acad. Sci. http://www.pnas.org -------------------------------- Related Material: ON THE EVOLUTION OF SIZE IN LIVING SYSTEMS Notes by ScienceWeek: A long view of the evolutionary history of life on Earth suggests that living systems tend to evolve into larger and more complex forms. However, some of the most successful living systems are relatively small and have remained small. Is there a pattern in the evolution of size? And if there is a pattern, what are the forces responsible for the pattern and how do we explain the exceptions? The following points are made by Sean B. Carroll (Nature 2001 409:1102): 1) For the first 2.5 billion years of life on Earth, most species rarely exceeded 1 millimeter in size and were generally much smaller. The earliest reported bacterial microfossils from approximately 3.5 billion years ago averaged approximately 5 microns in diameter. Early *eukaryotic microfossils (*acritarchs), while considerably larger, still ranged generally from approximately 40 to 200 microns in size (with a few larger exceptions) for much of their first 600 to 800 million year history. Organismal size increased appreciably with the evolution of multicellular forms. In bacterial and algal forms with cell walls, one of the simplest ways to become multicellular was for the products of cell division to remain together to form long filaments. Many early multicellular eukaryotes were millimeter-scale, linear or branched, filamentous forms. 2) The size and shape of life did not expand appreciably until the late *Proterozoic. Radially symmetric impressions and trace fossils indicate the presence of millimeter scale multicellular organisms (metazoans) around 550 million years ago. The puzzling *Ediacaran fauna comprised of tubular, frond-like, radially symmetric forms generally reached several centimeters in size (although some forms approached 1 meter in size), as did macroscopic algae. Organismal sizes expanded considerably in the *Cambrian, including *bilaterians up to 50 centimeters in size, as well as sponges and algae up to 5 to 10 centimeters. Maximal body lengths of animals increased subsequently by another 2 orders of magnitude, as did algal size (e.g., *kelp). 3) The largest existing organisms, giant fungi and trees, evolved from independent small ancestors. Land plants are believed to have evolved from *charophyte green algae, and both green algae and plants apparently evolved from a unicellular *flagellate ancestor. Fossil spores indicating the earliest evidence of plant life date from the *mid-Ordovician. The oldest plant-body fossil (Cooksonia) suggests that early land plants were small, and on the basis of molecular phylogenetic analyses are believed to be comparable in organization and life cycle to *liverworts. Many of the principal groups of land plants have evolved large (> 10 meters) species at some point in their history. 
Thus, increases in both mean and maximal organismal size apparently occurred in the evolution of bacteria, eukaryotes, and within the algal, fungal, and animal lineages. 4) There is a long history of support for the general notion of overall evolutionary trends toward increases in size, complexity, and diversity. However, there are two fundamentally distinct mechanisms that have been proposed to explain these trends. One proposed mechanism is a random and passive tendency to evolve away from the initial minima of organismal size, complexity, and diversity through an overall increase in variance ("there is nowhere to go but up"). The second proposed mechanism is a non-random, active or "driven" process that biases evolution towards increased size or complexity. What must be noted is that there are relationships between size and complexity and between complexity and diversity that are intuitively apparent. Increases in organismal size through increases in cell number create the potential for increases in diversity of cell type, and as a result, increases in anatomical complexity. Increases in morphological complexity then may lead to expansions into previously unoccupied "ecospace" and an accompanying expansion of species diversity. Nature http://www.nature.com/nature -------------------------------- Notes by ScienceWeek: eukaryotic: Cells (or organisms composed of such cells) containing internal membrane-bound organelles such as a nucleus. acritarchs: Unicellular microfossils of unknown or uncertain biological origin that occur abundantly in strata from the Precambrian and Paleozoic (see next note). Proterozoic: The complete geological time-scale is as follows:

Time-Frame       Starting Date (Millions of Years Ago)
----------       -------------------------------------
Hadean           4600
Archaean         4000
Proterozoic      2500
Cambrian         570
Ordovician       510
Silurian         439
Devonian         408.5
Carboniferous    362.5
Permian          290
Triassic         245
Jurassic         208
Cretaceous       145.6
Paleocene        65
Eocene           56.5
Oligocene        35.4
Miocene          23.3
Pliocene         5.2
Pleistocene      1.64
Holocene         0.01

Ediacaran: The term "Ediacaran" refers to an assemblage (until recently the oldest) of soft-bodied marine animals, the assemblage first discovered in the Ediacara Hills in Australia. Cambrian: See time-scale above. The most outstanding aspect of the Cambrian was the rather sudden appearance of numerous invertebrate fossils, so numerous that some researchers have termed the Cambrian an explosion of evolutionary processes. Many of the life forms that existed during the Cambrian are long extinct, but their fossils are numerous, and through their fossils the various Cambrian species have been the subject of much study by paleobiologists. The Cambrian explosion of life forms has been a long-standing puzzle for paleobiologists, and at present there is apparently no single generally accepted explanation. bilaterians: The "Bilateria" are a major division of the animal kingdom comprising all forms with bilateral symmetry, and the term "bilaterians" refers to the first such forms appearing after the emergence of protozoa. kelp: A group of large brown "seaweeds", actually algae, growing in large structures that may be as long as 60 meters. charophyte green algae: In general, "green algae" are algae in which chlorophyll is not masked by another pigment. Charophyte green algae (also known as "stoneworts") are a type of green algae usually found in fresh or brackish water. flagellate: Possessing one or more flagella.
A flagellum is a long threadlike extension providing locomotion for a cell. mid-Ordovician: See time-scale above. liverworts: (Hepaticopsida) A group of lower plants in which the dominant generation is the sexual phase of the plant (gametophyte phase). From checker at panix.com Mon Sep 19 19:43:26 2005 From: checker at panix.com (Premise Checker) Date: Mon, 19 Sep 2005 15:43:26 -0400 (EDT) Subject: [Paleopsych] SW: On Physics and the Real World Message-ID: Theoretical Physics: On Physics and the Real World http://scienceweek.com/2005/sw050916-6.htm The following points are made by George F.R. Ellis (Physics Today 2005 July): 1) Physics is the model of what a successful science should be. It provides the basis for the other physical sciences and biology because everything in our world, including ourselves, is made of the same fundamental particles, whose interactions are governed by the same fundamental forces. It's no surprise then, as Princeton University's Philip Anderson has noted, that physics represents the ultimate reductionist subject: Physicists reduce matter first to molecules, then to atoms, then to nuclei and electrons, and so on, the goal being always to reduce complexity to simplicity. The extraordinary success of that approach is based on the concept of an isolated system. Experiments carried out on systems isolated from external interference are designed to identify the essential causal elements underlying physical reality. 2) The problem is that no real physical or biological system is truly isolated, physically or historically. Consequently, reductionism tends to ignore the kinds of interactions that can trigger the emergence of order, patterns, or properties that do not preexist in the underlying physical substratum. Biological complexity and consciousness -- as products of evolutionary adaptation -- are just two examples. Physics might provide the necessary conditions for such phenomena to exist, but not the sufficient conditions for specifying the behaviors that emerge at those higher levels of complexity. Indeed, the laws of behavior in complex systems emerge from, but are to a large degree independent of, the underlying low-level physics. That independence explains why biologists don't need to study quantum field theory or the standard model of particle physics to do their jobs. 3) Moreover, causes at those higher levels in the hierarchy of complexity have real effects at lower levels, not just the reverse as often thought. Consequently, physics cannot predict much of what we see in the world around us. If it could predict all, then free will would be illusory, the inevitable outcome of the underlying physics. 4) True complexity, with the emergence of higher levels of order and meaning, including life, occurs in modular, hierarchical structures.[1,2] Consider the precise ordering in large intricate networks -- microconnections in an integrated chip or human brain, for example. Such systems are complex not merely because they are complicated; order here implies organization, in contrast to randomness or disorder. They are hierarchical in that layers of order and complexity build upon each other, with physics underlying chemistry, chemistry underlying biochemistry, and so forth. Each level can be described in terms of concepts relevant to its own particular structure -- particle physics deals with behaviors of quarks and gluons, chemistry with atoms and molecules -- so a different descriptive language applies at each level. 
Thus we can talk of different levels of meaning embodied in the same complex structure. 5) The phenomenon of emergent order refers to this kind of organization, with the higher levels displaying new properties not evident at the lower levels. Unique properties of organized matter arise from how the parts are arranged and interact, properties that cannot be fully explained by breaking that order down into its component parts.[3,4] You can't even describe the higher levels in terms of lower-level language.[5] References (abridged): 1. G. F. R. Ellis, in The Re-Emergence of Emergence, P. Clayton, P. C. W. Davies, eds., Oxford U. Press, New York (in press); also available at http://www.mth.uct.ac.za/~ellis/emerge.doc 2. G. Booch, Object Oriented Analysis and Design with Applications, 2nd ed., Benjamin Cummings, Redwood City, CA (1994) 3. N. A. Campbell, Biology, Benjamin Cummings, Menlo Park, CA (1996) 4. R. B. Laughlin, A Different Universe: Reinventing Physics from the Bottom Down, Basic Books, New York (2005) 5. S. Hartmann, Stud. Hist. Philos. Mod. Phys. 32, 267 (2001) Physics Today http://www.physicstoday.org -------------------------------- Related Material: PARTICLE PHYSICS: AN EXCHANGE CONCERNING RELEVANCE Notes by ScienceWeek: In general, "reductionism" is the idea that macroscopic phenomena can be explained in terms of microscopic entities and/or events, but the specific meaning of the term depends upon context and the conceptual identification within a particular science of levels of understanding. In biology in general, for example, "reductionism" is the term applied to attempts to explain biological phenomena in the language of physics and chemistry. In neurobiology, the term "reductionism" may be applied to attempts to explain human cognitive behavior in terms of the behavior of nerve cells and their connections. In evolutionary biology, the term "reductionism" may be applied to attempts to explain the dynamics of evolution in terms of molecular genetics. In physics and chemistry, the term "reductionism" may be applied to attempts to explain the macroscopic behavior of physical or chemical systems in terms of events at the level of atomic phenomena. Also in physics, the term "reductionism" may be applied to attempts to explain both the macroscopic behavior of a physical system and/or the microscopic atomic behavior of the entities of the system in terms of events at the still more microscopic level of fundamental particles and fundamental forces. The various sciences are split by scientists (not by nature) into various levels of explanation, with researchers working at the various levels using various techniques and concepts. Ordinarily, in the practice of science, the working scientist does not spend much time cogitating about whether a general reductionist approach is useful or not useful, philosophically valid or not valid, or whatever. The attitude essentially is that here is a house, I choose to study in detail the nature of the bricks, you choose to study in detail the nature of the construction of the house, I enjoy what I'm doing, you enjoy what you're doing, and each of us is making some contribution to a general understanding of the nature of the entity "house". 
This division of labor has been quite fruitful in science, and there is never much of a problem concerning the existence of various levels of investigation until the person who studies bricks says that what he or she is doing is more important than what the person who studies the construction of the house does, or when the person studying the construction of the house says it is the study of the construction of the house that is more important than the study of bricks. From the standpoint of "nature", from the perspective of the giant star *Betelguese, for example, a relatively nearby stupendous and violent supergiant star apparently 400 to 500 times the diameter of our Sun, any serious bickering on the planet Earth about the relative merits of various levels of understanding in science begins to smack of farce. But science is a human enterprise, and occasionally the bickering about reductionism and levels of understanding does get serious and does occupy attention. In 1996, in a most prestigious physics journal (_Reviews of Modern Physics_), the physicist Robert Cahn stated that particle physics is essential to the understanding of our everyday world, that "particle physicists construct accelerators kilometers in circumference and detectors the size of basketball pavilions not ultimately to find the *t-quark or the *Higgs boson, but because that is the only way to learn why our everyday world is the way it is... Given the masses of the quarks and *leptons, and nine other closely related quantities, [the current theory of particle interaction] can account in principle for all the phenomena in our daily lives." In July 1998, in the journal _Physics Today_, Pablo Jensen, a condensed matter physicist, took issue with Cahn's views and suggested that Cahn's "reductionist vision seems to be shared by many other particle physicists." Stating that he wished to "reopen a debate in the physics community," Jensen made the following points: 1) The reductionist ideas of Cahn and other reductionist particle physicists are wrong: even if we knew all the "fundamental" laws, we could not say anything useful about our everyday world. Our everyday world is irremediably macroscopic, and macroscopic concepts are needed to understand it. 2) Contrary to the pretensions of particle physicists, science is organized in decoupled layers, each with its own elementary entities or concepts, which generally are not simply derived from those of the lower level but constructed in creative efforts... Particle physics is practically irrelevant to understanding our everyday world... "If we learned tomorrow that previous results and analysis had overlooked certain systematic errors, and that the t-quark mass is near 195 *GeV and not 175 GeV, it is particle physics that would have to adjust to remain in agreement with the rest of physics, and not vice versa." 3) Considering, for example, the property of *chirality of large molecules (e.g., a sugar or any biological molecule), for all practical purposes, such molecules do not show the symmetry expected from the fundamental laws -- in this case, quantum mechanics. 4) In the study of phase transitions, there are characteristics of such transitions that apparently depend on the collective behavior of the system and are not determined by the microscopic interactions. 5) Each level of complexity must be studied with its own instruments, and requires the invention of new concepts adapted to describe and understand its behavior... 
Intermediate concepts such as *entropy, *dissipative structures, cells, genes, etc., cannot be simply "deduced" from the fundamental laws: such concepts are said to be "emergent" because they arise at high levels of complexity and must be invented at those levels to deal with specific situations... These emergent concepts are as real and as fundamental as the concepts and particles introduced by particle physicists. The author concludes: "By all means let us each study our chosen "layer" of reality, whether it involves quarks or convective cells. But let us also remember that each layer is just one part of the greater whole. Accounting for all the phenomena in our daily lives *in principle* is entirely different from accounting for them in actuality." In the November 1998 issue of _Physics Today_, Robert Cahn presents a rebuttal to the critique of Pablo Jensen, the author making the following points: 1) The empirical parameters of the *Standard Model of particle physics shape the most familiar aspects of our physical surroundings... Given *these parameters, the Standard Model, which subsumes the Maxwell and Schroedinger equations, determines all the fundamental processes of *electroweak and strong interactions. Changes in the basic parameters would produce worlds quite different from our own. 2) The stuff of daily life is made just of electrons and the lightest quarks. However, we cannot understand these particles by themselves, because they are intimately connected to others accessible only in high energy collisions. 3) Concerning the supposed irrelevance of particle physics, constructs that embody the essential physical features of complex systems are indispensable, but their success is not a reason for abandoning the search for basic physical laws. 4) Nature is not neatly partitioned into autonomous layers, as Jensen suggests. On the contrary, the macroscopic makes manifest the microscopic... The gross properties of the materials around us, their color, conductivity, and strength, reflect the details of their quantum mechanical states. Likewise the structure of atoms reflects divisions in the subatomic world... "Only by willfully closing our eyes can we miss the connection between the fundamental interactions and their manifestations that surround us." The author concludes: "We particle physicists share with all physicists the goal of explaining the world. We differ by asking ever more basic questions. Like young children who relentlessly insist, Why?, particle physicists ask, Why is there light? Why are electrons light and protons heavy? Why are there electrons or protons, anyway? 'Just because' and 'Who cares?' will not satisfy the curious child, nor should they satisfy us." The same issue of the journal includes a number of letters on the subject from other physicists, and in one of these letters Paul Roman suggests that perhaps the motivation for the debate is that the physics research "grant pie is shrinking while the number of pie-hungry individuals is still increasing." Perhaps that is so, and perhaps that is also the motivation behind debates concerning the reductionist approach in other sciences. But perhaps such motivations are also part of science as a human enterprise. Meanwhile, the enormous furnace of Betelguese continues to roar. References (abridged): R.N. Cahn (Lawrence Berkeley Natl. Lab., US) (Rev. Mod. Phys. 1996 68:951) QY: Robert N. Cahn, Lawrence Berkeley National Laboratory, Berkeley, CA US P. 
Jensen (Claude Bernard University, FR) Particle physics and our everyday world. (Physics Today July 1998) QY: Pablo Jensen, Claude Bernard University, Villeurbanne FR) R.N. Cahn (Lawrence Berkeley Natl. Lab., US) "Particle physics and our everyday world": A reply (Physics Today November 1998) QY: Robert N. Cahn, Lawrence Berkeley National Laboratory, Berkeley, CA US -------------------------------- Notes by ScienceWeek: Betelguese: Also known as Alpha Orionis. It is the 10th brightest star in the sky, with a luminosity 5000 times that of the Sun, with an estimated distance of 400 light years. Some astronomers believe its distance is 1400 light years, which would make its luminosity 50,000 times that of the Sun. The star is a variable, its size swelling and contracting with a period of several years. t-quark: (top-quark) A quark is a hypothetical fundamental particle, having charges whose magnitudes are one-third or two-thirds of the electron charge, and from which the elementary particles may in theory be constructed. A t-quark is one of the types of quarks and has an electrical charge of +2/3. Higgs boson: Higgs fields (named after Peter W. Higgs, University of Edinburgh, UK) constitute a set of fundamental theoretical fields that induce spontaneous symmetry breaking. In general, spontaneous symmetry breaking occurs in systems whose underlying symmetry state is unstable. A Higgs particle is associated with a Higgs field in the same way that a photon is associated with the electromagnetic field. Higgs bosons are massive mesons whose existence is predicted by certain theories. Mesons are apparently composed of quark and anti-quark pairs; they are produced by various high-energy interactions and decay into stable particles. leptons: Leptons are a class of point-like fundamental particles showing no internal structure and no involvement with the strong forces. There are 6 leptons: the electron, the muon, the massive tau lepton, and a specific neutrino associated with each of the former (3 neutrino "flavors"). GeV: (Gev) Also written as Bev, a billion electronvolts. An electronvolt is defined as the energy acquired by an electron falling freely through a potential difference of one volt, and is equal to 1.6022 x 10^(-19) joule. chirality: In chemistry, chirality is a property of certain asymmetric molecules, the property being that the mirror images of the molecules cannot be superimposed one on the other while facing in the same direction. entropy: A measure of disorder in a system. dissipative structures: In general, a dissipative system is a system that loses energy by conversion of energy into heat. Standard Model: In particle physics, the *Standard Model is a theoretical framework whose basic idea is that all the visible matter in the universe can be described in terms of the elementary particles leptons and quarks and the forces acting between them. these parameters: The parameters referred to here are the masses of the quarks, the masses of the charged leptons, the strength of 3 forces, 4 numbers that describe the weak transformations of one quark type into another, the mass of the *W boson, and the mass of the Higgs boson. W boson: Very massive charged particles (+ or -) that convey part of the weak force between leptons and *hadrons. Bose-Einstein statistics is the statistical mechanics of a system of indistinguishable particles for which there is no restriction on the number of particles that may simultaneously exist in the same quantum energy state. 
Bosons are particles that obey Bose-Einstein statistics, and they include photons, *pi mesons, all nuclei having an even number of particles, and all particles with integer *spin. pi mesons: (pions) Pi mesons are subatomic particles with masses approximately 270 times the mass of the electron. spin: In quantum mechanics, "spin" is the intrinsic angular momentum of a subatomic particle. hadrons: Hadrons are particles with internal structure, e.g., neutrons and protons. electroweak and strong interactions: The fundamental forces comprise the gravitational force, the electromagnetic force, the nuclear strong force, and the nuclear weak force. The electroweak interactions comprise the electromagnetic and nuclear weak interactions, the latter involved in radioactive decay processes. From checker at panix.com Mon Sep 19 19:43:43 2005 From: checker at panix.com (Premise Checker) Date: Mon, 19 Sep 2005 15:43:43 -0400 (EDT) Subject: [Paleopsych] Rebecca Saxe: Do the Right Thing Message-ID: Rebecca Saxe: Do the Right Thing http://bostonreview.net/BR30.5/saxe.html First, the summary from the "Magazine and Journal Reader" feature of the daily bulletin from the Chronicle of Higher Education, 5.9.13 http://chronicle.com/daily/2005/09/2005091301j.htm A glance at the September/October issue of the Boston Review: Searching for morality Is killing one person for the sake of saving five others justifiable? Moral dilemmas like that one are a part of life, says Rebecca Saxe, a junior fellow in cognitive neuroscience at Harvard University. But, she asks, what causes people to judge whether something is right or wrong? By examining several studies, she has explored whether a universal concept of morality exists. One school of thought, she writes, suggests that human beings possess a "moral instinct." In Internet surveys, 89 percent of people said they would answer yes to the above dilemma, depending on the instance. It's "an impressive consensus," she writes, particularly because respondents were indistinguishable by race, sex, wealth, religious affiliation, nationality, or educational background. Still, she notes, Web users have a likely bias for Western culture, making it difficult to ascertain whether the findings represent a universal concept of morality, or just a Western one. Developmental psychologists, meanwhile, have studied preverbal infants to see if human beings are born with a sense of morality that is undeterred by cultural influences. In one exercise, researchers showed 15-month-old toddlers movies of contrasting behavior. In one film, a "nice" man pushes a bag off a seat so that a girl can sit down. In another, a "mean" man pushes the girl off the seat to make room for the bag. The study showed that, afterward, babies were more likely to crawl to the "nice" man. The results are "interesting," Ms. Saxe says, because they show that, "by the time they are 1 year old, babies can distinguish between helpful actions and hurtful ones." The most important research is likely to come from studying brain images, she says. She mentions a study in which magnetic-resonance imaging was used to compare blood-oxygen levels in the brain when people considered sentences describing moral violations -- like "they hung an innocent" -- and sentences that describe unpleasant but not immoral actions or that make morally neutral statements -- like "stones are made of water." 
That study found a higher oxygenation level in one region of the brain while subjects read about moral violations than either of the other two kinds of sentences. The author of the study speculated that that region of the brain might play a role in moral reasoning. Such findings are exciting, Ms. Saxe says, but she urges caution in their interpretation. It remains controversial whether such a specialized brain region exists, she notes, and even if it did, its implications might not be universal. The article, "Do the Right Thing: Cognitive Science's Search for a Common Morality," is available at --Jason M. Breslow _________________________________________________________________ Cognitive science's search for a common morality 8 Consider the following dilemma: Mike is supposed to be the best man at a friend's wedding in Maine this afternoon. He is carrying the wedding rings with him in New Hampshire, where he has been staying on business. One bus a day goes directly to the coast. Mike is on his way to the bus station with 15 minutes to spare when he realizes that his wallet has been stolen, and with it his bus tickets, his credit cards, and all his forms of ID. At the bus station Mike tries to persuade the officials, and then a couple of fellow travelers, to lend him the money to buy a new ticket, but no one will do it. He's a stranger, and it's a significant sum. With five minutes to go before the bus's departure, he is sitting on a bench trying desperately to think of a plan. Just then, a well-dressed man gets up for a walk, leaving his jacket, with a bus ticket to Maine in the pocket, lying unattended on the bench. In a flash, Mike realizes that the only way he will make it to the wedding on time is if he takes that ticket. The man is clearly well off and could easily buy himself another one. Should Mike take the ticket? My own judgment comes down narrowly, but firmly, against stealing the ticket. And in studies of moral reasoning, the majority of American adults and children answer as I do: Mike should not take the ticket, even if it means missing the wedding. But this proportion varies dramatically across cultures. In Mysore, a city in the south of India, 85 percent of adults and 98 percent of children say Mike should steal the ticket and go to the wedding. Americans, and I, justify our choice in terms of justice and fairness: it is not right for me to harm this stranger--even in a minor way. We could not live in a world in which everyone stole whatever he or she needed. The Indian subjects focus instead on the importance of personal relationships and contractual obligations, and on the relatively small harm that will be done to the stranger in contrast to the much broader harm that will be done to the wedding. An elder in a Maisin village in Papua New Guinea sees the situation from a third perspective, focused on collective responsibility. He rejects the dilemma: "If nobody [in the community] helped him and so he [stole], I would say we had caused that problem." Examples of cross-cultural moral diversity such as this one may not seem surprising in the 21st century. In a world of religious wars, genocide, and terrorism, no one is naive enough to think that all moral beliefs are universal. But beneath such diversity, can we discern a common core--a distinct, universal, maybe even innate "moral sense" in our human nature? In the early 1990s, when James Q. 
Wilson first published The Moral Sense, his critics and admirers alike agreed that the idea was an unfashionable one in moral psychology. Wilson, a professor of government and not psychology, was motivated by the problem of non-crime: how and why most of us, most of the time, restrain our basic appetites for food, status, and sex within legal limits, and expect others to do the same. The answer, Wilson proposed, lies in our universal "moralsense, one that emerges as naturally as [a] sense of beauty or ritual (with which morality has much in common) and that will affect [our] behavior, though not always, and in some cases not obviously." But the fashion in moral psychology is changing. A decade after Wilson's book was published, the psychological and neural basis of moral reasoning is a rapidly expanding topic of investigation within cognitive science. In the intervening years, new technologies have been invented, and new techniques developed, to probe ever deeper into the structure of human thought. We can now acquire vast numbers of subjects over the Internet, study previously inaccessible populations such as preverbal infants, and, using brain imaging, observe and measure brain activity non-invasively in large numbers of perfectly healthy adults. Inevitably, enthusiasts make sweeping claims about these new technologies and the old mysteries they will leave in their wake. ("The brain does not lie" is a common but odd marketing claim, since in an obvious sense, brains are the only things that ever do.) The appeal of the new methods is clear: if an aspect of reasoning is genuinely universal, part of the human genetic endowment, then such reasoning might be manifest in massive cross-cultural samples, in subjects not yet exposed to any culture, such as very young infants, and perhaps even in the biological structure of our reasoning organ, the brain. How far have these technologies come in teaching us new truths about our moral selves? How far could they go? And what will be the implications of a new biopsychological science of natural morality? "The truth, if it exists, is in the details," wrote Wilson, and therefore I will concentrate on the details of three sets of very recent experiments, each of which approaches the problem using a different method: an Internet survey, a cognitive study of infants, and a study of brain imaging. Each is at the cutting edge of moral psychology, each is promising but flawed, and each should be greeted with a mix of enthusiasm and interpretative caution. * * * Mike, the man we left sitting at the bus station, is in a particularly bad moral predicament: he must choose between two actions (stealing and breaking an obligation), both of which are wrong. Moral psychologists call cases like these "moral dilemmas." Over the last half century, batteries of moral dilemmas have been presented to men and women, adults and children, all over the world. The questions at the heart of these studies are these: How do people arrive at the moral judgment that an action, real or contemplated, is right or wrong? What are the rules governing these moral calculations, and from where do they come? Which, if any, of the fundamental components are universal? All of them, answered the eminent psychologist Lawrence Kohlberg. 
In the 1970s and 1980s, Kohlberg argued that moral reasoning is based on explicit rules and concepts, like conscious logical problem-solving; over the course of an individual's development, the rules and concepts that he or she uses to solve moral problems unfold in a well-defined, universal sequence of stages. These stages are biologically determined but socially supported. In early stages, moral reasoning is strongly influenced by external authority; in later stages, moral reasoning appeals first to internalized convention, and then to general principles of neutrality, egalitarianism, and universal rights. It may be that what makes one culture, one sex, or one individual different from another is just how high and how fast it manages to climb the moral ladder. To test this hypothesis, moral dilemmas were presented to people of varying ages and classes, both sexes, and many cultures (including people in India, Thailand, Iran, Turkey, Kenya, Nigeria, and Guatemala; communities of Alaskan Inuit; Tibetan Buddhist monks; and residents of an Israeli kibbutz). Kohlberg's key methodological insight was to focus not on the answers that people give to moral dilemmas but on how they justify their choice. A seven-year-old and a white-haired philosopher may agree that Mike should not steal the ticket, but they will differ in their explanations of why not. The seven-year-old may say that Mike shouldn't steal because he will get caught and punished, while the philosopher may appeal to an interpretation of Kant's categorical imperative: act only on a principle that you would wish everyone to follow in a similar situation. Kohlberg's claims were deeply controversial, not least because the highest stage of moral development was accorded almost exclusively to Western adults, and among those, mostly to men. Critics attacked everything from the specific dilemmas to the coding criteria to the whole philosophy of monotonic universal moral development. The psychologist Carol Gilligan, for example, argued that women justify their moral choices differently from men, but with equal sophistication. Men, she claimed, tend to reason about morality in terms of justice, and women in terms of care: "While an ethic of justice proceeds from the premise of equality--that everyone should be treated the same--an ethic of care rests on the premise of non-violence--that no one should be hurt." Similar arguments were made for non-Western cultures--that they emphasize social roles and obligations rather than individual rights and justice. On the whole, this emphasis on group differences won the day. Kohlberg's vision was rejected, and the psychological study of moral universals reached an impasse. Very recently, though, the use of moral dilemmas to study moral universals has reemerged. Marc Hauser of Harvard University and John Mikhail of Georgetown University are among the cognitive scientists leading the charge. The current theorists take as their model for moral reasoning not conscious problem-solving, as Kohlberg did, but the human language faculty. That is, rather than "moral reasoning," human beings are understood to be endowed with a "moral instinct" that enables them to categorize and judge actions as right or wrong the way native speakers intuitively recognize sentences as grammatical or ungrammatical. We can draw three predictions from the theory that morality operates as language does. 
First, just as each speaker can produce and understand an infinite number of completely original sentences, every moral reasoner can make fluent, confident, and compelling moral judgments about an infinite number of unique cases, including ones that they have never imagined confronting. Second, cross-culturally, systems of moral reasoning can be as diverse as human languages are, without precluding that a universal system of rules, derived from our biological inheritance, underlies and governs all these surface-level differences. Finally, just as native speakers are often unable to articulate the rules of grammar that they obey when speaking, the practitioners of moral judgment may have great difficulty articulating the principles that inform their judgments. Hauser, Mikhail, and their colleagues have tested these predictions with a set of moral dilemmas originally introduced by the philosopher Philippa Foot in 1967 and now known collectively as the Trolley Problems. To illustrate the category, let's begin with Anna, standing on the embankment above a train track, watching a track-maintenance team do its work. Suddenly, Anna hears the sound of a train barrelling down the tracks: the brakes have failed, and the train is heading straight for the six workers. Beside Anna is a lever; if she pulls it, the train will be forced onto a side track and will glide to a halt without killing anyone. Should she pull the lever? No moral dilemma yet. But now let's complicate the story. In the second scenario, Bob finds himself in the same situation, except that one of the six maintenance people is working on the side track. Now the decision Bob faces is whether to pull the lever to save five lives, knowing that if he does, a man who would otherwise have lived will be killed. In a third version of what is clearly a potentially infinite series, the sixth worker is standing beside Camilla on the embankment. The only way to stop the train, and save the lives of the five people on the track, is for Camilla to push the man beside her down onto the track. By pushing him in front of the train and so killing him, she would slow it down enough to save the others. Finally, for anyone not yet convinced that there are cases in which it is wrong to sacrifice one person in order to save five, consider Dr. Dina, a surgeon who has five patients each dying from the failure of a different organ. Should she kill one healthy hospital visitor and distribute the organs to her patients in order to save five lives? By putting scenarios like these on a Web site (http://moral.wjh.harvard.edu) and soliciting widely for participants, Hauser and his lab have collected judgments about Trolley Problems from thousands of people in more than a hundred countries, representing a broad range of ages and religious and educational backgrounds. The results reveal an impressive consensus. For example, 89 percent of subjects agree that it is permissible for Bob to pull the lever to save five lives at the cost of one but that it is not permissible for Camilla to make the same tradeoff by pushing the man onto the track. More importantly, even in this enormous sample and even for complicated borderline cases, participants' responses could not be predicted by their age, sex, religion, or educational background. Women's choices in the scenarios overall were indistinguishable from men's, Jews' from Muslims' or Catholics', teenagers' from their parents' or grandparents'.
Consistent with the analogy to language, these thousands of people make reliable and confident moral judgments for a whole series of (presumably) novel scenarios. Also interestingly, Hauser, Mikhail, and their colleagues found that while the "moral instinct" was apparently universal, people's subsequent justifications were not; instead, they were highly variable and often confused. Less than one in three participants could come up with a justification for the moral difference between Camilla's choice and Bob's, even though almost everyone shares the intuition that the two cases are different. So what can we learn from this study? Has the Internet--this new technology--given us a way to reveal the human universals in moral judgments? We must be cautious: Web-based experiments have some obvious weaknesses. While the participants may come from many countries and many backgrounds, they all have Internet access and computer skills, and therefore probably have significant exposure to Western culture. (In fact, although the first study included just over 6,000 people from more than a hundred countries, more than two thirds of them were from the United States.) Because the survey is voluntary, it includes a disproportionate number of people with a preexisting interest in moral reasoning. (More than two thirds had previously studied moral cognition or moral philosophy in some academic context, making it all the more surprising that they could not give clear verbal justifications of their intuitions.) And because subjects fill out the survey without supervision or compensation, sincerity and good faith cannot be ensured (although Hauser, Mikhail, and their colleagues did exclude the subjects who claimed to live in Antarctica or to have received a Ph.D. at 15). Also, this is only one study, focused on only one kind of moral dilemma: the Trolley Problems. So far, we don't know whether the universality of intuitions observed in this study would generalize to other kinds of dilemmas. The results of the experiment with Mike and the bus ticket suggest it probably would not. On the other hand, the survey participants did include a fairly even balance of sexes and ages. And the fact that sex in particular makes no difference to people's choices in the Trolley Problems, even in a sample of thousands (and growing), could be important. Remember, Carol Gilligan charged that Lawrence Kohlberg's theory of multi-stage moral development was biased toward men; she claimed that men and women reason about moral dilemmas with equal sophistication, but according to different principles. Hauser and Mikhail's Internet study lets us look at the controversy from a new angle. Gilligan's analysis was based on justifications: how men and women consciously reflect upon, explain, and justify the moral choices that they make. It is easy to imagine that the way we justify our choices depends a lot on the surrounding culture, on external influences and expectations. What Hauser and Mikhail's results suggest is that though the reflective, verbal aspects of moral reasoning (which Hauser and Mikhail found inarticulate and confused, in any case) may differ by sex, the moral intuition that tells us which choice is right and which wrong for Anna or Bob or Camilla is part of human nature, for women just as for men. Still, the Internet's critical weakness is intransigent. 
As long as people must have Internet access in order to participate, the sample will remain culturally biased, and it will be hard to know for sure from where the moral consensus comes: from human nature or from exposure to Western values. The only way to solve this problem is to investigate moral reasoning in people with little or no exposure to Western values. And cognitive scientists are beginning to do just that. * * * One group of experimental participants that is relatively free of cultural taint is preverbal infants. Before they are a year old, while their vocabulary consists of only a few simple concrete nouns, infants have presumably not yet been acculturated into the specific moral theories of their adult caretakers. Infant studies therefore offer scientists the chance to measure innate moral principles in mint condition. With this opportunity, of course, comes a methodological challenge. How can we measure complex, abstract moral judgments made by infants who are just beginning to talk, point, and crawl? To meet this challenge, developmental psychologists who study all areas of cognition have become adept--often ingenious--at teasing meaning out of one of the few behaviors that infants can do well: looking. Infants look longer at the things that interest them: objects or events that are attractive, unexpected, or new. Looking-time experiments therefore gauge which of two choices--two objects, people, or movies--infants prefer to watch. From just this simple tool, a surprisingly rich picture of infant cognition has emerged. We have learned, for example, that infants only a few days old prefer to look at a human face rather than at other objects; that by the time they are four months old, infants know that one object cannot pass through the space occupied by another object; and that by seven months, they know that a billiard ball will move if and only if it is hit by something else. Only recently, though, has this tool begun to be applied to the field of moral cognition. The questions these new studies seek to answer include the following: Where do we human beings get the notions of "right," "wrong," "permissible," "obligatory," and "forbidden"? What does it mean when we judge actions--our own or others'--in these terms? How and why do we judge some actions wrong (or forbidden) and not just silly, unfortunate, or unconventional? Not all transgressions are created equal; some undesirable or inappropriate actions merely violate conventions, while others are genuinely morally wrong. Rainy weather can be undesirable, some amateur acting is very bad, and raising your hand before speaking at a romantic candlelit dinner is usually inappropriate, but none of these is morally wrong or forbidden. Even a tsunami or childhood cancer, though awful, is not immoral unless we consider it the action of an intentional agent. The psychologist Elliott Turiel has proposed that the moral rules a person espouses have a special psychological status that distinguishes them from other rules--like local conventions--that guide behavior. One of the clearest indicators of this so-called moral-conventional distinction is the role of local authority. We understand that the rules of etiquette--whether it is permissible to leave food on your plate, to belch at the table, or to speak without first raising your hand--are subject to context, convention, and authority.
If a friend told you before your first dinner at her parents' house that in her family, belching at the table after dinner is a gesture of appreciation and gratitude, you would not think your friend's father was immoral or wrong or even rude when he leaned back after dinner and belched--whether or not you could bring yourself to join in. Moral judgments, in contrast, are conceived (by hypothesis) as not subject to the control of local authority. If your friend told you that in her family a man beating his wife after dinner is a gesture of appreciation and gratitude, your assessment of that act would presumably not be swayed. Even three-year-old children already distinguish between moral and conventional transgressions. They allow that if the teacher said so, it might be okay to talk during nap, or to stand up during snack time, or to wear pajamas to school. But they also assert that a teacher couldn't make it okay to pull another child's hair or to steal her backpack. Similarly, children growing up in deeply religious Mennonite communities distinguish between rules that apply because they are written in the Bible (e.g., that Sunday is the day of Sabbath, or that a man must uncover his head to pray) and rules that would still apply even if they weren't actually written in the Bible (including rules against personal and material harm). There is one exception, though. James Blair, of the National Institutes of Health, has found that children classified as psychopaths (partly because they exhibit persistent aggressive behavior toward others) do not make the normal moral-conventional distinction. These children know which behaviors are not allowed at school, and they can even rate the relative seriousness of different offences; but they fail when asked which offences would still be wrong to commit even if the teacher suspended the rules. For children with psychopathic tendencies (and for psychopathic adults, too, though not for those Blair calls "normal murderers"), rules are all a matter of local authority. In its absence, anything is permissible. Turiel's thesis, then, is that healthy individuals in all cultures respect the distinction between conventional violations, which depend on local authorities, and moral violations, which do not. This thesis remains intensely controversial. The chief voice of opposition may come not from psychologists but from anthropologists, who argue that the special status of moral rules cannot be part of human nature, but is rather just a historically and culturally specific conception, an artifact of Western values. "When I first began to do fieldwork among the Shona-speaking Manyika of Zimbabwe," writes Anita Jacobson-Widding, for example, "I tried to find a word that would correspond to the English concept 'morality.' I explained what I meant by asking my informants to describe the norms for good behavior toward other people. The answer was unanimous. The word for this was tsika. But when I asked my bilingual informants to translate tsika into English, they said that it was 'good manners.' And whenever I asked somebody to define tsika they would say 'Tsika is the proper way to greet people.'" Jacobson-Widding argues that the Manyika do not separate moral behavior from good manners. Lying, farting, and stealing are all equally violations of tsika.
And if manners and morals cannot be differentiated, the whole study of moral universals is in trouble, because how--as Jacobson-Widding herself asks--can we study the similarities and differences in moral reasoning across cultures "when the concept of morality does not exist?" From the perspective of cognitive science, this dispute over the origins of the moral-conventional distinction is an empirical question, and one that might be resolvable with the new techniques of infant developmental psychology. One possibility is that children first distinguish "wrong" actions in their third year of life, as they begin to recognize the thoughts, feelings, and desires of other people. If this is true, the special status of moral reasoning would be tied to another special domain in human cognition: theory of mind, or our ability to make rich and specific inferences about the contents of other people's thoughts. Although this link is plausible, there is some evidence that distinguishing moral right from wrong is a more primitive part of cognition than theory of mind, and can exist independently. Unlike psychopathic children, who have impaired moral reasoning in the presence of intact theory of mind, autistic children who struggle to infer other people's thoughts are nevertheless able to make the normal moral-conventional distinction. Another hypothesis is that children acquire the notion of "wrong" actions in their second year, once they are old enough to hurt others and experience firsthand the distress of the victim. Blair, for example, has proposed that human beings and social species like canines have developed a hard-wired "violence-inhibition mechanism" to restrain aggression against members of the same species. This mechanism is activated by a victim's signals of distress and submission (like a dog rolling over onto its back) and produces a withdrawal response. When this mechanism is activated in an attacker, withdrawal means that the violence stops. The class of "wrong" actions, those that cause the victim's distress, might be learned first for one's own actions and then extended derivatively to others' actions. Both of these hypotheses suggest a very early onset for the moral-conventional distinction. But possibly the strongest evidence against the anthropologists' claim that this distinction is just a cultural construct would come from studies of even younger children: preverbal infants. To this end, developmental psychologists are currently using the new looking-time procedures to investigate this provocative third hypothesis: that before they can either walk or talk, young infants may already distinguish between hurting (morally wrong) and helping (morally right). In one study, conducted by Valerie Kuhlmeier and her colleagues at Yale, infants watched a little animated ball apparently struggling to climb a steep hill. A triangle and a square stood nearby. When the ball got just beyond halfway up, one of two things happened: either the triangle came over and gave the ball a helpful nudge up the hill, or the square came over and pushed the ball back down the hill. Then the cycle repeated. Later, the same infants saw a new scene: across flat ground the little ball went to sit beside either the triangle or the square. Twelve-month-old infants tended to look longer when the ball went to sit beside the "mean" shape. Perhaps they found the ball's choice surprising. Would you choose to hang out with someone who had pushed you down a hill? 
Another study, by Emmanuel Dupoux and his colleagues in France, used movies of live human actors. In one, the "nice" man pushes a backpack off a stool and helps a crying girl get up onto the stool, comforting her. In the second movie, the "mean" man pushes the girl off the stool, and picks up and consoles the backpack. The experiment is designed so that the amounts of crying, pushing, and comforting in the two movies are roughly equal. After the movies, the infants are given a choice to look at, or crawl to, either the "mean" man or the "nice" one. At 15 months, infants look more at the mean man but crawl more to the nice one. These results are interesting, but each of these studies provides evidence for a fairly weak claim: by the time they are one year old, babies can distinguish between helpful actions and hurtful ones. That is, infants seem to be sensitive to a difference between actions that are nice, right, fortunate, or appropriate and ones that are mean, wrong, undesirable, or inappropriate--even for novel actions executed by unknown agents. On any interpretation, this is an impressive discovery. But the difference that infants detect need not be a moral difference. These first infant studies of morality cannot answer the critical question, which is not about the origin of the distinction between nice and mean, but between right and wrong; that is, the idea that some conduct is unacceptable, whatever the local authorities say. Eventually, infant studies may provide evidence that the concepts of morality and convention can be distinguished, even among the Manyika--that is, that a special concept of morality is part of the way infants interpret the world, even when they are too young to be influenced by culture-specific constructions. So far, though, these infant studies are a long way off. In the meantime, we will have to turn to other methods, traditional and modern, to adjudicate the debate between psychologists and anthropologists over the existence of moral universals. First, if Hauser and Mikhail's Internet-survey results really do generalize to a wider population, as the scientists hope, then we might predict that Manyika men and women would give the same answers that everyone else does to the Trolley Problems. If so, would that challenge our notions of how different from us they really are? Second, if Elliott Turiel and his colleagues are right, then even Manyika children should distinguish between manners, which depend on local custom, and morals, which do not, when asked the right kinds of questions. For example, according to Manyika custom, "If you are a man greeting a woman, you should sit on a bench, keep your back straight and your neck stiff, while clapping your own flat hands in a steady rhythm." What if we told a four-year-old Manyika child about another place, very far away, where both men and women are supposed to sit on the ground when greeting each other? Or another place where one man is supposed to steal another man's yams? Would the children accept the first "other world" but not the second? I have never met a Manyika four-year-old, so I cannot guess, but if so, then we would have evidence that the Manyika do have a moral-conventional distinction after all, at the level of moral judgment, if not at the level of moral justification. Finally, some modern cognitive scientists might reply, we scientists hold a trump card: we can now study moral reasoning in the brain. 
* * * In the last ten years, brain imaging (mostly functional magnetic resonance imaging, or fMRI) has probably exceeded all the other techniques in psychology combined in terms of growth rate, public visibility, and financial expense. The popularity of brain imaging is easy to understand: by studying the responses of live human brains, scientists seem to have a direct window into the operations of the mind. A basic MRI provides an amazingly fine-grained three-dimensional picture of the anatomy of soft tissues such as the gray and white matter (cell bodies and axons) of the brain, which are entirely invisible to x-rays. An fMRI also gives the blood's oxygen content in each brain region, an indication of recent metabolic activity in the cells and therefore an indirect measure of recent cell firing. The images produced by fMRI analyses show the brain regions in which the blood's oxygen content was significantly higher while the subject performed one task--a moral-judgment task, for example--than while the subject performed a different task--a non-moral-judgment task. Jorge Moll and his colleagues, for example, compared the blood-oxygen levels in the brain while subjects read different kinds of sentences: sentences describing moral violations ("They hung an innocent"), sentences describing unpleasant but not immoral actions ("He licked the dirty toilet"), and neutral sentences ("Stones are made of water"). They found that one brain region--the medial orbito-frontal cortex, the region just behind the space between the eyebrows--had a higher oxygenation level while subjects read the moral sentences than either of the other two kinds of sentences. Moll proposed that the medial orbito-frontal cortex must play some unique role in moral reasoning. In fact, this is not a new idea. In 1848 Phineas Gage was the well-liked foreman of a railroad-construction gang until a dynamite accident destroyed his medial orbito-frontal cortex (along with a few neighboring brain regions). Although Gage survived the accident with his speech, motion, and even his intelligence unimpaired, he was, according to family and friends, "no longer Gage": obstinate, irresponsible, and capricious, he was unable to keep his job, and later he spent seven years as an exhibit in a traveling circus. Modern patients with similar brain damage show the same kinds of deficits: they are obscene, irreverent, and uninhibited, and they show disastrous judgment, both personally and professionally. Still, the claim of a moral brain region remains controversial among cognitive scientists, who disagree both about whether such a brain region exists and what the implications would be if it did. Joshua Greene of Princeton University, for example, investigates brain activity while subjects solve Trolley Problems. He finds lots of different brain regions recruited--as one might imagine--including regions associated with reading and understanding stories, logical problem-solving, and emotional responsiveness. What Greene doesn't find is any clear evidence of a "special" region for moral reasoning per se. More broadly, even if there were a specialized brain region that honored the moral-conventional distinction, what would this teach us about that distinction's source, or universality? Many people share the intuition that the existence of a specialized brain region would provide prima facie evidence of the biological reality of the moral-conventional distinction. 
The problem is that even finding a specialized neural region for a particular kind of thought does not tell us how that region got there. We know, for example, that there is a brain region that becomes specially attuned to the letters of the alphabet that a person is able to read, but not to those of other alphabets; this does not make any one alphabet a human universal. Similarly, if Western minds (the only ones who participate in brain-imaging experiments at the moment) distinguish moral from conventional violations, it is not surprising that Western brains do. In sum, both enthusiasm and caution are in order. The discovery of a specialized brain region for moral reasoning will not simply resolve the venerable problem of moral universals, as proponents of imaging sometimes seem to claim. On the other hand, not every function a brain performs is assigned a specialized brain region. In visual cortex, there are specialized regions for seeing faces and human bodies, but there is no specialized region for recognizing chairs or shoes, just a general-purpose region for recognizing objects. Some distinctions are more important than others in the brain, whatever their importance in daily life. Cognitive neuroscience can tell us where on this scale the moral-conventional distinction falls. * * * One thing these cutting-edge studies certainly cannot tell us is the right answer to a moral dilemma. Cognitive science can offer a descriptive theory of moral reasoning, but not a normative one. That is, by studying infants or brains or people around the world, we may be able to offer an account of how people actually make moral decisions--which concepts are necessary, how different principles are weighed, what contextual factors influence the final decision--but we will not be able to say how people should make moral decisions. Cognitive scientists may eventually be able to prove that men and women reason about Trolley Problems with equal sophistication, that African infants distinguish moral rules that are independent of local authority from conventions that are not, and even that the infants are using a specialized brain region to do so. What they cannot tell us is whether personal and social obligations should triumph over the prohibition against stealing, whether Mike should steal the ticket, and whether in the end it would be a better world to live in if he did. Rebecca Saxe is a junior fellow of the Harvard Society of Fellows. Originally published in the September/October 2005 issue of Boston Review From ljohnson at solution-consulting.com Tue Sep 20 02:03:16 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Mon, 19 Sep 2005 20:03:16 -0600 Subject: [Paleopsych] Walter H. Bradley: Why I am not a Mormon In-Reply-To: References: Message-ID: <432F6DE4.3070907@solution-consulting.com> These are pretty old straw-man arguments and not particularly cogent. For a look at current mormon and anti-mormon dialogs, see: www.fairlds.org Plenty of more interesting things there. Lynn Premise Checker wrote: > Walter H. Bradley: Why I am not a Mormon > http://acts413.org/religions/mormon.htm > > > "NOW WHEN THEY SAW THE BOLDNESS OF PETER AND JOHN, AND PERCEIVED THAT > THEY WERE UNLEARNED AND IGNORANT MEN, THEY MARVELLED; AND THEY TOOK > KNOWLEDGE OF THEM, THAT THEY HAD BEEN WITH JESUS." > > CHALLENGING PEOPLE TO THINK FOR THEMSELVES ABOUT LIFE'S MOST IMPORTANT > QUESTIONS.
> > WHY I AM NOT A MORMON > > > The day that Christ came into my heart, I covenanted with Him to > accept the Bible as a guide in all things. I take for granted that my > reader, if a Christian, has done the same. It was from the standard of > the Word that I investigated Mormonism, seeking further light from > God. > > I have no tirade to direct against Mormonism as a national menace, as > do some. I am not jealous of her growing political power. I have no > quarrel with her vast wealth and mercantile pursuits. I care not to > hale forth the skeletons that may linger in the closets of her past. > To the Christian, the question is not, how faulty are the lives of > poor erring men? But, how closely are they trying to follow the Bible? > Be their lives as faulty as they may, if they love the truth of God's > Word, that truth will eventually sanctify their lives, and make them > like the divine pattern. (John 17:17). Judas was among the twelve, but > that does not condemn Christianity. Satan and his host are transformed > into ministers of righteousness (2 Corinthians 11:14, 15), still his > teaching is false. > > The Christian's sole question must be, "What saith the Scripture?" "If > they speak not according to this word, it is because there is no light > in them." Isaiah 8:20. It is the Scripture alone that is "profitable > for doctrine." 2 Timothy 3:15-17. In mentioning a few of the things > that the Bible contradicts and condemns in Mormonism, I trust I may > help some, and offend none but the son of perdition. > > GOD IS SUPERHUMAN > > The Bible is plain in portraying the superhumanity of God. When the > Almighty spoke to Moses from the burning bush, He proclaimed Himself > to be Jehovah, the Self-existent One. This fundamental fact He > reiterated and emphasized in many Scriptures. "Before Me there was no > God formed, neither shall there be after Me. I, even I, am the Lord: > and beside Me there is no Saviour." Isaiah 43:10, 11. "I am the first, > and I am the last; and beside me there is no God. . . Is there a God > beside me? Yea, there is no God; I know not any." Isaiah 44:6, 8. > > Then, as God, the Deity stands alone. Apart from the Godhead, no one > ever was divine. God is the Creator, man the created, absolutely > dependent upon God for every phase of his being. Any attributes of > divinity man might ever have would only be imputed to him by God. Man > did not have the power of creation (Psalm 100:3), and his very > existence was conditional (Genesis 2:17). By sinning, Adam and all his > family became mortal, subject to death (Romans 5:12). The same as you > and I, Adam had to experience a spiritual rebirth, and receive from > Christ all he might ever hope to have or to be (1 Corinthians 15:22); > and Adam's Creator said, "Beside me there is no Saviour." > > In contradiction to this, I find that Mormonism's highest authority > makes God a mere man, the man Adam; and makes Adam the creator, the > father of his own Saviour. I submit as proof of these startling > teachings of the Latter-day Saints, only what may be found in the > writings of the first presidency and the twelve apostles of the Mormon > Church. That this is to the Mormon the highest of authority, witness > the following: > > "It would seem altogether gratuitous and uncalled for on our part, to > write a commendatory preface to the discourses of the First Presidency > and The Twelve Apostles of this church.
To the Saints their words are > the words of God, their teaching fraught with heavenly wisdom, and > their directions leading to the salvation and eternal lives. . . . The > choicest fruit that can be culled from the tree of knowledge, suited > to the taste of all who can appreciate such delicious food."--Preface > to "Journal of Discourses," volumes 2, 4. > > Then surely I can gain a correct view of the doctrines of Mormonism by > comparing the writings of these men with the Bible, which to me is the > Word of God. > > SAYS ADAM IS GOD > > Place before you the Bible and the "Journal of Discourses." Open the > Bible and read prayerfully the scriptures I have given. Now open the > "Journal of Discourses," volume 1, page 50. Compare with the Bible > what follows: > > "Now hear it, O inhabitant of earth, Jew and gentile, saint and > sinner! When our father Adam came into the Garden of Eden, he came > into it with a celestial body and brought Eve, one of his wives with > him. He helped to make and organize this world. He is Michael, the > archangel, the Ancient of Days about whom holy men have written and > spoken. He is our Father and our God, and the only God with whom we > have to do. Every man upon the earth, professing Christian or > non-professing, must hear it, and will know it sooner or later." > (Italics and capitals as in text). > > There can be no mistake here. Whereas God declares His superhuman > divinity, Brigham Young declares the human divinity of the man Adam as > the sole God of the human race. Not only does this teaching exalt a > sinful man into the seat of God, but it hurls the Bible Jehovah from > His heavenly throne. "God Himself was once as we are now and is an > exalted man, and sits enthroned in yonder heavens! This is the great > secret."--"Compendium," page 190, or 175. This comparison of Mormonism > and the Bible is disastrous. No one can deny that the God of Mormonism is > a man, a created man, a man who sinned! And this is one reason why I > am not a Mormon. > > ADAM JESUS' FATHER > > That Adam is the father of his own Saviour must follow logically from > such teaching, and is unblushingly preached by the same prophet, seer, > and revelator. Denying the Bible statement that "that which is > conceived (margin "begotten") in her is of the Holy Ghost. And she > shall bring forth a son, and thou shalt call His name Jesus" (Matthew > 1:20, 21; Luke 1:35). President Young teaches: "When the Virgin Mary > conceived the child Jesus, the Father had begotten him in His own > likeness. He was not begotten by the Holy Ghost. And who is the > Father? He is the first of the human family. Now remember from this > time forth and forever that Jesus Christ was not begotten by the Holy > Ghost." And that he, in his position as prophet, seer, and revelator, > and first president of the Mormon Church, considered this teaching as > of utmost importance, Brigham Young continues, "Now let all who may > hear these doctrines pause long before they make light of them, or > treat them with indifference, for they will prove their salvation or > damnation."--"Journal of Discourses." Volume 1, pages 50, 51.
> > God forbid that I should treat this with indifference, or pass it by > lightly; for not only does it make the man Adam the creator of his own > Saviour, but without question, it teaches that Jesus was not the pure > and Holy Son of God, but was the illegitimate offspring of an > adulterous liaison between the man Adam, the first of the human race, > and Mary, the lawful wedded wife of Joseph. This is another reason why > I am not a Mormon. > > Not content with so degrading our blessed Lord in His birth, this > system we are studying makes Him a lawbreaker in His life. The law > that Jesus gave Moses, and which He kept on earth (Leviticus 18:18; > John 1:1-3; 15:10), forbade a man's having two sisters as wives at the > same time. President Hyde, of the Mormon church, asks himself the > following question: "Then you really mean to hold to the doctrine that > the Saviour of this world was married? Do you mean to be understood > so? And if so, do you mean to be understood that He had more than one > wife?" > > His answer follows on page 82: "We say it was Jesus Christ who was > married (at Cana) to be brought into the relation whereby He could see > His seed before He was crucified. . .I do not despise to be called a > son of Abraham if he had a dozen wives; or to be called a brother, a > son, a child of the Saviour if He had Mary, Martha, and several others > as wives." "Journal of Discourses," Volume 2, pages 81, 82. > > THE DOCTRINE OF THE HOLY PRIESTHOOD > > Even more derogatory to the glory of Jesus, and yet of paramount > importance to the Mormon system, is the doctrine of the holy > priesthood. It is to them what the sun is to light. Without it, the > gospel has no virtue, the dead cannot be resurrected, the living > baptized, or men and women be married in the sight of God. Witness the > words of Charles W. Penrose, first counselor to Heber J. Grant, first > president, prophet, seer, and revelator of Mormonism today: > > "The ordinances of the gospel referred to in the previous tracts of > this series, cannot be effectually administered without divine > authority. . . .Baptism, even if solemnized according to the form and > pattern followed by the Saviour and His appointed servants, will be of > no avail and will not bring remission of sins, unless the officiating > minister has received authority from the Deity to act in the name of > the Father and of the Son and of the Holy Ghost. . . . This Divine > Authority was called the Holy Priesthood. . .and was established in > the Christian Church by the Saviour Himself."--"Rays of Living Light," > page 17. "There are in the church two Priesthoods, namely the > Melchisedec and the Aaronic. . . All other authorities or offices in > the Church are appendages to this Priesthood. . . .The Latter Day > Saints have this Priesthood, with its authority, ordinances, and > blessing. How they have obtained it is a very important > question."--"Compendium," page 64, 65. > > Even if their own estimate of the absolute necessity of this > priesthood to their system is accepted, yet that priesthood is found > to be strictly anti-biblical and anti-christian. I shall deal with > three aspects of the question--the Aaronic priesthood, the Melchisedec > priesthood, and the authority of both. > > The Aaronic priesthood, beginning with the calling of Aaron himself, > was distinctly limited as to both duration and place. "Thou and thy > sons with thee shall keep your priest's office for everything of the > altar, and within the veil." Numbers 18:7. 
There they were to > minister. They were not anointed priests until after the sanctuary had > been anointed. As long as God had on earth a temple or a sanctuary, > they were to be priests there, and no one else could have any right to > priesthood. Any other than a son of Aaron coming near would be put to > death. > > In this sanctuary, they were to offer the sacrifices of "bulls and > goats" (Hebrews 9:1-4), for "every high priest is ordained to offer > gifts and sacrifices (Hebrews 8:3)." Without these sacrifices, they > could not be priests. If the time should ever come when God should > remove His presence from the Aaronic temple and sanctuary, and the > blood of bulls and goats should cease to have sacrificial value, then > Aaron, with no ministry, no temple, and no sacrifice, would no longer > be a priest. > > The Bible surely declares that time to have been at the first advent > of Jesus. The Aaronic sanctuary "was a figure for the time then > present," "imposed on them until the time of reformation" (Hebrews > 9:9, 10), "till the seed should come to whom the promise was made" > (Galatians 3:19). Christ is the Seed. (Galatians 3:16) The Seed came. > (Galatians 4:4). The true sacrifice was offered. (Hebrews 9:14) No > longer had the blood of bulls and goats any value. (Hebrews 10:8-14). > The veil that had hidden Aaron's sacred priesthood from common eyes > was rent in twain. (Matthew 27:51). The time of reformation had come, > "Christ being come an High Priest." (Hebrews 9:11). "When that which > is perfect is come, then that which is in part shall be done away." 1 > Corinthians 13:10. The limits, as to both time and place, that the > Bible had prescribed for Aaron's work, were reached in Christ, and > Aaron's priesthood had to cease by limitation. Thus to claim now to > have the Aaronic priesthood, is to deny the coming, the death, the > sacrifice, and the priesthood of Christ. Hence I cannot be a Mormon. > > AARON'S PRIESTHOOD ONLY AN OBJECT LESSON > > This work of Aaron's, which ended when the priesthood of Jesus began, > was simply an object lesson to teach the Jews of Christ's priesthood > (Hebrews 8:1-5), and Aaron's priesthood had to cease before Christ's > could even begin (Hebrews 9:8). "We have a great high priest, that is > passed into the heavens, Jesus the Son of God." Hebrews 4:14. "If > therefore perfection were by the Levitical priesthood. . . .what > further need was there that another priest should rise after the order > of Melchisedec, and not be called after the order of Aaron." Hebrews > 7:11. "The priesthood being changed." Hebrews 7:12. It was not > continued subordinate, not perpetuated on earth in a lesser degree, > but changed; changed because we must have a priest, and Aaron was a > priest no longer. Christ, our new priest succeeding Aaron, is the "one > mediator between God and men," and not priests. As there could, > therefore, be but one Melchisedec priest, and as Christ, on the oath > of God, was that one forever (Hebrews 7:21), it follows with > inexorable logic "this man, because He continueth ever, had a > priesthood that passeth not from one to another." Hebrews 7:24, > margin. Then in their claim to have the Melchisedec priesthood on > earth today, Mormonism tacitly declares that Christ has again died, > and is still dead, and that His priesthood has passed to them! Do you > wonder why I cannot be a Mormon?
> > JOHN THE BAPTIST DEAD > > Now on the third aspect, the "authority," as Counselor Penrose says, > "How they have obtained it is a very important question." It is indeed > a very great question. I will grant their premise that the holy > priesthood is all in all to Mormonism, and that without it, there > could be no Mormonism. I will grant that if they have not the > priesthood they claim, the whole movement is an imposture. I will even > grant that if they do have this priesthood from God (which I have > already proved cannot be), they are the true church of God, and all > others are false. Surely I will; for the Bible clearly teaches that by > no possibility could the authority, even if it did exist, have come to > them in the way they claim. > > "Joseph Smith received a visitation from John the Baptist, who held > authority in ancient times to preach and administer baptism for the > remission of sins. He came as a ministering angel, and ordained Joseph > Smith and his companion Oliver Cowdery, to that Priesthood and > authority." Thus endowed, these young men baptized each other, and at > a later date were ministered to by the Apostles Peter, James, and > John, who ordained them to the apostleship with the authority to lay > hands on, baptize believers and confer the gift of the Holy Ghost, > also to build up and organize the Church of Christ according to the > original pattern."--"Rays of Living Light," page 27. > > In the face of this assertion, the Bible tells me that John the > Baptist, Peter, James, and John were dead. (Matthew 14:3-11; John > 21:19, 23; Acts 12:2). For a complete Bible study of the condition of > man in death, I refer you to another link on this web site. I cannot > take space here to cover such a subject. I shall, however, quote > enough Scripture to show that the Word of God teaches that dead men, > good or bad, cannot come back to earth. > > "The dead know not anything." Ecclesiastes 9:5, 6, 10. "In death there > is no remembrance." Psalm 6:5. "The dead praise not the Lord." Psalm > 115:17. "His sons come to honor, and he knoweth it not." Job 14:21. > "Man lieth down, and riseth not: till the heavens be no more, and they > shall not awake." Job 14:14. "If I wait, the grave is my house." Job > 17:13. Until our Lord Jesus Christ calls forth the dead in the > resurrection, no one can be made alive. "In Christ shall all be made > alive. But man in his own order; Christ the firstfruits; afterward > they that are Christ's at His coming." 1 Corinthians 15:21-23. The > resurrection has not yet taken place. (1 Thessalonians 4:16, 17). > > John the Baptist, Peter, James, and John are dead. They know not > anything. They have no remembrance. They are in the grave awaiting the > great day of the resurrection. Till then, they cannot come back. They > did not appear to Joseph Smith. > > WHO INSPIRED JOSEPH SMITH? > > Then who did? For I believe Joseph Smith to have been inspired by some > superhuman power. Would good angels in lying deceit have impersonated > these dead apostles? The Christian says No; so does the Bible. Who did > come to him and deceive him? There can, to the just, be but one > conclusion. Paul says that "Satan himself is transformed into an angel > of light. Therefore it is no great thing if his ministers also be > transformed as the ministers of righteousness." 2 Corinthians > 11:13-15.
As God could not deceive, nor would the holy angels do so, > doubtless some of the evil angels transformed themselves into > ministers of righteousness, and imposed on the Mormon prophet Joseph > Smith. Counselor Penrose said, in the extract quoted, that they came > as ministering angels--just as Paul said the devil would come. We > might well expect this; for Jesus warned us that in these very times, > "There shall arise false Christs, and false prophets, and shall show > great signs and wonders; insomuch that, if it were possible, they > shall deceive the very elect. . Wherefore if they shall say unto you, > Behold, He is in the desert; go not forth: Behold, He is in the secret > chambers; believe it not." Matthew 24:23-27. > > This same Bible truth of life only in Christ at the resurrection, also > brands their baptism for the dead as spurious. A careful study of 1 > Corinthians 15:12-29 will convince the truth seeker that it is the > dead, resurrected, and ascended Christ for whom, or into whom (see > lexicon or concordance), all believers are baptized. Any theory of a > second probation, the Bible condemns as a Satanic potion that lulls > the unwary past probationary repentance, into the vortex of the second > death. The Bible also clearly shows God to be the Creator and not the > begetter of mankind. Jesus is the only begotten. And it shows that > Adam at creation, and his sons at birth, had their first existence; > and that the Mormon doctrine of pre-existence is contrary to > Scriptural teaching. Such evidence, you see, prevents my being a > Mormon. > > But it does not prevent my pitying and loving the Mormon people. Can > you not, Christian reader, love them and pray for them? Pray that "the > faith which was once for all delivered unto the saints" (Jude 3, > A.R.V.) may dawn upon the minds of these poor children of Adam, and > that they may embrace "the lamb of God, that taketh away the sins of > the world;" our High Priest, Jesus, the Son of God. > > "Great peace have they which love thy law" [Psalms 119:165] > Contact us at: [11]saints at acts413.org > _______________________________________________ > paleopsych mailing list > paleopsych at paleopsych.org > http://lists.paleopsych.org/mailman/listinfo/paleopsych > > From anonymous_animus at yahoo.com Tue Sep 20 18:07:09 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Tue, 20 Sep 2005 11:07:09 -0700 (PDT) Subject: [Paleopsych] scripture In-Reply-To: <200509201800.j8KI0OX06816@tick.javien.com> Message-ID: <20050920180709.8123.qmail@web30805.mail.mud.yahoo.com> A guy who isn't Mormon says: >>The Christian's sole question must be, "What saith the Scripture?"<< --To which I (and many others) ask, "why?" And, who determines what a Christian is or isn't? Is there some kind of peer review? Michael __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From ljohnson at solution-consulting.com Wed Sep 21 01:02:22 2005 From: ljohnson at solution-consulting.com (Lynn D. Johnson, Ph.D.) Date: Tue, 20 Sep 2005 19:02:22 -0600 Subject: [Paleopsych] scripture In-Reply-To: <20050920180709.8123.qmail@web30805.mail.mud.yahoo.com> References: <20050920180709.8123.qmail@web30805.mail.mud.yahoo.com> Message-ID: <4330B11E.7080902@solution-consulting.com> Michael hits that nail on the head. Scripture is like a Rorschach, you tend to project into it what you wish. 
There is actually a fairly robust dialog going on between Mormons and Evangelical Christians, but none of that was reflected in the piece that Frank shared. Most Christians totally reject the fellow's exegesis of what happens when one dies, emphasizing Luke 16 (if my memory serves me -- the parable of the rich man and Lazarus) as opposed to bitter stuff written by old King Solomon, after his life became pointless to him. That part of the exegesis is disallowed by the fellow writing, because of the notion that every word of the bible is totally inspired, something Mormons scoff at. We have more peer review in Christianity than Islam, but perhaps less than Buddhism. That's why I referenced the fairlds.org site, there is dialog there. Yet the real question is how one lives the life one is given. Dr. Bruce Grayson (U Conn) recounted an example: a minister dies, meets Jesus, and begins asking him doctrinal questions, to which Jesus laughs and asks him, "How did you treat the people I put into your life?" Actually, the minister wasn't totally and completely dead, just mostly dead, and with a bit of Magical Max manipulation, he returned and told us the story. All true. Lynn Michael Christopher wrote: >A guy who isn't Mormon says: > > >>>The Christian's sole question must be, "What >>> >>> >saith the Scripture?"<< > >--To which I (and many others) ask, "why?" And, who >determines what a Christian is or isn't? Is there some >kind of peer review? > >Michael > >__________________________________________________ >Do You Yahoo!? >Tired of spam? Yahoo! Mail has the best spam protection around >http://mail.yahoo.com >_______________________________________________ >paleopsych mailing list >paleopsych at paleopsych.org >http://lists.paleopsych.org/mailman/listinfo/paleopsych > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at panix.com Wed Sep 21 01:39:53 2005 From: checker at panix.com (Premise Checker) Date: Tue, 20 Sep 2005 21:39:53 -0400 (EDT) Subject: [Paleopsych] Spiked: Why people hate fat Americans Message-ID: Why people hate fat Americans http://www.spiked-online.com/Printable/0000000CAD43.htm 5.9.9 Today's attacks on obese Yanks are motivated by a broader unease with affluence. by Daniel Ben-Ami If Americans had to be described with one word, there's a good chance it would be 'fat'. Americans, we are constantly told, are the fattest people on the planet. Obesity is rife. Compared with other nations the Americans are not just big, but super-size. Yet this obsession with obese Americans is about more than body fat. Certainly there is a debate to be had about the extent to which obesity is a problem in America - a discussion best left to medical experts. But a close examination of the popular genre on obesity reveals it is about more than consumption in the most literal sense of eating food. Obesity has become a metaphor for 'over-consumption' more generally. Affluence is blamed not just for bloated bodies, but for a society which is seen as more generally too big for its own good. It is especially important to examine this criticism of American affluence in the aftermath of Hurricane Katrina. An assumption underlying much of the discussion is that, at the very least, wealth did America no good in its battle with nature. An editorial in last weekend's UK Guardian caught the tone: 'America is the richest and most powerful country on Earth. But its citizens, begging for food, water and help, are suffering agonies more familiar from Sudan and Niger.
The worst of the third world has come to the Big Easy.' The implication is that America's wealth is somehow pointless. A column in the Washington Post went even further, by advocating what it described as a Confucian approach to the question. It argued that Americans 'blithely set sail on churning seas and fly into stormy skies. We build homes on unstable hillsides, and communities in woodlands ripe for fire. We rely on technology and the government's largess to protect us from our missteps, and usually, that is enough. But sometimes nature outwits the best human efforts to contain it. Last week's hurricane was a horrifying case in point. The resulting flooding offered brutal evidence that the efforts we have made over the years to contain nature - with channels and levees and other great feats of engineering - can contribute to greater catastrophes.' From this perspective, the pursuit of economic development is worse than useless: it may be well-intentioned but it only makes matters worse for humanity. To understand how a disaster such as Hurricane Katrina can become an occasion for attacking American affluence, it is worth examining the fat metaphor in more detail. Take Super Size Me, the documentary in which Morgan Spurlock lives on nothing but McDonald's food for a month. Within the first minute the American flag is shown fluttering in the wind. The voiceover then says: 'Everything's bigger in America. We've got the biggest cars, the biggest houses, the biggest companies, the biggest food - and finally - the biggest people.' Spurlock makes his assumptions even clearer in his follow-up book, Don't Eat This Book. The first chapter discusses how America has become 'the biggest consuming culture on the planet' (1). He talks of how 'the epidemic of overconsumption that's plaguing the nation begins with the things we put in our mouths' (2). Other popular works on obesity make similar points. Eric Schlosser, author of Fast Food Nation, says at the start: 'This is a book about fast food, the values it embodies, and the world it has made. Fast food has proven to be a revolutionary force in American life; I am interested in it both as a commodity and as a metaphor.' (3) Just how the Big Mac or Chicken McNugget can embody values, let alone make the world, is not made clear. Schlosser frequently argues that such food has little nutritional value but he seems happy to endow it with incredible powers to influence society. Greg Critser, a liberal and a Democrat, and author of Fat Land, talks about food consumption in almost religious terms. Like Schlosser and Spurlock he makes it clear that he is not talking about food alone. In chapter two of Fat Land he argues: 'Bigness: the concept seemed to fuel the marketing of just about everything, from cars (SUVs) to homes (mini-manses) to clothes (super-baggy) and then back again to food.' (4) In the same chapter he makes it clear that a key objection to McDonald's is that it campaigned to override 'cultural mores against gluttony' (5). Implicitly at least Critser is arguing that the Deadly Sin of gluttony should be somehow rehabilitated. Making a connection between obesity and consumption is not limited to books about fat Americans. It is a staple of many environmentalist texts. For example, Jeremy Rifkin makes a similar connection: 'The US GDP continues to expand along with our waistlines, but our quality of life continues to diminish.' 
(6) Clive Hamilton, an Australian critic of economic growth, talks of overweight people 'revealing in such a confronting way our dirty secret of overconsumption' (7). Michael Moore only refers to the obesity issue in passing - in Stupid White Men he argues: 'If you and I would eat less and drink less, we'd live a little longer.' (8) Perhaps this is a sensitive issue for him, seeing as he is no lightweight. But he does criticise America for being number one in relation to several areas of consumption, including beef, energy, oil, natural gas and calories (9). Such arguments seem to have won considerable resonance both inside and outside America. According to the latest Pew Global Attitudes survey, a comprehensive opinion poll of public attitudes in America and 16 other countries, the USA is routinely seen as greedy by Western publics. For example, 67 per cent of the Dutch, 64 per cent of Britons and 62 per cent of Canadians see Americans as greedy. Perhaps most striking of all, 70 per cent of Americans see their fellow compatriots as greedy (10). Despite the huge volume of discussion on American obesity, the key arguments put forward by the critics can be reduced to a few simple ingredients. Each of these is open to question, although it is unfortunately rare for them to be critically scrutinised: * Over-consumption is not just about food. Food is being portrayed as simply the most conspicuous example of a society that consumes too many resources. * Consumption is widely seen as a problem. At the very least it is regarded as incapable of making Americans, or any other people, happy. At worst it is portrayed as having a down side or even being akin to a disease. So in their best-selling Why Do People Hate America?, Ziauddin Sardar and Merryl Wyn Davies argue that 'the "virus" of American culture and lifestyle replicates so readily because it is founded on the premise of abundance, the lure of affluence' (11). Similarly, an American documentary and follow-up best-seller on affluence was called Affluenza (12). * The sin of gluttony, it is widely argued, needs to be rehabilitated. Usually the argument is put in secular rather than religious terms but the content is the same. What is being suggested is that a morality of limits needs to be popularised. People should apparently be encouraged to limit their consumption - whether the limits are voluntary or imposed by law. Often controls on advertising are also favoured as it is seen as somehow propagating the culture of consumption. Let us examine each of these arguments in turn. Then we can consider why the attack on consumption has come to the fore in recent years. The end of hunger The first argument has some truth to it, in that there is a close relationship between food consumption and the use of resources more generally. So criticising the pervasiveness of cheap food in rich societies can also be a way of attacking affluence. In other words, the wide availability of food can become a metaphor for large-scale consumption more broadly. But critics of cheap food forget that its attainment is a considerable historical achievement. Most of human history involved a constant struggle to find enough food. The battle against hunger was the norm. This is still true in much of the developing world, where the World Bank estimates that 815 million people ate too little to meet their daily energy needs in 2002 (13). So to have achieved a situation where, at least in the developed world, food scarcity is virtually eliminated is a tremendous achievement.
As well as being good in itself, it also allows people to spend more time on other things rather than struggling to meet their most basic needs. Of course this does not mean that obesity cannot have negative consequences. But it should be recognised as a problem associated with success. Food today is more plentiful and of better quality than ever before. No doubt quality can improve still further in the future and other factors, such as insufficient exercise, can be tackled. However, these are relatively small challenges compared with the historic battle to rid the world of the scourge of hunger.
Is consumption the problem?
As for the second argument, that consumption is in itself somehow bad - that is deeply flawed. On the contrary, humanity has benefited enormously from economic growth and the attendant increase in consumption. It has allowed people to live longer and healthier lives than ever before. It has also brought enormous cultural benefits, as people have more leisure time rather than focusing their entire lives on survival. Critics of consumption start from the incorrect assumption that there is a finite amount of resources in the world. From such a narrow perspective any consumption by one group of people is inevitably at the expense of another. This perspective also assumes, wrongly, that the world is in danger of running out of resources. Food provides a good example of the flaws in this argument. Just because people in America are well-fed it does not follow that they are depriving those in, say, Ethiopia or Niger. There is no reason why with higher productivity - more production of food per person - everyone in the world should not have enough to eat. The problem is not too much food in America but too little food in the developing world. The aspiration should be to raise the levels of consumption in poorer countries to match those in the West. Instead, the anti-consumption campaigners seem to want to concentrate on reducing the level of consumption in the rich world. What is true for food also holds for other resources. It is not as if there is a set amount of resources which will be used up as society becomes wealthier. On the contrary, as the world becomes richer the amount of resources available to humanity also expands. For example, for Stone Age man, or even in the early twentieth century, uranium and plutonium were of no use to humanity. But with economic development it became possible to use them as power sources. A wealthy society can utilise more resources and use them more efficiently than a poor one. That is why the doom-mongers' arguments about the world running out of resources - which have been made in one form or another for over two centuries - have always been proved wrong.
Consumption and happiness
As for the contention that economic growth does not make people happier - that is less clear-cut than is made out. It is certainly true that, objectively speaking, Americans are better off than ever. As Gregg Easterbrook writes, comparing today with the 'Golden Age' of the 1950s: '[I]n real dollars almost everything costs less today than it did then, healthcare is light-years better, three times as many people now make it to college, and the simpler, more innocent ethos of the 1950s denied the vote to blacks and job opportunities to women.' (14) Whether people feel subjectively happier is a more complex question. In Britain there are certainly those, such as Richard Layard, who argue that beyond a certain point society becomes no happier as it becomes wealthier (15).
It is also widely assumed that Americans are less happy than those in other developed societies. But there are some grounds to dispute this view. According to the conclusion of a recent opinion poll by Harris Interactive: 'The big picture is that Americans are much more satisfied with their lives, much more likely to believe that their lives have improved and much more likely to expect their personal situations will improve than most Europeans.' (16) The poll found that 58 per cent of Americans said they were very satisfied with their lives in 2004-5 compared with 31 per cent of Western Europeans in a similar European Union survey. Only the Danes, with 64 per cent saying they were very satisfied, were happier than the Americans. But let us assume that, as many polls seem to indicate, there is a pervasive sense of unhappiness in American society. It does not follow from this premise that economic growth is necessarily bad or should be downplayed. On the contrary, its objective benefits should be clear. The widespread sense of disaffection certainly raises interesting questions about American society and about the developed world more generally. In particular, why, despite greater affluence than ever before, is there widespread foreboding about the future? By simply assuming that economic growth is the problem, the anti-consumption critics avoid asking difficult but critical questions about the prevalence of social pessimism in contemporary society (17).
Limiting consumption
Finally, let us examine the idea that consumption should be limited in some way - that, in either a secular or religious form, the notion of gluttony should be rehabilitated. The call for limits is a central element of contemporary politics, whereas in the past the focus was on how best to make society wealthier so that everyone could benefit. The discussion of food and healthy eating clearly provides a metaphor for placing limits on consumption more generally. It is a useful way of illustrating the argument that individuals should consume less. In addition, the assumption is that people should eat what the anti-consumption lobby designates as healthy food rather than 'junk'. Underlying all this are the implicit assumptions that consumption needs to be limited and pursued in a responsible way. In other words, there is a strong element of moralism from the preachers of limited consumption. Coming from a traditional preacher, such views might be laughed at. But placed in the context of health - for individuals or society - they are taken seriously. Support for limits is also expressed in more general terms by contemporary thinkers. George Monbiot, a Guardian columnist and environmental campaigner, is a prime example. In his view, the world has reached the stage where 'the interests of global society will be served primarily by restraint' (18). Although Monbiot is often seen as a radical critic of society, it would be more accurate to see him as a mainstream advocate of imposing limits on consumption. The case for limits was put in more theoretical terms by Christopher Lasch, a prominent American social critic who died in 1994. In The True and Only Heaven he developed an intellectually coherent case against progress. Lasch, who was generally associated with the left, criticised liberals for not seeing 'the positive features of petty-bourgeois culture: its moral realism, its understanding that everything has a price, its respect for limits, its scepticism about progress' (19).
Such views are not simply social theory; they have become embodied in contemporary policy. For example, the concept of 'sustainable development' has been accepted by international agencies, such as the United Nations and the World Bank, as well as by national governments. The notion of sustainable development itself embodies the need for limits. Our Common Future, a landmark UN report first published in 1987, clearly defined sustainability in these terms, describing it as 'development that meets the needs of the present without compromising the ability of future generations to meet their own needs'. It goes on to say that sustainable development contains 'the idea of limitations imposed by the state of technology and social organisation on the environment's ability to meet present and future needs' (20). The UN has also more explicitly spelt out its assumptions in relation to consumption. Its Agenda 21 report to the 1992 Rio Summit made this clear. One of the principles of the report was 'to achieve sustainable development and a higher quality of life for all people', and it called on states to 'reduce and eliminate unsustainable patterns of production and consumption and promote appropriate demographic policies' (21). More recently the 1998 edition of the UN's annual Human Development Report was on 'consumption for human development'. Although it starts by acknowledging the advantages of consumption, it soon changes tack to talk of its downside: 'Today's consumption is undermining the environmental resource base. It is exacerbating inequalities. And the dynamics of the consumption-poverty-inequality-environment nexus are accelerating.' (22) National governments have also taken on board such ideas. For example, under President Bill Clinton the USA had the President's Council on Sustainable Development (PCSD) (23). One of its task forces, created in 1994 and reporting in 1996, had the specific role of looking at population and consumption (24). The arguments in this report were posed in terms of finding a better balance between consumption and population on the one hand, and the environment on the other.
The sin of gluttony
So it should be clear that the idea of limiting the growth of consumption is mainstream. Gluttony has, in a subtle way, been reinstituted as a sin. It does not apply just to food but to consumption of resources more broadly. The need to impose limits on consumption is accepted by national governments and by influential multilateral agencies. Of course, many would go along with the idea that limits should be placed on consumption. They might reject the use of religious language but they would accept the notion of sustainability. After all, such a view has become the conventional wisdom. However, those who hold to this view should remember that there is still an enormous amount of work to be done to raise the level of consumption in the world. This is most clear in relation to the poor countries. According to the World Bank, 2.7 billion people were living on less than $2 a day in 2001, of whom 1.1 billion lived on less than a dollar (25). A lot remains to be done to raise a large proportion of the world's population to the consumption standards we enjoy in the West. But even in the developed world there is still much to do. For example, there is much talk of a 'demographic time bomb', meaning the claim that it will not be possible to provide a decent income for pensioners.
However, with economic growth, and higher consumption levels for all, there is no reason why this problem cannot be solved. A more productive economy is key to solving what is often wrongly cast as an intractable problem (26). Perhaps the most misleading aspect of sustainable development is its supposed orientation towards the future. It wrongly assumes that curbing consumption growth will benefit future generations. The opposite is true. Holding back on economic growth means that future generations will be less wealthy than they would otherwise be. It means that they will be in a weaker position to tackle their problems and live an affluent life. The worst that we can do for the future is put limits on economic growth in the present. Before examining why anti-consumption sentiment has come to the fore, it is worth saying something about advertising. The whole anti-fat genre makes much of the fact that fast food companies spend a huge amount on advertising - particularly aimed at children. In itself, the discussion of advertising is not new. Back in 1957 The Hidden Persuaders, a study of how the American advertising industry was shaping personal behaviour, was first published (27). What is different today is the power attributed to advertising across the whole of society. It has now become mainstream to rail against the advertising industry. Anti-consumerist campaigns such as Adbusters are highly respected (28). What is rarely commented on are the elitist assumptions on which the anti-advertising campaign is based. Its starting point seems to be the snobbish view that people are somehow duped into consuming by the advertising industry. Yet, in reality, consumption is popular precisely because people like to be better off. This is the big weakness of the anti-consumption movement. It wants to persuade people to curb their consumption but, from a common-sense perspective, individuals rightly prefer to be richer rather than poorer. Advertising might persuade them to eat in, say, McDonald's rather than Burger King, but it is not needed to persuade people to enjoy consumption. The pervasiveness of attacks on affluence, despite the benefits of consumption, raises the question of why they are so popular. Criticisms of consumption are not new. Back in the late nineteenth century Thorstein Veblen, an American sociologist, coined the term 'conspicuous consumption' (29). But such criticisms have never been so pervasive. From the 1970s onwards, anti-consumption sentiment has moved from being an elite preoccupation to the mainstream (30). There are many reasons why this shift has occurred. A complete explanation would demand a comprehensive examination of how society has changed in recent years. But a key part of the reason is the institutionalisation of the idea that there is no alternative to the market. Capitalism, in one form or another, is seen as the only realistic way of organising society. In terms of political debate there is what Thomas Frank, a liberal social commentator, describes as 'the systematic erasure of the economic' (31). In other words, cultural matters are open for debate but fundamental economic questions are not. Or as Will Hutton, a British liberal commentator, puts it: 'The allegedly futile and empty materialist culture [is] deplored by conservative, liberal and religious fundamentalist alike.' (32) One way to understand this point is in relation to consumption and production. Matters related to the sphere of consumption are open to debate.
This includes not just the literal act of consumption itself but related questions such as brands and identity. To the extent that the economy more broadly is discussed - whether by critics or those who are pro-business - it is from the perspective of consumption. Areas such as advertising, brands and marketing are given inordinate importance. In contrast, the productive sphere is seen as fixed. This is not just a technical question of the manufacturing process for, say, semiconductors or plasma screen televisions. It means that the possibility of developing a qualitatively better economy is denied. Humanity's creative potential, including the possibility of transcending the limits of the market, is banished from discussion. As a result, a one-sided view of humanity has taken hold. The consumption of resources, important as it is, is given too much weight. Human beings are seen as parasites on the planet using up the world's natural resources. In contrast, the creative side of humanity, the ability to solve social problems and create a more productive society, is at best downplayed in importance. At worst, the capacity of human beings to create a more productive society is seen as a problem, a destructive characteristic, rather than the positive quality it represents. That is why fat Americans are so widely hated. Overweight Americans represent, in caricatured form, the affluence of US society. They are the personification of a society in which scarcity, if not eliminated, has become marginalised. Yet we live in a world in which consumption is seen as a problem and the possibility of creating a better society is seen as unrealistic. By focusing on fat Americans the critics of consumption are saying, implicitly at least, that people should consume less. They are arguing for a world in which Americans become more like those who live in the poorer countries of the world. From such a perspective equality means levelling everyone down rather than raising the living standards of the poor. It means giving up on the battle to resist hurricanes or to reclaim land from the sea. Yet implementing such a viewpoint is a super-size mistake. Our aspiration for the world should be to give the poor the advantages of affluence enjoyed by those in the West. Living standards in countries such as Ethiopia and Niger should be, at the very least, as high as those in America today. In that sense we should all aim to be fat Americans.
Daniel Ben-Ami is the author of Cowardly Capitalism: The Myth of the Global Financial Casino, John Wiley and Sons, 2001 (buy this book from [2]Amazon (UK) or [3]Amazon (USA))
Read on: [4]spiked-issue: Obesity
(1) Don't Eat This Book, Morgan Spurlock, Penguin 2005, p6
(2) Don't Eat This Book, Morgan Spurlock, Penguin 2005, p7
(3) Fast Food Nation, Eric Schlosser, Penguin 2002, p3
(4) Fat Land, Greg Critser, Penguin 2003, p29
(5) Fat Land, Greg Critser, Penguin 2003, p20. Also see Rachel Cooke, 'The big issue', Observer, 9 March 2003. See [5]Fat and fiction, by Rob Lyons
(6) The European Dream, Jeremy Rifkin, Polity 2004, p81
(7) Growth Fetish, Clive Hamilton, Pluto 2003, p95
(8) Stupid White Men, Michael Moore, Penguin 2002, p156
(9) Stupid White Men, Michael Moore, Penguin 2002, p174
(10) 'American character gets mixed reviews', Pew Global Attitudes Project, 23 June 2005, at the [6]Pew Global Attitudes Project website. Americans are also viewed as violent and not to be trusted in relation to the global environment.
On the positive side they are often seen as hardworking, honest and inventive. The survey was conducted among nearly 17,000 people in 16 countries, including America.
(11) Why Do People Hate America?, Ziauddin Sardar and Merryl Wyn Davies, Icon 2003, p117
(12) See the [7]Affluenza website
(13) [8]Progress towards the Millennium Development Goals, 1990-2005, United Nations Statistics Division
(14) The Progress Paradox, Gregg Easterbrook, Random House 2004, p78
(15) Happiness, Richard Layard, Allen Lane 2005. For a critique of such arguments see [9]Economic misery, by Benjamin Hunt
(16) Quoted in 'Contented cowboys', Wall Street Journal Europe, 17 August 2005
(17) For an alternative explanation of social pessimism see Culture of Fear, Frank Furedi, Cassell 1997
(18) [10]A restraint of liberty, George Monbiot, Guardian, 24 May 2005. Monbiot argues that the threat of climate change means restraint is necessary. For more on Monbiot's anti-consumption approach see [11]Recipe for austerity, by Daniel Ben-Ami
(19) The True and Only Heaven, Christopher Lasch, Norton 1991, p17
(20) Our Common Future, Oxford University Press 1987, p43
(21) [12]Report of the United Nations Conference on Environment and Development, Rio de Janeiro, 3-14 June 1992
(22) [13]Consumption for Human Development, UN Human Development Report 1998
(23) See [14]President's Council on Sustainable Development
(24) See [15]Population and Consumption Task Force Report
(25) [16]Global poverty down by half since 1981 but progress uneven as economic growth eludes many countries, World Bank, 23 April 2004
(26) See [17]Ageing: the future is affordable, by Phil Mullan
(27) The Hidden Persuaders, Vance Packard, Penguin 1960
(28) See the [18]Adbusters website
(29) [19]The Theory of the Leisure Class, Thorstein Veblen, 1899, chapter IV. For a discussion of Thorstein Veblen, see [20]Conspicuous consumption, a century on, by George Blecher
(30) The popularity of EF Schumacher's Small is Beautiful, first published in 1973, is a signal of the growing pervasiveness of attacks on large-scale consumption
(31) What's the Matter with Kansas?, Thomas Frank, Metropolitan 2005, p127
(32) 'Shopping and tut-tutting', Will Hutton, Observer, 4 September 2005. Hutton then makes the mistake of arguing that the way to counter the attack on consumption is simply to embrace shopping.
References
2. http://www.amazon.co.uk/exec/obidos/ASIN/0471899631/spiked
3. http://www.amazon.com/exec/obidos/ASIN/0471899631/spiked-20
4. http://www.spiked-online.com/sections/health/obesity/index.htm
5. http://www.spiked-online.com/articles/0000000CABC8.htm
6. http://pewglobal.org/
7. http://www.affluenza.org/
8. http://unstats.un.org/unsd/mi/mi_coverfinal.htm
9. http://www.spiked-online.com/articles/00000006DDBD.htm
10. http://www.monbiot.com/archives/2005/05/24/a-restraint-of-liberty
11. http://www.spiked-online.com/articles/00000006DE37.htm
12. http://www.un.org/documents/ga/conf151/aconf15126-1annex1.htm
13. http://hdr.undp.org/reports/global/1998/en
14. http://clinton1.nara.gov/White_House/EOP/pcsd/index.html
15. http://clinton2.nara.gov/PCSD/Publications/TF_Reports/pop-toc.html
16. http://web.worldbank.org/WBSITE/EXTERNAL/NEWS/0,,contentMDK:20194973~menuPK:34463~pagePK:64003015~piPK:64003012~theSitePK:4607,00.html
17. http://www.spiked-online.com/articles/0000000CA4E3.htm
18. http://www.adbusters.org/
19. http://xroads.virginia.edu/~HYPER/VEBLEN/veblenhp.html
20. http://www.spiked-online.com/articles/0000000CA542.htm
21.
http://www.spiked-online.com/Articles/0000000CAD43.htm From checker at panix.com Wed Sep 21 01:40:00 2005 From: checker at panix.com (Premise Checker) Date: Tue, 20 Sep 2005 21:40:00 -0400 (EDT) Subject: [Paleopsych] NYTBR: Manga for Girls Message-ID: Manga for Girls http://www.nytimes.com/2005/09/18/books/review/18glazer.html By SARAH GLAZER Walk into almost any chain bookstore and you're likely to find a teenage girl sprawled on the floor reading manga - thick black-and-white comic books by Japanese authors. Graphic novels, including manga, have been popular with American boys for years now. But, to the surprise of publishers, "shojo" comics (or manga for girls) have become one of the hottest markets in the book business. Two publishers - Viz Media, which is Japanese-owned, and Tokyopop, an American company - have been the leaders in the American manga market, which has more than doubled since 2002, helped along by a $5 billion business in related animated films, TV series and licensed products like dolls and action figures. Del Rey, in the Random House Publishing Group, has become the first New York publisher to enter the shojo market in a big way (in partnership with Kodansha in Japan). Last year, Del Rey sold a million copies of its first 16 releases combined. Next year, it plans to bring out close to 85 manga titles, most of them aimed at teenage girls. Shojo - the word means girl in Japanese - frequently involves a lovelorn teenager seeking a boyfriend or dealing with situations like entering a new school, being bullied or trying to break away from a clique. There are also action stories featuring girls in strong roles as scientists and samurai warriors. (The shojo genre has been called "big eyes save the world," after the characteristic drawing style of girls with saucer-shaped eyes who are sometimes endowed with supernatural powers.) But parents and teachers, who are sometimes happy to see teenagers reading just about anything, might be caught off guard by some of the content of the girls' favorite books. Among the best-selling shojo are stories that involve cross-dressing boys and characters who magically change sex, brother-sister romances and teenage girls falling in love with 10-year-old boys. Then there's a whole subgenre known as shonen ai, or boy's love, which usually features romances between two impossibly pretty young men. Shonen ai themes are common in Japanese comics for girls (as they are in Japanese literature) but, intriguingly, they appear to be almost as popular with girls here. "Fake," a best-selling series from Tokyopop, the largest manga publisher in the United States, revolves around two New York City police officers who look more like male fashion models. The older, more experienced one surprises his novice partner with a French kiss halfway through Volume 1, complete with cinematic close-ups. "Fake" is the No. 1 manga series requested by teenage girls in Glendale, Ariz., but Kristin Fletcher-Spear, a librarian who specializes in teenage services there, says she refuses to purchase it because of the graphic sex scenes in the last volume. That volume, the seventh, which finally lands the heroes in bed, is shrink-wrapped and stamped "Mature" for ages 18-plus by Tokyopop. (Volume 1 is rated for readers 13 and older.) So far, publishers have been relying on their own age-rating schemes, and there's no central governing body enforcing a uniform rating system. 
While parents have campaigned against books by authors from Judy Blume to Roald Dahl, there have been few complaints about manga, according to a survey Fletcher-Spear conducted of 100 librarians around the country. That could be because most adults have never even heard of it. (More than 40 percent of the general population is still unfamiliar with the genre, according to market research released by Viz Media.) And manga is unlikely to catch the attention of the local P.T.A. because teachers don't typically assign comics as homework or accept them for book reports. "Manga has been below the radar of the kind of people who insist certain works be pulled from library shelves," says Gilles Poitras, a librarian at Golden Gate University in San Francisco who leads manga workshops for librarians. "I'm expecting more challenges to come up in the next few years." During her five-year tenure as a young-adult librarian in Fort Wayne, Ind., which ended in 2002, Katharine Kan had many requests for manga, she said, but she could recall only one complaint, concerning nudity in a story about a boy who changes into a girl when splashed with water. Publishers say they encounter the most resistance to manga not from parents but from independent booksellers, like JoAnn Fruchtman, owner of the Children's Bookstore in Baltimore, which does not stock any manga. "I feel most of it is quite violent and the outcome is not necessarily as uplifting as I think literature should be," she says. Shojo comics in America are mostly translated from Japanese originals, and like all manga are meant to be read back to front, right to left. If you open a typical shojo like a regular book, you're likely to see a note saying: "Stop! You are going the wrong way!" The idea is that it's "a completely different reading experience," as one popular series, "Othello," explains on its first (that is, last) page. Shojo has also spawned a fashion craze among girls for dressing up as their favorite characters; the oddest must be "Gothic Lolita" - an innocent-girl-gone-bad look that involves black frilly Victorian dresses and a little girl's bonnet or headband. All this adds to the books' cult appeal. Manga has been the engine driving one of the fastest-growing segments of publishing - graphic novels, according to Milton Griepp, publisher of ICv2, an online trade publication. Manga sales alone surged to $125 million last year, from $55 million in 2002, and girls and women account for about 60 percent of manga's readership. The strongest market right now is among girls aged 12 to 17, according to Tokyopop and Viz Media. The proliferation of Web sites and high school clubs devoted to manga testifies to the devotion of its fans. And not all of it is R-rated or gender-bending. "Fruits Basket," a series with 11 volumes already, is No. 1 on the latest shojo best-seller chart compiled by ICv2; it follows the adventures of a girl who moves in with a supernatural family suffering under a curse. The latest volume is expected to sell well into six figures, according to Tokyopop, its publisher. But sales figures don't fully capture the fan base among shojo readers, who can whip through a volume in half an hour in a bookstore and often pass along copies to friends. Librarians complain that shojo books are so worn out from multiple readers that they quickly fall apart. 
Manga titles in general are among the most popular young-adult books at the Brooklyn Public Library, according to one librarian, Joe Anne Shapiro - four of the top five young-adult books on the current reserved list are shojo books (the No. 1 spot going to the fantasy best seller "Eldest"); one recent volume of "Fruits Basket" had 90 holds placed on it, she said. However, like the owner of Children's Bookstore, some librarians and booksellers are unwilling to stock shojo because of their concerns about nudity, sex and violence. "I'm constantly amazed at what I see. Books that appear appropriate for little girls all of a sudden have a girl and boy in bed together," Betsy Mitchell, vice president and editor in chief of Del Rey, says of Japanese shojo that she declines to publish here. Some publishers have expurgated manga for the American market. But they run the risk of outraging fans, many of whom buy the books in the original Japanese and find fan translations online (known as "fanlations") to make sure they're getting their manga pure. After DC Comics deleted some sexual and violent content from "Tenjho Tenge" - a "Lord of the Flies" set in Tokyo - incensed readers attacked the publisher for censorship on fan Web sites and picketed its booth at comic conventions. "We've made a serious decision to publish only series that we didn't have to do that to because readers that enjoy manga hate seeing it touched in any way," Mitchell says. At a Manhattan Barnes & Noble recently, I found 14-year-old Hilary Roberts sitting on the floor in the manga section absorbed in "Ray," a series by Akihito Yoshitomi, about a nurse who uses her X-ray vision to save patients. She also steered me to "W Juliet," by Emura, in which a boy disguised as a girl wins the part of Juliet in the school play and falls in love with the androgynous-looking girl playing Romeo. "Normal American comics like 'Superman' don't appeal to me that much. They focus more on superheroes and fighting evil. Manga has more fantasy and it's more romantic," Roberts, who is in 10th grade at Bronx High School of Science, told me, adding, "I think the art is prettier." That about sums up why girls long ago abandoned American superhero comics, a market increasingly dominated by adult male collectors saving mint-condition comics in plastic bags. Some manga experts have tried to explain the huge popularity in Japan of shonen ai and "yaoi" - its racier version aimed at older women - by pointing to the lack of romance in traditional Japanese marriages and the more restrictive dating codes common until recently. In Japan, manga are "like the release of the id," says Frederik L. Schodt, a Japanese translator and author of "Dreamland Japan: Writings on Modern Manga." But why should teenage girls in anything-goes America be equally attracted? "It's safer, especially if you're a younger teen, because it doesn't put you in the story; you can relate and not feel it's something you have to emulate" sexually, says Robin Brenner, a librarian in Lexington, Mass., who runs a manga fan group. The frequent Shakespearean switches of sexual identity also mirror the fluctuations for girls at puberty between feeling like a tomboy and a sexual woman, several psychologists suggested. As for the standard plot, in which a resistant male is overwhelmed by an older male pursuer, girls may get a thrill from seeing the tables turned on traditional sex roles. 
In June, Viz Media introduced "Shojo Beat," the first manga magazine aimed at American girls, and last month Tokyopop began serializing manga in CosmoGirl, which has six million readers. Both publishers have also announced novelizations of popular shojo. Tokyopop's spring lineup includes a novel version of a best-selling shonen ai, "Gravitation," about a love affair between a high school boy and a famous male romance author (rated for 16 and up). And Viz will release two novels aimed at girl readers under its new Shojo Beat fiction imprint. Next month "Socrates in Love," an all-time best-selling romance novel in Japan about a girl who falls ill with leukemia, will come out, and early next year "Kamikaze Girls," about the high school friendship between a female biker gang member and a frilly Lolita type, will be published. Publishers are also banking on girls staying interested in manga as they head into their college years and beyond. Masumi Homma O'Donnell, publisher of the New York-based Central Park Media imprint BeBeautiful, believes that the sexier yaoi novels, which her company introduced in the United States a year ago, are the key to keeping female readers. Books like BeBeautiful's "Kizuna," a male love triangle featuring explicit sex scenes between men, have been selling strongly at Borders and Barnes & Noble, and adult women make up the most enthusiastic readers. Among the almost 500 fans who lined up for a book signing by the author, Kazuma Kodaka, in New York City in May, only two were men, ICv2 reported, and they were seeking autographs for female fans. "We hit gold with shojo with 14-year-old girls," says Eric Searleman, an editor at Viz Media. "Now we have to lay the groundwork for the 20-year-old woman." The industry will be watching closely to see if 20-somethings can be enticed by six top-selling Harlequin romances coming out in manga format starting this December. The new joint venture between Harlequin Enterprises and Dark Horse Comics, based in Oregon, will feature two color-coded series: pink for 13-year-olds and up; violet for older teenagers and readers in their early 20's. Harlequin sees the venture as an opportunity to take their novels to "a much younger audience" than the typical romance reader, says Mary Abthorpe, vice president for new business development. More than 250 Harlequin books have already been released as manga in Japan, where manga are read much more by adults than they are here. Whether American women will enjoy their bodice-rippers in comic-book form as much as Japanese readers remains to be seen. Sarah Glazer last wrote for the Book Review in April, about self-publishing. From checker at panix.com Wed Sep 21 01:40:20 2005 From: checker at panix.com (Premise Checker) Date: Tue, 20 Sep 2005 21:40:20 -0400 (EDT) Subject: [Paleopsych] Jesse Friedman: House vs. Home: A Semiotic Analysis of Real Estate Staging Message-ID: Jesse Friedman: House vs. Home: A Semiotic Analysis of Real Estate Staging http://www.jessefriedman.com/writings/college/semiotics%20paper.htm Anthropology 27500 Term Project Paper, due June 13, 2003 In an open house, a chair can be more than a convenient object on which to sit and nibble on crudités.
Its type and location form part of a metapragmatics (imposed rule of interpretation) that suggests the functionality of the room in which it is placed: a potential buyer sitting on an armchair looking at the other side of the room at an angle could imagine her favorite Monet replica instead of the abstract Rothko on the wall, or family photos instead of the small floral arrangement on the charming, spotless mantelpiece. On the other hand, its design and colors denote a certain fashionability, which connotes a certain lifestyle or social class: the Monet admirer's husband might imagine himself lounging (but not slouching!) in a sharp outfit with a martini in his gracefully poised hand, hobnobbing with a fashionable crowd flocking to his swanky cocktail party. If we understand this chair as a facilitator of interpretation and artifactualization (construction as an object of study) of an architectural feature, as well as a signifier of social class and a desired lifestyle, it no longer appears to be a single-function object, but rather an important piece in the strategic preparation of a house for sale. A semiotically informed analysis of promotional literature by practitioners of home staging can explain how the discipline manipulates signs and symbols to increase the sales price of a house. Theories. Staging, "invented" by Realtor Barb Schwarz of Bellevue, Washington in the mid-1970s and registered as a trademark in 1990 (StagedHomes), is, to use a goal-oriented definition popular in the industry, "the process of preparing a house to be sold" (Berrios), which "involves the same commonsense advice that real estate agents have been giving home sellers for decades, but with some fresh additions" (Wickell). In their drive to support a higher or faster sale (indeed, many people learn of staging through articles with titles such as "Home Staging Equals Quick Sale" (Fowler)), the growing hundreds of professional stagers[1] cover the basics such as cleanliness and sometimes recommend substantial renovations. However, their greatest and most frequent challenge is to improve the desirability of the house on a limited budget and with the sellers still in occupancy, within which selective arrangement of the objects of daily life, especially of furniture, becomes the primary tool. It is no accident that staging has only in the last few decades emerged as such; it marks the latest stage in a process that has been long in the making. Baudrillard's 1968 landmark semiotic exposition Le système des objets (The System of Objects) discusses his era's "liberation" of home furnishings "from ritual, from etiquette, from the entire ideology that make the environment an opaque mirror of a reified human structure" (25) of the preceding traditionalist, symbolist régime, where every object has its own meaning and associated morals, and the combination of objects is highly prescribed. Just as over time people have become liberated within, but not emancipated from, the societal structure[2], so too has the "'functional' evolution" in modernist design, which "signifies only the liberation of the function of the object, and not of the object itself," (25) freed furniture from any naturalized symbolic signification and instead located its meaning in its context among the other furniture in the room -- "the self-referencing, hyperreal system of appearances based on the play of signifiers alone" (Gottdiener 43).
By not being liberated itself, the object still has function, and it is these functions that form part of the core of the stager's repertoire. The staged home can be understood to operate with Saussurean codes relating a signifier to a signified. In The Fashion System, Barthes explains how the clothing fashion industry operates with stacked signification, meaning that there are multiple systems of hierarchical code systems, each comprising one-half of the next level up. On the first level, a signifier, such as printed clothing, might be felt to be equivalent to fashion, the signified, but on the level stacked above it, in the idea that 'This year, prints are a sign of the races', the signifier is the sentence-as-such, where the signified is the proposition, the collection of ideas that constitutes the first level, embodied in the meaning of the sentence[3] (Barthes 34-35). In the staged environment, the first system is of objects (the structure and its contents) in ensemble denotationally signifying their functionality to the observer or user -- and since functionality is liberated, this signified could mean many things to different people. In the second level, the ensemble that is the object-functionality pair of the first level constitutes the signifier, and the lifestyle they connote to the beholder is the signified. The difference between the two levels is that of profane and sacred, housing and dwelling (Saegert, in Belk 24), house and home. It is a distinction that did not exist in the days when property inheritance was more common and design of the house proper was more likely to be built by and directly reflect the ideologies, as Baudrillard names them, of its owners. Only recently, with the vastly increased commoditization of real estate over centuries past and house designs that hold far less of a moral order, have the parts been in place to separate the two. The house has become increasingly similar to other products--being bought and sold, used and discarded like a car or washing machine. Home, on the other hand, involves a commitment not of money, but of time and emotion. It is the place where we invest dreams, hopes, and care. Although we can buy the props and freedom that make such an investment possible and secure, the phenomenon of home itself cannot be commoditized.... Yet the increasing commoditization of the house engenders a confusion between house and home because it is the image of home that is bought and sold in the marketplace. (Dovey, 53-54, quoted in Belk 24) This framework makes the first Barthesian system, of objects and their function, the house, the commoditized object being resold. It is the location of the exchange value, the product's worth in the marketplace. The second system, where the ensemble of items with and within the house signifies a lifestyle, comprises the home, and represents the use value, the usefulness of the thing. Marxists have long held that "capitalist commodity manufacturers produce objects for their exchange value, whereas purchasers of those objects desire them for their use value" (Gottdiener, 180). By exploiting dreams with tantalizing suggestions, staging strives to drive up the perception of use value of the home in the buyer's mind, making them feel more desire for the house, and thus willing to pay more to own it and build their life there.
A chronic problem in real estate, that the attachment the sellers feel toward their home leads them to overvalue their house, can be thereby overcome by making the next buyer feel a similar connection to their future home, a feat accomplished by a host of transformative procedures aimed at heightening the confusion between house and home. Neutralization. Every home is sacred to those family members whose fancies and personalities it reflects, but just as the selling family has applied countless dollars, hours, and emotions in making the place their own, so too will the house's next residents. Therefore, when put up for sale, it is necessary that the presentation of the house "allow potential buyers to focus on the home and envision themselves in it. This is difficult to do when [the sellers'] pictures, trophies and name are displayed throughout the house" (Mayhugh). The buyer is looking to establish their own home, and not purchase someone else's second-hand, and "private spaces" that can be highly personalized, such as the hearth, "serve as inner sanctums in a society favoring individualism" (Belk 10). Not only is it easier for the buyer to picture their "community family altar" (Belk 10) if the preceding one has already been removed, but the removal of all sorts of private goods, from photos on the walls to prescriptions in the bathroom, eliminates sources of even momentary discomfort (Mayhugh) in a society that draws a highly rigid line between public and private. In preparing a tabula rasa for the buyer's imagination, not only must indices of the previous owners be removed, but often those of their home in general as well. As ideologies often engender stereotypes, and negative preconceptions can often hold more swaying power than positive associations, avoiding them altogether further liberates the potential use value, and further depersonalizes; one Realtor strongly admonishes, "Don't allow your home to cause even the mildest controversy. Remove any signs, posters, emblems, symbols, icons, artwork, etc. that might be considered controversial. Keep your political views to yourself" (Bouton). Some recommend going even further, especially to those putting in some touch-up work, by modifying the house itself. This is most easily done by repainting, with which "it is absolutely imperative to make your home as generic as possible. When in doubt go neutral" (HomeStagingOnline). While a buyer might happen to like those unobtrusive tones, their raison d'être is in fact negatively oriented, in the quest to be the least offensive possible. When neutralized, the house "can be its most attractive, and appeal to the greatest number of potential buyers" (Litchfield), and hopefully bring the economic advantage of multiple competing bids on the property. Invitation. Avoiding intrusion and conflict is not the only goal of depersonalization, however; with objects judiciously arranged throughout the house, the potential buyer is invited to picture himself and his own objects in the house. Like all who hope to profit from manipulating the mind (one Accredited Staging Professional's promotional copy prominently advertises her "undergraduate and graduate degrees in the Social Sciences, including Sociology and Psychology" (StagedHomes), and industry literature frequently, although rather unscientifically, proudly waves its hands at its scientific basis), stagers must strike a careful balance lest their constructed world be too unrealistic and exposed for the façade it is.
Certainly, most of the time objects must be removed, because oftentimes, free space facilitates mental moving-in better than suggestion through placeholders. Since "buyers want to see their vehicle and work bench in your garage, their computer system and home office in your den," it is best to provide "lots of room for visualization" (Bouton) by removing the seller's items from the showcased house. On the other hand, human beings rely heavily on the power of suggestion (as we shall see in greater detail when discussing lifestyle), and at least a little "personality" must be either allowed to remain or be consciously added. Overly sparse rooms, rather than leaving a great deal of space for the imagination to run wild, instead strand the potential buyer without a framework for picturing his new home[4]. Accordingly, staging aims to make rooms "welcoming yet non-personal, so that a buyer can visualize what their furnishings and personal items would look like. It's impossible for most people to visualize a room without context, i.e. empty" (M. Friedman). In an age with both a prevailing "customer is king" attitude and a trend toward pre-packagedness and ease of use, "most buyers want a move-in-ready home" (Evans). By both demonstrating space for possessions and piquing the mind's interest in their future arrangement, the sales process can be made more likely to close earlier, since the process will already feel "well on its way," and for a higher price, because the perception of greater use value will be translated into a higher exchange value. The initial depersonalization is, short of moving, often the most emotionally difficult part of the staged-home sales process, as it forces the seller to dissolve the home even before leaving. In removing the artifacts that signify the home in preparation for the open house, the singularizing process of home is terminated and therefore its sacred status as constructed locus of identity vanquished; indeed, one way "the sacred is desacralized is to turn it into a saleable commodity, and thus desingularize it" (Belk 23). Difficult as it may be for a seller to understand that her painstakingly personalized home may not be charming to everyone, and tawdry as the commercializing of the house might appear to her, who has developed a bond toward the home that the house connotes, an appeal to her capitalist greed -- "a different appearance for the home might better project an image consistent with its listing price" (HomeStagingOnline) -- allows her to accept its destruction. Deception. Once distinguishing artifacts of the previous home are removed, the focus turns to improving the house proper, or at least enhancing the perception thereof. Stagers are only selling an image, taking advantage of the fact that "buyers only know what they see...not the way it's going to be" (StagedHomes) in dressing the house up in its Sunday best. The primary item on this agenda is obscuring undesirable features of the house, which often involves temporarily cleaning up the unsightly indices of chronic problems such as leaf-filled gutters, rusting metalwork, difficult-to-clean windows, musty odors, and clog-prone drains, and saving the money that actually making the necessary repairs would require. This masking takes advantage of the cognitive process of abduction, where "result and the rule infer the given case probabilistically" (Mick 199), which leads to a possibly faulty jump to conclusions about the quality of the physical structure[5].
The clean-up game must be played with finesse, as, for instance, "too much use of smells, sends a cover-up message" (Bouton), and potential buyers may feel the fantasy-ruining need to peek behind the poker face of perfection. In addition to hiding problems that may reduce the house's exchange value, these fixes have the added benefit of simplifying the all-important process of self-visualization, as potential buyers will "be able to concentrate on picturing themselves in the home when they're not distracted by clutter, dirt, or home maintenance problems" (Morton). Reality is often a sobering diversion from daydreaming, and the fewer the drips and cracks in the physical house, the easier it is to begin conceiving of a home. The deceptive side of staging also aims to exaggerate underdeveloped features of the house. Not surprisingly for our age of proportion distortion, this section of the discipline focuses primarily on making things appear larger and more spacious, which it does in a variety of ways. Qualia of the room itself, such as the colors, can be modified: "Often by neutralizing wallpaper and dark paint colors, a home will actually appear larger, brighter and more appealing to a potential buyer" (HomeStagingOnline). In addition to finding wider general appeal as discussed above, neutral tones have the added benefit of making rooms look bigger, possibly because neutral walls, the delimiting factor of a room, do not assert their presence as much. Environmental factors that affect interpretation of these qualia, such as lighting, are manipulable for many situations, as "bright lights make a small space appear larger. Softer light creates a warm, intimate atmosphere" (Bouton, italics in original). The house can also be made to seem larger by arranging the patio or lawn to seem inviting and functional, to make "the outdoors a visual extension of the inside [to add] unconscious value" (Bouton). By incorporating the exterior into the functional scheme of the interior, the latter appears to be larger and more valuable, and thus worth more. In addition to these techniques of transforming, enhancing, or expanding to make the space look artificially larger, possibly the most important way in which space is exaggerated is, rather than by filling it to demonstrate its storage capacity, instead by thinning out items within it. Closets and other storage spaces, which in our materialist society are usually crammed full, are to be sparsely populated if not empty, as "prospective buyers will be opening doors and drawers and they will want to see a lot of space" (Fowler). More visibly and more significantly, furniture must be arranged sparingly not only to allow space for potential buyers to move their own in mentally, but also to increase the perception of spaciousness. "As owners, many times we have too much furniture in a room. This is wonderful for our own personal enjoyment, but when it comes to selling, we need to thin out as much as possible to make rooms appear larger" (StagedHomes). The removal of the previous home's artifacts is here further occasioned by their undesirable diminution of the perception of size, and thus the exchange value, of the house. Emotion. With the exchange value of the house thus exaggerated, staging's focus turns to building use value, which is accomplished in two important ways, the first of which focuses on creating emotional attachment. 
While the process of economic exchange is nominally founded upon rational choice, the more relevant a product is perceived to be to the consumer's lifestyle, the more irrational the consumer's decision process becomes. The process of consumption gains greater personal importance, as "the consumer-marketer seeks out an emotional impetus behind the material craving, coding it for himself as an emotional need--status, approval, novelty, vitality, embarrassment avoidance, and so on--rather than a material one" (Applbaum 326). A home incorporates all these mentioned factors, and accordingly the decision to purchase a house depends on its ability to satisfy the many emotional needs of the buyer, and do so better on the whole than the other houses in competition. It is this emotionalizing of needs that forms the basis of consumerism, as it is what drives consumers to purchase items of greater expense than necessary for their simple health and comfort. In this society that lives its days utilizing primarily mass-produced goods, novelty is a particularly distinguishing, and thus important, factor. As most products are impersonally designed for a mass of consumers, those that "speak to the buyer" demand special attention. This conception is known as "hierophany ('the act of manifestation of the sacred' (Eliade, 7))...the notion that the sacred does not manifest itself to everyone" (Belk 6). The mystical combination of a perception of uniqueness and a sense of a special connection to the house and the home it will nurture makes the buyer not only more likely to bid on the property, but to possibly bid higher than necessary to ensure the long-term connection with this personally manifested sacred object. Confirming the market's exploitation of irrationality, a Realtor asserts, "In today's quirky MarketPlace, creating an EMOTIONAL bond between the buyer and the home is the key to selling success" (Bouton, as capitalized). "MarketPlaces" have always been quirky; the manipulation of emotions, encouraging potential buyers to eschew economic rationalism, only serves to perpetuate this quirkiness. For an industry selling commodities that are owned for years on end and often have a functional lifespan upwards of a century, an astonishingly high degree of importance is placed on the impression gleaned by the potential buyer in the first few seconds, the instant where the buyer makes the first and usually lasting judgment about the house's potential: "The first glimpse people get of your home creates a lasting impression. Buyers can determine in a few seconds whether the home appeals to them or not" (Litchfield). The school of "curb appeal," which dictates that the front exterior and yard of the house must be in tip-top shape to encourage drive-by browsers to give the house a second look, is a relatively early development in modern real estate, but of extreme importance to the development of staging. Building upon this line of thought, stagers will pay disproportionate attention to the items visible upon opening the front door and the "feel" they create. Just as in a stereotyped social scene, where the nice but reserved are chronically single while the gregarious and risk-taking find mates aplenty, so it sometimes is with the residential real estate market. Because "people buy homes based on emotions...you want to be certain your home will arouse those emotions the second a prospective buyer walks in the front door" (Evans, emphasis added).
While we have seen that decoration must be inoffensive and cast a wide net to generate broad interest, it must also have character and be engaging to create the hierophanical moment - a fairly contradictory set of rules that must be finessed. Further complicating and frustrating the quest for the emotional grab are the flighty preconceptions that potential buyers bring with them. While uniqueness and personality tend to sell a house well, sometimes "perceiving a place as real is more a matter of having it fit one's prior images or imaginative reconstructions than it is a matter of being factually, historically, or locally accurate" (Belk 16). While professional or residential pride might lead those preparing the house for sale to create a well-balanced and consistent theme, achieving a perfect craftsman color-set with a beautiful cypress grove in the expansive back yard means nothing if the potential buyer doesn't like olive trim and was hoping for oaks. The stakes involved in hitting the visitor's individualized conceptions of desirability are quite high all around: Good staging is effective because when potential buyers walk into a house, they carry with them all their hopes for their new life in the new place...how they feel in the home ultimately gets reflected in the sales price and the number of offers...prestige, love, dreams, all the common denominators we all have as human beings, are at work in the home purchase...the staging maximizes, intensifies all of those feelings. (McAllister) Staging is an imperfect art, but it can and often does use the laws of probability to its advantage. A talented stager will, by studying the demographics of the expected potential buyers, predict which emotional triggers to accentuate and which to avoid, in the perpetual quest to increase perceived use value. However, generalizations break down on the individual level, and knowing what buyers are looking for is often a matter of blind luck. Lifestyle. If you do not know what sort of expectation someone is looking to fulfill, it often can't hurt to give them a suggestion. It is the realm of suggesting the lifestyle of the future home within the house for sale, squarely in the second Barthesian level of signification, where staging is arguably at its most creative. As advertising is both "a model of and a model for reality" (Sherry, p. 9, quoted in Mick 205), a staged house is an ideal medium for creating an environment that both (supposedly) reflects the way people live, to make the scene feel realistic, and suggests that same life to those who have not yet achieved it. Whether for a young couple embarking on corporate career tracks and looking for a fun yet modest first house, or a family with a third child on the way hoping for a large and practical later house, or a single young professional moving across the country for whatever reason, moving house often reflects a significant shift in people's lives. The move itself can also occasion such a renewal, as the process of starting home anew can be "taken as an opportunity to reconfigure both the repair and rewriting of narratives of our own personal biography and also the way relationships to others have formed part of this biography" (Marcoux, summarized in Miller 122). Although they are sometimes hung up on preconceptions of what their new home should entail, new home buyers are among the most susceptible to suggestion, and likely to be filled with excitement in anticipating the construction of a new identity.
When walking among the rooms and around the objects of the staged home, potential buyers are given a tantalizing taste of the home they could build for themselves in the house. As "people who shop for a home these days, also shop for a lifestyle," a successful staging will "make your home look successful" (Bouton). The word "shop" reflects the extent to which the house has been commodified, and like any commodity, its use values connote the owner's prestige. Those who are hoping to appear successful, which can be supposed to be nearly everyone, will be more attracted to a staged home that builds such an image. The marketing industry has long exploited people's gaze upwards; "through their lifestyle concept, marketers aim to appeal to consumers' aspirations to belong to more than their actual status groups" (Applbaum 336). The staged home that reflects the trappings of the life a step up from the potential buyer's current status will be all the more coveted, as lifestyle and house begin to intertwine in the mind.

Building such an environment within the house requires not only high-quality furniture and nice colors, but also suggestions of functionality:

    "The décor will show the buyer where they can sit by the fire and sip a glass of wine; where they can host dinner parties; and where they can end their stressful day in the master bedroom." (Evans)

    "Even though the buyer won't live in your home as it's staged, they'll be attracted to it because it presents a lifestyle to which they aspire....For instance, you might turn a large walk-in closet into a computer room." (Bouton)

Rather than a static work of art, staging builds a tableau vivant, a painting of a lifestyle come alive in the house for sale. From the mildly fanciful, such as the above portrayal of a lifestyle so uncluttered that a closet could become an office, to the outlandish, which has seen professional opera singers serenade from staircases, staging is, in addition to its other modalities, a performance art, in which the major actors are the potential buyers enjoying the atmosphere and imagining themselves using the prepared home. Furniture and other peripheral items will denote what can be done in a home, and therefore connote a style of life, but if attentively arranged to "romance the structure and character of your home" (StagedHomes), home furnishings will convince the buyer that it is specific qualities in the house itself that will facilitate such activities, and thus, such a lifestyle. Every house has its particular features, and the stager who understands and takes advantage of them can give the house a feeling of natural personality rather than the dreaded "sterile model-home look" (Élan) that ignores the house's specific strong points and instead lazily shoots for average. To bring out the use value of the house, an interior decorator recommends, "Just as you would use props when setting the stage for a play, do the same in your home, especially for an open house. Make the visitors say, `I would love to sit and read here' or `Wow, what a great yard. We can finally have the gang over for a barbecue'" (Mayhugh). The theatrical metaphor is quite fitting: actors make their money by how they perform before the audience and not after the show, as does the seller showcasing a staged home.
Because "syntax tells us what kind of object anything is" (Wittgenstein, in Kehret-Ward 223), the manner in which the objects of the staged home mediate the potential buyer's interpretation of the house is by exhibiting its specific use values. In the same way cracker boxes show their product with cheese, or even how beer commercials show identifiable men attracting bikini-clad women, "buyer needs can only be addressed by thinking in terms of the buyer's total consumption system...`the way a purchaser of a product performs the total task of whatever he or she is trying to accomplish when using the product' (Boyd & Levy 1962)" (Kehret-Ward 219). The toying with lifestyle in home staging, by demonstrating to the potential buyer of the house the identity that could be created for those who live within it, is a discipline that centers on addressing the consumption system. It does not matter that the buyer will not inherit the furniture with the house, nor is even likely to populate the new home with something similar, just as a case of beer does not come with women, nor do most people drinking beer at any one instance happen to attract the sort featured in the advertisement; of importance is the buyer's aspirations, and how they are related to the item for sale. If done very well, staging will, by transform the lifestyle signification from one of contiguity to similarity, have the buyer buy the presented lifestyle: Contiguity...involves bringing together in the ad a select set of objects, persons, and activities with the product. [Similarity] takes over as the audience is invited to acknowledge resemblances and, in effect, transfer properties between the co-present entities. (Mick 203) The buyer is no longer thinking that he is purchasing a physical structure on a plot of land; he is not even believing that he is buying a house that has the potential to be a nice home that reflects some wants he had going into the real estate market and maybe a few he didn't know he had suggested by the staging. He is, because the transitivity of similarity, putting in a bid to buy the home he saw during the open house. Staging has worked magic: it has sold something that does not exist. Conclusion. Home staging is a highly interdisciplinary art, involving such disperse fields as psychology, interior design and economics. Its tenets are full of contradictions, advising neutrality yet personality, suggestion but not overbearance. It works with the most prototypical of physical objects, furniture, to enable subliminal sentiments. It even gets down on its hands and knees to clean grout and unclog drains. But most importantly, it exploits human psychology to deceptive ends, making the potential buyer believe that the structure for sale will grant him an improved lifestyle. The chair in the staged open house isn't just a chair; it is a cast member of a carefully crafted act, whose sole goal is, by building a highly desirable home, to increase the final sale value of the house. Alchemy could not create gold from ordinary objects, but staging indeed can. References Applbaum, Kalman. "The Sweetness of Salvation: Consumer Marketing and the Liberal-Bourgeois Theory of Needs." Current Anthropology 39:3 (1998) 323-349. Barthes, Roland. The Fashion System. Trans.: Matthew Ward and Richard Howard. Berkeley: University of California Press. 1983. Baudrillard, Jean. Le syst?me des objets. Paris: Gallimard. 1968. Belk, Russel W., Melanie Wallendorf and John F. Sherry, Jr. 
"The Sacred and the Profane in Consumer Behavior: Theodicy on the Odyssey." Journal of Consumer Research 16:1 (1989): 1-38. Berrios, Jerry. "Staging: dressed to sell." Contra Costa Times [Walnut Creek, CA]. 26 July 1998, G1. Bouton, Cherry. Staging your home. Vancouver. Viewed 12 Apr. 2003. . Brook Furniture Rental. What is Staging? Northern California. Viewed 12 Apr. 2003. . --. Staging Benefits. . Colt, Elizabeth. ?lan home staging, real estate staging. Denver. Viewed 12 Apr. 2003. . Evans, Blanche. "REALTORS? Who Set the Stage Get Top Billing." Realty Times. 6 Oct. 1998. Viewed 3 June 2003. . --. "Staging Helps Sell Homes" Realty Times. 22 Apr. 2002. Viewed 3 June 2003. . Gottdiener, Mark. Postmodern Semiotics. Malden, MA: Blackwell. 1995. Fowler, Jerry. "Home Staging Equals Quick Sale." Realty Times. 17 Aug. 2001. Viewed 3 June 2003. . Friedman, Michael. Personal correspondence via AOL Instant Messenger. May 27, 2003. Kehret-Ward, Trudy. "Combining Products in Use: How the Syntax of Product Use Affects Marketing Decisions." In Marketing and Semiotics. Ed. Jean Umiker-Sebeok. Berlin: Mouton de Gruyter. 1987. 219-238. Kopec, David. "Home Staging: A Modern Tool For Sales." Realty Times. 3 March 2003. Viewed 3 June 2003. . Litchfield, Linda M. Stage Right - Room Renaissance - Home Staging. Rochester, NY. Viewed 12 Apr. 2003. . Mayhugh, Gail. GMJ Interiors - Real Estate Staging. Las Vegas. Viewed 12 Apr. 2003. . --. "Staging home for sale can pay off." Review-Journal [Las Vegas]. May 19, 2002. At . McAllister, Sue. "Dressed to Sell: the practice of `staging' homes catches on as sellers discover what it can add to the final sales price." Mercury News [San Jose]. 6 May 2000, 1F Mick, David Glen. "Consumer Research Semiotics: Exploring the Morphology of Signs, Symbols, and Significance." Journal of Consumer Research 13:2 (1986): 196-213. Miller, Dennis. "Accommodating." In Contemporary Art and the Home. Ed. Colin Painter. Oxford, UK: Berg. 2002. 115-130. Morton, Glenna. Home Staging Tips. Viewed 12 Apr. 2003. From . O'Donnell, Joanne. Our Profiles. Oakland, CA. Viewed 3 June 2003. . Serra. Melanie. Home Staging. Atlanta. Viewed 12 Apr. 2003. . Sherry, John F., Jr. "Advertising as a Cultural System." In Marketing and Semiotics. Ed. Jean Umiker-Sebeok. Berlin: Mouton de Gruyter. 1987. 441-461. Silverstein, Michael. Anthropology 27500. University of Chicago. Spring 2003. StagedHomes.com. Staged Homes sell faster :: Information on Accredited Home Staging Professionals :: Professional Staging Strategies. Concord, CA. Viewed 12 Apr. 2003. . --. Staged Homes :: What does Staging a Home Mean? . --. Staged Homes :: Tips for Home Sellers. . --. Staged Homes :: Linde Lapish :: Accredited Staging Professional. . --. Staged Homes :: Accredited Home Stager Credentials. . Solomon, Michael R. and Henry Assael. "The Forest or the Trees?: A Gestalt Approach to Symbolic Consumption." In Marketing and Semiotics. Ed. Jean Umiker-Sebeok. Berlin: Mouton de Gruyter. 1987. 189-217. Wickell, Janet. Home Staging Tips & Techniques. Viewed 12 Apr. 2003. From . _______________________ [6][1] While Schwarz operates an "Accredited Staging? Professional" program, no trade organization (such as the National Association of Realtors for agents) exists, nor do any statistics on the number of professional stagers; thus, "hundreds" is an intentionally vague estimate. Regardless, all indications are that the industry is growing rapidly. 
[7][2] "The bourgeois and industrial revolutions liberate the individual little by little from religious, moral and familial implication. He then agrees to the liberty of right as a man, but to the liberty of action as a worker, which is to say to the liberty to sell himself as such." (25) This Marx-informed view sees modernist freedom as license to make choices within the social structure but not to escape it. [8][3] Barthes identifies further systems in which fashion is reproduced and in which rhetorical description of resides. As this analysis only concerns itself with the physical nature of staging, and does its best to avoid categorizing its description, we need not concern ourselves for the time being with these higher stages of the hierarchy. [9][4] One Realtor, discussing a property he had bought to resell, provides an example where, unproven causality aside, staging by adding objects greatly affected the sales process: "I bought a house, and the market went south, and even though I recarpeted and repainted, it didn't sell until it was staged.... It sold the first day after it had been staged for full price." (Evans) [10][5] An abductive process might proceed like so: Result: The drain is flows smoothly. Rule: Drains that do not clog flow smoothly. Case:Therefore, this drain does not clog. Some, like Eco, assert the entire sign-interpretation process relies upon abduction. References 1. http://www.jessefriedman.com/writings/college/semiotics%20paper.htm#_ftn1 2. http://www.jessefriedman.com/writings/college/semiotics%20paper.htm#_ftn2 3. http://www.jessefriedman.com/writings/college/semiotics%20paper.htm#_ftn3 4. http://www.jessefriedman.com/writings/college/semiotics%20paper.htm#_ftn4 5. http://www.jessefriedman.com/writings/college/semiotics%20paper.htm#_ftn5 6. http://www.jessefriedman.com/writings/college/semiotics%20paper.htm#_ftnref1 7. http://www.jessefriedman.com/writings/college/semiotics%20paper.htm#_ftnref2 8. http://www.jessefriedman.com/writings/college/semiotics%20paper.htm#_ftnref3 9. http://www.jessefriedman.com/writings/college/semiotics%20paper.htm#_ftnref4 10. http://www.jessefriedman.com/writings/college/semiotics%20paper.htm#_ftnref5 From checker at panix.com Wed Sep 21 01:40:34 2005 From: checker at panix.com (Premise Checker) Date: Tue, 20 Sep 2005 21:40:34 -0400 (EDT) Subject: [Paleopsych] Democratic Underground: The Klutzes - politically incorrect Message-ID: The Top 10 Conservative Idiots http://www.democraticunderground.com/top10/05/213.html [Thanks to Pen Name Withheld for this.] September 12, 2005 What didn't go right? Edition In the aftermath of Hurricane Katrina, Clueless George wants to know "What didn't go right?" The list of screw-ups is too long to list in one place, but we've got ten of 'em right here. At the top of the list we've got George W. Bush (1) himself, who messed everything up, but somehow (as usual) doesn't seem to realize it. FEMA (2) and its director Michael Brown (3) are also here, as their incompetence equaled that of Dubya himself. Barbara Bush (4) said something royally stupid, and Dick Cheney (5) went mansion shopping. And once again, the "blame America first" crowd (10) turns out to be conservative. Enjoy (if you can) and don't forget the key! 1 George W. Bush dumb covering your ass photo-opping screwing the poor screwing the poor The last two weeks have been pretty rough on poor old Dubya. 
Apparently his mythical crisis-management skills, honed during the aftermath of 9/11, have turned out to be just a myth (see cartoon here), as he and his administration completely bungled the relief effort. But apparently no one told Bush, as this cringe-inducing discussion with House Minority Leader Nancy Pelosi illustrates:

Pelosi: Tells Bush to fire FEMA director Michael Brown
Bush: "Why would I do that?"
Pelosi: "Because of all that went wrong, of all that didn't go right last week."
Bush: "What didn't go right?"

Mind you, this is the President of the United States, who is supposed to be the most powerful man on earth, with access to the greatest intelligence-gathering infrastructure ever created -- but somehow he doesn't know what went wrong. If I were Nancy Pelosi, I would have punched Bush right in the smirk right there. (Which, incidentally, is pretty much what Louisiana Senator Mary Landrieu threatened to do. Oh, man, I would pay good money to see that.) Of course, it was obvious to everyone who wasn't the President of the United States that damn near everything didn't go right. All I can figure is that Bush's advisors have been so successful in keeping him in his hermetically sealed cocoon that he actually isn't aware of the massive human tragedy. Or else he doesn't give a shit, which is also possible. All he knows is that everything went swell when they had 50 firefighters flown in from Atlanta for that presidential photo op. And the other photo op with the two black ladies was great too, except for that whole having to interact with black people part. And he really showed a lot of compassion when he learned that his friend Trent Lott had lost his house in the storm. Said Bush: "Out of the rubbles of Trent Lott's house ... there's going to be a fantastic house. And I'm looking forward to sitting on the porch."

2. FEMA (lazy, covering your ass, screwing the poor)

Speaking of "what didn't go right," how about: Every goddamned thing that FEMA tried to do for the last two weeks. No doubt The Buck stops with George W. Bush, but the bureaucratic Keystone Kops who have been stopping, passing, and fucking up The Buck for the last two weeks were FEMA. To give you an idea of how badly they screwed this up, here is just a small sampling of the utterly moronic choices made by FEMA during the last two weeks, courtesy of DUer peabody71:

FEMA won't accept Amtrak's help in evacuations
FEMA turns away experienced firefighters
FEMA turns back Wal-Mart supply trucks
FEMA prevents Coast Guard from delivering diesel fuel
FEMA won't let Red Cross deliver food
FEMA bars morticians from entering New Orleans
FEMA blocks 500-boat citizen flotilla from delivering aid
FEMA fails to utilize Navy ship with 600-bed hospital on board
FEMA to Chicago: Send just one truck
FEMA turns away generators
FEMA: "First Responders Urged Not To Respond"

(That last one is a direct quote from the FEMA website.) Now mind you, that is a list of examples where FEMA was faced with a clear choice, and made a conscious decision about how to respond. So, for example, the Red Cross comes to FEMA and says, "Hey, FEMA, we would like to deliver some food," and FEMA responds, "Please don't deliver food, thanks." And the Red Cross is like, "But people are starving." And FEMA is like, "Whatever, dude. I'm the boss here, and I say no food." And the Red Cross is like, "What the hell is wrong with you people?"
But the list doesn't include stuff like how they just kinda sat there doing nothing for a few days while people were dying in New Orleans, apparently because ... um ... I don't know ... maybe they just forgot. "Hey, FEMA. We've got a situation in New Orleans. Maybe you should go check it out." FEMA looks up from its desk and scratches its ass and yawns and says, "Yeah, I'll get right on that in a sec," and then returns to playing solitaire on its computer (because it's too fucking stupid to figure out how to play Minesweeper). And when FEMA finally looks up from its three-day solitaire-fueled stupor, it's like, "Oh, shit! I forgot all about that New Orleans thing!" and it starts frantically running around its office shuffling papers and trying to remember what it was supposed to be doing. And thousands of people are already dead.

3. Michael Brown (dumb)

But what are we supposed to expect when Bush used FEMA as a way to hand out political patronage to his cronies? According to the New York Daily News, practically all of the political appointees in FEMA are Bush buddies with no experience handling disasters. Deputy director and chief of staff Patrick Rhode worked on the Bush-Cheney campaign. Deputy chief of staff Scott Morris was a public relations guy. But the most impressive resume (at least, the part that doesn't include false information) belongs to FEMA director Michael Brown, who was the commissioner of judges and stewards for the International Arabian Horse Association, which is well known for its depth of experience managing emergencies involving, um, Arabian horses. Like, for example, when a rider gets thrown out of his saddle. Or when a horseshoe falls off. Or when you're taking one of those horse-and-carriage rides in Central Park and the horse takes a dump right in the beginning of the trip and it stinks up the entire experience and maybe even ruins your entire vacation to New York City. Except Mr. Brown did such a shite job in that position that he was fired. I don't know about you, but I've been fired from a job before in my life. Let's just say that nobody swooped in and offered me a position like Director of FEMA. I was more like Director of Sitting at Home Watching Reruns of Jerry Springer and Eating Doritos. But I digress. Apparently in the current atmosphere where national security is the number one priority, it never crossed anyone's mind that perhaps the Federal Emergency Management Agency might be important. (Go figure.) So we get a washed-up horse-show reject in a position of vital national importance. And even after the guy has fucked up New Orleans so bad that even the media starts to notice, the President of the United States says, "Brownie, you're doing a heck of a job!" It took nearly two weeks and thousands dead before someone in charge finally sent this moran packing back to Washington DC.

4. Barbara Bush (dumb)

During a crisis when virtually every public statement coming from everyone associated with the current presidential administration was idiotic, out-of-touch, and insensitive, it seems pointless to even try to select one quote as the most idiotic, the most out-of-touch, and the most insensitive. But Barbara Bush, mother of the current president, has made it easy. During an interview last week, she said something that pretty well epitomized the insulated, self-absorbed worldview of everyone involved in this administration.
Speaking of the Hurricane survivors in Houston, the Bush family Battle Axe said this: "What I'm hearing which is sort of scary is they all want to stay in Texas. Everyone is so overwhelmed by the hospitality. And so many of the people in the arena here, you know, were underprivileged anyway, so this--this (she chuckles slightly) is working very well for them." Gee, Mrs. Bush, it does seem to be working out very well for them. I daresay that the people in the Astrodome got a really sweet deal. A massive hurricane leveled their hometown, destroyed their homes, cars, and property, caused them to lose their jobs, left them without food and water for days, and maybe even killed some of their family members. In exchange, they get a fun-filled open-ended field trip to one of America's favorite sports venues (considered by some to be the Eighth Wonder of the World with a new Monsanto "Magic Carpet" system instead of Astroturf, two Diamond Vision screens, a large matrix board, two auxiliary matrix boards and a game-in-progress board!) It's even better than a mansion in Kennebunkport!

5. Dick Cheney (greed)

From the Compassionate Conservatism file: While tens of thousands of Hurricane survivors were left homeless and hungry (that is, if they were lucky enough to not be dead), Vice President Dick Cheney decided it was a great time to go mansion shopping. You heard me right: mansion shopping. These assholes aren't even trying to look like they care anymore. According to the Washington Post, Dick Cheney was checking out real estate in St. Michaels, a tiny resort town on Maryland's Eastern Shore. Apparently "Cheney's house," which is listed at $2.9 million, "backs up in spectacular fashion to an inlet of the Chesapeake Bay." How charming. But I suppose it is possible that I am being too hard on the Vice President. Given the timing of the purchase, I think there may be something else going on here. Maybe Vice President and Mrs. Cheney were so distressed by the stories of displaced families from New Orleans that they are buying the property in order to use it for emergency housing for hurricane survivors. Yeah, that's the ticket. By the way, Crooks and Liars has video of someone telling the Vice President to go fuck himself.

6. Michael Holdener (dumb)

On August 30, Lt. David Shand and Lt. Matt Udkow, two helicopter pilots from Pensacola Naval Air Station, were sent on a mission to deliver food, water and other supplies to Stennis Space Center, near the Mississippi coast. After finishing the mission, the pilots picked up a Coast Guard transmission calling for helicopters to help with rescue operations in New Orleans. Unable to get permission from Pensacola because they were out of radio range, the two pilots -- trained in search-and-rescue -- had to make a life-or-death decision. They chose to respond to the call for help. This is just one story of courage and heroism which came out of a terrible tragedy. Thanks to the efforts of Lieutenants Shand and Udkow, 110 people were saved. Upon their return to Pensacola, the two pilots were greeted as heroes and given medals and immediate promotions. No, wait a second. That's not what happened at all. Upon their return to Pensacola, the two American heroes were reprimanded by Commander Michael Holdener, who said their rescue efforts were a diversion. For his efforts, Lieutenant Udkow was removed from flying rotation and assigned to overseeing dog kennels instead.
There is something seriously wrong with this country if a genuine hero who saved a hundred people is stuck watching dog kennels, while all the fools at FEMA and the White House whose incompetence cost the lives of thousands will likely escape any kind of accountability whatsoever. Disgraceful.

7. Tom DeLay (hypocrisy, partisanship)

Believe it or not, there has been some news that is totally unrelated to Hurricane Katrina. Long-time Top 10 favorite Tom DeLay was once again in hot water. Last week a grand jury indicted Texans for a Republican Majority, a political action committee founded by Tom DeLay, on charges of accepting illegal corporate money. Three of DeLay's political buddies had been previously indicted on related charges of money laundering, unlawful acceptance of corporate political contributions, and making corporate donations. According to the Associated Press, the Texas Association of Business was also charged in a scheme to funnel "massive amounts of secret corporate wealth" into Texas campaigns. Somehow The Hammer avoided indictment himself, although a complaint was filed with the House Ethics Committee last year. Unsurprisingly, the committee has so far neglected to take any disciplinary action against DeLay. It's yet another example of the Party of Personal Responsibility failing to hold any of its own members responsible. Oh, and while we're on the subject of Tom DeLay, don't miss this quote. Speaking to a group of young hurricane survivors in Houston, DeLay compared their situation to being at camp, and asked, "Now tell me the truth boys, is this kind of fun?" Yeah, Tom, it's a real blast.

8. Dick Morris (excessive spin)

While 9/11 was perhaps the worst thing to happen to the American people in decades, it was undoubtedly the best thing that ever happened to George W. Bush. In the years since that terrible tragedy, when the rest of America was in shock, Bush was cynically mining a rich political vein by robotically repeating "9/11 ... bring it on ... 9/11 ... war on terror ... 9/11 ... dead or alive ... 9/11 ..." over and over again, thus diverting attention from his utter failure as president long enough to steal another four years in the Oval Office. With his second term off to a rocky start, it looks like Bush needed another September Miracle to get his presidency back on track, and Dick Morris is here to show the way. Once again, Bush is poised to turn our suffering into political gold. Says Morris: "A disaster like Katrina is just what a president needs to anchor his second term and give him relevance and popularity far into his tenure." Apparently in Dick Morris's bizarro world, having bad shit happen to the country is the best thing that could possibly happen to a president. So far in the aftermath of the Hurricane, Dick Morris doesn't seem to be right, as Bush's approval ratings have sunk to an all-time low. But if 9/11 is any indication, Bush might still be able to turn this to his advantage. Which would just go to show how messed up our country is right now. Consider the following: Thousands of Americans die because of your incompetence (twice) -- you're relevant and popular. Eight years of peace and prosperity -- you get impeached.

9. Rush Limbaugh (racism)

It has been suggested that the lack of urgency and/or compassion from conservatives in the wake of Katrina might have something to do with the fact that most of the victims were black. (I know, I know. You're shocked that anyone would even suggest such a thing. Conservatives? Racist? No way!)
Just in case there is any question here, Rush Limbaugh has provided some damning evidence while talking about New Orleans on his show. Courtesy of Crooks and Liars, here is a brief mp3 of Rush Limbaugh mispronouncing the name of New Orleans Mayor Ray Nagin: Click to hear Rush Limbaugh show his true colors. (mp3) You heard it right, he said "Mayor Nayger." Freudian slip or intentional smear? You be the judge. Either way, he's a flaming racist asshole.

10. Kooks Who Blame America for Katrina (religious nut)

Why is it that conservatives (and certain unnamed Democrats) seem to get off claiming that it's liberals who "blame America" when bad things happen, even though it is almost always their fellow conservatives who are pointing the finger after a national tragedy? If you remember, four years ago uber-kook Pat Robertson blamed "the pagans, and the abortionists, and the feminists, and the gays and the lesbians, ... the ACLU, People For the American Way -- all of them who have tried to secularize America" for helping to cause 9/11. In the wake of Katrina, the second massive national tragedy of the Bush Administration, once again conservatives are pointing the finger of blame squarely at their fellow Americans. Michael Marcavage of Philadelphia blames The Gays (there's a surprise), because there was supposed to be a gay pride parade in New Orleans around the time of the Hurricane. Steve Lefemine, an anti-abortion activist from Columbia, South Carolina, has a different idea. He insists that "God judged New Orleans for the sin of shedding innocent blood through abortion." How does he know? It's simple, really. He was looking at a weather map, and in the swirling vortex of Katrina he saw some clouds that looked like a fetus. Realizing that he had discovered the secret key for reading the Will of God, Mr. Lefemine looked deeper into the swirl of clouds and declared that responsibility for Katrina also lay with "Snail shells, the Whirlpool dishwasher company, and some Cinnabons - the kind with the white frosting, not the kind with the nuts on top." But American religious kooks weren't the only people claiming that the Hurricane was some sort of divine retribution. According to Muhammad Yousef Mlaifi, a Kuwaiti official (yes, the same Kuwait whose ass we saved in the early 90s): "The Terrorist Katrina is One of the Soldiers of Allah ... It is almost certain that this is a wind of torment and evil that Allah has sent to this American empire." Sounds like some religious extremists here in America have a lot in common with the religious extremists we're supposed to be fighting against. See you next week!

From checker at panix.com Wed Sep 21 01:41:02 2005
From: checker at panix.com (Premise Checker)
Date: Tue, 20 Sep 2005 21:41:02 -0400 (EDT)
Subject: [Paleopsych] Science and Creationism: A View from the National Academy of Science, 2 ed. 1999
Message-ID:

Science and Creationism: A View from the National Academy of Science, 2 ed. 1999

Preface

In his preface to the original 1984 version of this document, Frank Press, my predecessor as president of the National Academy of Sciences, called attention to a pair of illustrations similar to the ones on the front and back of this booklet. The first is a photograph of Earth from space--the one on this booklet was taken by the GOES-7 satellite in 1992 as it passed over Earth and captured in graphic detail Hurricane Andrew. The second shows a map of the world prepared during the 7th century by the scholar Isidore of Seville.
As Press pointed out, both illustrations reflect the efforts of humans to understand the natural world. "How then," he wrote, "can the two views be so different? The answer lies at the very heart of the nature of this system of study we call science." Since those words were written, the mapping of Earth has provided further powerful examples of how science and science-based technologies progress. Beginning in the early 1990s, a network of satellites has allowed anyone with a hand-held receiver to know his or her position on Earth to within a few feet. This Global Positioning System* (GPS) now is being used to locate vessels lost at sea, study plate tectonics, trace open routes through crowded city streets, and survey Earth's surface. Yet the technology originated with a purely scientific objective--the desire to build extremely accurate clocks to test Einstein's theory of relativity. The tremendous success of science in explaining natural phenomena and fostering technological innovation arises from its focus on explanations that can be inferred from confirmable data. Scientists seek to relate one natural phenomenon to another and to recognize the causes and effects of phenomena. In this way, they have developed explanations for the changing of the seasons, the movements of the sun and stars, the structure of matter, the shaping of mountains and valleys, the changes in the positions of continents over time, the history of life on Earth, and many other natural occurrences. By the same means, scientists have also deciphered which substances in our environment are harmful to humans and which are not, developed cures for diseases, and generated the knowledge needed to produce innumerable labor-saving devices. The concept of biological evolution is one of the most important ideas ever generated by the application of scientific methods to the natural world. The evolution of all the organisms that live on Earth today from ancestors that lived in the past is at the core of genetics, biochemistry, neurobiology, physiology, ecology, and other biological disciplines. It helps to explain the emergence of new infectious diseases, the development of antibiotic resistance in bacteria, the agricultural relationships among wild and domestic plants and animals, the composition of Earth's atmosphere, the molecular machinery of the cell, the similarities between human beings and other primates, and countless other features of the biological and physical world. As the great geneticist and evolutionist Theodosius Dobzhansky wrote in 1973, "Nothing in biology makes sense except in the light of evolution." Nevertheless, the teaching of evolution in our schools remains controversial. Some object to it on the grounds that evolution contradicts the accounts of origins given in the first two chapters of Genesis. Some wish to see "creation science"--which posits that scientific evidence exists to prove that the universe and living things were specially created in their present form--taught together with evolution as two alternative scientific theories. Scientists have considered the hypotheses proposed by creation science and have rejected them because of a lack of evidence. Furthermore, the claims of creation science do not refer to natural causes and cannot be subject to meaningful tests, so they do not qualify as scientific hypotheses. In 1987 the U.S. Supreme Court ruled that creationism is religion, not science, and cannot be advocated in public school classrooms. 
And most major religious groups have concluded that the concept of evolution is not at odds with their descriptions of creation and human origins. This new edition of Science and Creationism: A View from the National Academy of Sciences is a companion volume to a publication released in 1998 by the Academy, Teaching About Evolution and the Nature of Science. That longer document is addressed to the teachers, educators, and policymakers who design, deliver, and oversee classroom instruction in biology. It summarizes the overwhelming observational evidence for evolution and explains how science differs from other human endeavors. It also suggests effective ways of teaching the subject and offers sample teaching exercises, curriculum guides, and "dialogues" among fictional teachers discussing the difficulties of presenting evolution in the classroom. This new edition of Science and Creationism has a somewhat different purpose. It, too, summarizes key aspects of several of the most important lines of the evidence supporting evolution. But it also describes some of the positions taken by advocates of creation science and presents an analysis of these claims. As such, this document lays out for a broader audience the case against presenting religious concepts in science classes. Both this document, and the earlier Teaching About Evolution and the Nature of Science, are freely available online at the Academy website (www.nap.edu). Scientists, like many others, are touched with awe at the order and complexity of nature. Indeed, many scientists are deeply religious. But science and religion occupy two separate realms of human experience. Demanding that they be combined detracts from the glory of each. Bruce Alberts President National Academy of Sciences *"The Global Positioning System: The Role of Atomic Clocks." Part of the series Beyond Discovery: The Path from Research to Human Benefit by the National Academy of Sciences (Washington, D.C.: National Academy Press, 1997). This document is also available at www2.nas.edu/bsi. Introduction Science is a particular way of knowing about the world. In science, explanations are limited to those based on observations and experiments that can be substantiated by other scientists. Explanations that cannot be based on empirical evidence are not a part of science. In the quest for understanding, science involves a great deal of careful observation that eventually produces an elaborate written description of the natural world. Scientists communicate their findings and conclusions to other scientists through publications, talks at conferences, hallway conversations, and many other means. Other scientists then test those ideas and build on preexisting work. In this way, the accuracy and sophistication of descriptions of the natural world tend to increase with time, as subsequent generations of scientists correct and extend the work done by their predecessors. Progress in science consists of the development of better explanations for the causes of natural phenomena. Scientists never can be sure that a given explanation is complete and final. Some of the hypotheses advanced by scientists turn out to be incorrect when tested by further observations or experiments. Yet many scientific explanations have been so thoroughly tested and confirmed that they are held with great confidence. The theory of evolution is one of these well-established explanations. 
An enormous amount of scientific investigation since the mid-19th century has converted early ideas about evolution proposed by Darwin and others into a strong and well-supported theory. Today, evolution is an extremely active field of research, with an abundance of new discoveries that are continually increasing our understanding of how evolution occurs. This booklet considers the science that supports the theory of evolution, focusing on three categories of scientific evidence: Evidence for the origins of the universe, Earth, and life Evidence for biological evolution, including findings from paleontology, comparative anatomy, biogeography, embryology, and molecular biology Evidence for human evolution At the end of each of these sections, the positions held by advocates of "creation science" are briefly presented and analyzed as well. The theory of evolution has become the central unifying concept of biology and is a critical component of many related scientific disciplines. In contrast, the claims of creation science lack empirical support and cannot be meaningfully tested. These observations lead to two fundamental conclusions: the teaching of evolution should be an integral part of science instruction, and creation science is in fact not science and should not be presented as such in science classes. Terms Used in Describing the Nature of Science* Fact: In science, an observation that has been repeatedly confirmed and for all practical purposes is accepted as "true." Truth in science, however, is never final, and what is accepted as a fact today may be modified or even discarded tomorrow. Hypothesis: A tentative statement about the natural world leading to deductions that can be tested. If the deductions are verified, the hypothesis is provisionally corroborated. If the deductions are incorrect, the original hypothesis is proved false and must be abandoned or modified. Hypotheses can be used to build more complex inferences and explanations. Law: A descriptive generalization about how some aspect of the natural world behaves under stated circumstances. Theory: In science, a well-substantiated explanation of some aspect of the natural world that can incorporate facts, laws, inferences, and tested hypotheses. The contention that evolution should be taught as a "theory, not as a fact" confuses the common use of these words with the scientific use. In science, theories do not turn into facts through the accumulation of evidence. Rather, theories are the end points of science. They are understandings that develop from extensive observation, experimentation, and creative reflection. They incorporate a large body of scientific facts, laws, tested hypotheses, and logical inferences. In this sense, evolution is one of the strongest and most useful scientific theories we have. *Adapted from Teaching About Evolution and the Nature of Science by the National Academy of Sciences (Washington, D.C.: National Academy Press, 1998). The Origin of the Universe, Earth, and Life The term "evolution" usually refers to the biological evolution of living things. But the processes by which planets, stars, galaxies, and the universe form and change over time are also types of "evolution." In all of these cases there is change over time, although the processes involved are quite different. In the late 1920s the American astronomer Edwin Hubble made a very interesting and important discovery. 
Hubble made observations that he interpreted as showing that distant stars and galaxies are receding from Earth in every direction. Moreover, the velocities of recession increase in proportion with distance, a discovery that has been confirmed by numerous and repeated measurements since Hubble's time. The implication of these findings is that the universe is expanding.

Hubble's hypothesis of an expanding universe leads to certain deductions. One is that the universe was more condensed at a previous time. From this deduction came the suggestion that all the currently observed matter and energy in the universe were initially condensed in a very small and infinitely hot mass. A huge explosion, known as the Big Bang, then sent matter and energy expanding in all directions. This Big Bang hypothesis led to more testable deductions. One such deduction was that the temperature in deep space today should be several degrees above absolute zero. Observations showed this deduction to be correct. In fact, the Cosmic Microwave Background Explorer (COBE) satellite launched in 1991 confirmed that the background radiation field has exactly the spectrum predicted by a Big Bang origin for the universe.

As the universe expanded, according to current scientific understanding, matter collected into clouds that began to condense and rotate, forming the forerunners of galaxies. Within galaxies, including our own Milky Way galaxy, changes in pressure caused gas and dust to form distinct clouds. In some of these clouds, where there was sufficient mass and the right forces, gravitational attraction caused the cloud to collapse. If the mass of material in the cloud was sufficiently compressed, nuclear reactions began and a star was born. Some proportion of stars, including our sun, formed in the middle of a flattened spinning disk of material. In the case of our sun, the gas and dust within this disk collided and aggregated into small grains, and the grains formed into larger bodies called planetesimals ("very small planets"), some of which reached diameters of several hundred kilometers. In successive stages these planetesimals coalesced into the nine planets and their numerous satellites. The rocky planets, including Earth, were near the sun, and the gaseous planets were in more distant orbits.

The ages of the universe, our galaxy, the solar system, and Earth can be estimated using modern scientific methods. The age of the universe can be derived from the observed relationship between the velocities of and the distances separating the galaxies. The velocities of distant galaxies can be measured very accurately, but the measurement of distances is more uncertain. Over the past few decades, measurements of the Hubble expansion have led to estimated ages for the universe of between 7 billion and 20 billion years, with the most recent and best measurements within the range of 10 billion to 15 billion years.

The age of the Milky Way galaxy has been calculated in two ways. One involves studying the observed stages of evolution of different-sized stars in globular clusters. Globular clusters occur in a faint halo surrounding the center of the Galaxy, with each cluster containing from a hundred thousand to a million stars.
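[Editor's aside: the first of these age estimates, the age of the universe from the Hubble expansion, reduces to very simple arithmetic. If every galaxy's recession velocity is proportional to its distance, v = H0 x d, then running the expansion backward at a constant rate gives an age of roughly 1/H0. The short Python sketch below makes that unit conversion explicit; it is a minimal illustration only, the sample values of H0 are assumptions chosen to bracket the range of measurements from that era rather than figures taken from the booklet, and the published estimates quoted above also account for how the expansion rate changes over cosmic history.

    # Minimal "Hubble time" estimate: age ~ 1/H0, ignoring any acceleration or
    # deceleration of the expansion (illustrative sketch, not the NAS calculation).

    KM_PER_MPC = 3.0857e19       # kilometers in one megaparsec
    SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

    def hubble_time_gyr(h0_km_s_per_mpc):
        """Return 1/H0 expressed in billions of years."""
        h0_per_second = h0_km_s_per_mpc / KM_PER_MPC   # convert H0 to units of 1/s
        return (1.0 / h0_per_second) / SECONDS_PER_GYR

    # Assumed sample values of H0 in km/s per Mpc (hypothetical, for illustration).
    for h0 in (50.0, 65.0, 100.0):
        print(f"H0 = {h0:5.1f} km/s/Mpc -> 1/H0 is about {hubble_time_gyr(h0):.1f} billion years")

A value near 65 km/s per Mpc gives roughly 15 billion years, the same order as the estimates quoted above.]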
The very low amounts of elements heavier than hydrogen and helium in these stars indicate that they must have formed early in the history of the Galaxy, before large amounts of heavy elements were created inside the initial generations of stars and later distributed into the interstellar medium through supernova explosions (the Big Bang itself created primarily hydrogen and helium atoms). Estimates of the ages of the stars in globular clusters fall within the range of 11 billion to 16 billion years. A second method for estimating the age of our galaxy is based on the present abundances of several long-lived radioactive elements in the solar system. Their abundances are set by their rates of production and distribution through exploding supernovas. According to these calculations, the age of our galaxy is between 9 billion and 16 billion years. Thus, both ways of estimating the age of the Milky Way galaxy agree with each other, and they also are consistent with the independently derived estimate for the age of the universe. Radioactive elements occurring naturally in rocks and minerals also provide a means of estimating the age of the solar system and Earth. Several of these elements decay with half lives between 700 million and more than 100 billion years (the half life of an element is the time it takes for half of the element to decay radioactively into another element). Using these time-keepers, it is calculated that meteorites, which are fragments of asteroids, formed between 4.53 billion and 4.58 billion years ago (asteroids are small "planetoids" that revolve around the sun and are remnants of the solar nebula that gave rise to the sun and planets). The same radioactive time-keepers applied to the three oldest lunar samples returned to Earth by the Apollo astronauts yield ages between 4.4 billion and 4.5 billion years, providing minimum estimates for the time since the formation of the moon. The oldest known rocks on Earth occur in northwestern Canada (3.96 billion years), but well-studied rocks nearly as old are also found in other parts of the world. In Western Australia, zircon crystals encased within younger rocks have ages as old as 4.3 billion years, making these tiny crystals the oldest materials so far found on Earth. The best estimates of Earth's age are obtained by calculating the time required for development of the observed lead isotopes in Earth's oldest lead ores. These estimates yield 4.54 billion years as the age of Earth and of meteorites, and hence of the solar system. The origins of life cannot be dated as precisely, but there is evidence that bacteria-like organisms lived on Earth 3.5 billion years ago, and they may have existed even earlier, when the first solid crust formed, almost 4 billion years ago. These early organisms must have been simpler than the organisms living today. Furthermore, before the earliest organisms there must have been structures that one would not call "alive" but that are now components of living things. Today, all living organisms store and transmit hereditary information using two kinds of molecules: DNA and RNA. Each of these molecules is in turn composed of four kinds of subunits known as nucleotides. The sequences of nucleotides in particular lengths of DNA or RNA, known as genes, direct the construction of molecules known as proteins, which in turn catalyze biochemical reactions, provide structural components for organisms, and perform many of the other functions on which life depends. 
Proteins consist of chains of subunits known as amino acids. The sequence of nucleotides in DNA and RNA therefore determines the sequence of amino acids in proteins; this is a central mechanism in all of biology. Experiments conducted under conditions intended to resemble those present on primitive Earth have resulted in the production of some of the chemical components of proteins, DNA, and RNA. Some of these molecules also have been detected in meteorites from outer space and in interstellar space by astronomers using radiotelescopes. Scientists have concluded that the "building blocks of life" could have been available early in Earth's history. An important new research avenue has opened with the discovery that certain molecules made of RNA, called ribozymes, can act as catalysts in modern cells. It previously had been thought that only proteins could serve as the catalysts required to carry out specific biochemical functions. Thus, in the early prebiotic world, RNA molecules could have been "autocatalytic"--that is, they could have replicated themselves well before there were any protein catalysts (called enzymes). Laboratory experiments demonstrate that replicating autocatalytic RNA molecules undergo spontaneous changes and that the variants of RNA molecules with the greatest autocatalytic activity come to prevail in their environments. Some scientists favor the hypothesis that there was an early "RNA world," and they are testing models that lead from RNA to the synthesis of simple DNA and protein molecules. These assemblages of molecules eventually could have become packaged within membranes, thus making up "protocells"--early versions of very simple cells. For those who are studying the origin of life, the question is no longer whether life could have originated by chemical processes involving nonbiological components. The question instead has become which of many pathways might have been followed to produce the first cells. Will we ever be able to identify the path of chemical evolution that succeeded in initiating life on Earth? Scientists are designing experiments and speculating about how early Earth could have provided a hospitable site for the segregation of molecules in units that might have been the first living systems. The recent speculation includes the possibility that the first living cells might have arisen on Mars, seeding Earth via the many meteorites that are known to travel from Mars to our planet. Of course, even if a living cell were to be made in the laboratory, it would not prove that nature followed the same pathway billions of years ago. But it is the job of science to provide plausible natural explanations for natural phenomena. The study of the origin of life is a very active research area in which important progress is being made, although the consensus among scientists is that none of the current hypotheses has thus far been confirmed. The history of science shows that seemingly intractable problems like this one may become amenable to solution later, as a result of advances in theory, instrumentation, or the discovery of new facts. Creationist Views of the Origin of the Universe, Earth, and Life Many religious persons, including many scientists, hold that God created the universe and the various processes driving physical and biological evolution and that these processes then resulted in the creation of galaxies, our solar system, and life on Earth. 
This belief, which sometimes is termed "theistic evolution," is not in disagreement with scientific explanations of evolution. Indeed, it reflects the remarkable and inspiring character of the physical universe revealed by cosmology, paleontology, molecular biology, and many other scientific disciplines. The advocates of "creation science" hold a variety of viewpoints. Some claim that Earth and the universe are relatively young, perhaps only 6,000 to 10,000 years old. These individuals often believe that the present physical form of Earth can be explained by "catastrophism," including a worldwide flood, and that all living things (including humans) were created miraculously, essentially in the forms we now find them. Other advocates of creation science are willing to accept that Earth, the planets, and the stars may have existed for millions of years. But they argue that the various types of organisms, and especially humans, could only have come about with supernatural intervention, because they show "intelligent design." In this booklet, both these "Young Earth" and "Old Earth" views are referred to as "creationism" or "special creation." There are no valid scientific data or calculations to substantiate the belief that Earth was created just a few thousand years ago. This document has summarized the vast amount of evidence for the great age of the universe, our galaxy, the solar system, and Earth from astronomy, astrophysics, nuclear physics, geology, geochemistry, and geophysics. Independent scientific methods consistently give an age for Earth and the solar system of about 5 billion years, and an age for our galaxy and the universe that is two to three times greater. These conclusions make the origin of the universe as a whole intelligible, lend coherence to many different branches of science, and form the core conclusions of a remarkable body of knowledge about the origins and behavior of the physical world. Nor is there any evidence that the entire geological record, with its orderly succession of fossils, is the product of a single universal flood that occurred a few thousand years ago, lasted a little longer than a year, and covered the highest mountains to a depth of several meters. On the contrary, intertidal and terrestrial deposits demonstrate that at no recorded time in the past has the entire planet been under water. Moreover, a universal flood of sufficient magnitude to form the sedimentary rocks seen today, which together are many kilometers thick, would require a volume of water far greater than has ever existed on and in Earth, at least since the formation of the first known solid crust about 4 billion years ago. The belief that Earth's sediments, with their fossils, were deposited in an orderly sequence in a year's time defies all geological observations and physical principles concerning sedimentation rates and possible quantities of suspended solid matter. Geologists have constructed a detailed history of sediment deposition that links particular bodies of rock in the crust of Earth to particular environments and processes. If petroleum geologists could find more oil and gas by interpreting the record of sedimentary rocks as having resulted from a single flood, they would certainly favor the idea of such a flood, but they do not. Instead, these practical workers agree with academic geologists about the nature of depositional environments and geological time. 
Petroleum geologists have been pioneers in the recognition of fossil deposits that were formed over millions of years in such environments as meandering rivers, deltas, sandy barrier beaches, and coral reefs. The example of petroleum geology demonstrates one of the great strengths of science. By using knowledge of the natural world to predict the consequences of our actions, science makes it possible to solve problems and create opportunities using technology. The detailed knowledge required to sustain our civilization could only have been derived through scientific investigation.

The arguments of creationists are not driven by evidence that can be observed in the natural world. Special creation or supernatural intervention is not subjectable to meaningful tests, which require predicting plausible results and then checking these results through observation and experimentation. Indeed, claims of "special creation" reverse the scientific process. The explanation is seen as unalterable, and evidence is sought only to support a particular conclusion by whatever means possible.

Evidence Supporting Biological Evolution

A long path leads from the origins of primitive "life," which existed at least 3.5 billion years ago, to the profusion and diversity of life that exists today. This path is best understood as a product of evolution. Contrary to popular opinion, neither the term nor the idea of biological evolution began with Charles Darwin and his foremost work, On the Origin of Species by Means of Natural Selection (1859). Many scholars from the ancient Greek philosophers on had inferred that similar species were descended from a common ancestor. The word "evolution" first appeared in the English language in 1647 in a nonbiological connection, and it became widely used in English for all sorts of progressions from simpler beginnings. The term Darwin most often used to refer to biological evolution was "descent with modification," which remains a good brief definition of the process today.

Darwin proposed that evolution could be explained by the differential survival of organisms following their naturally occurring variation--a process he termed "natural selection." According to this view, the offspring of organisms differ from one another and from their parents in ways that are heritable--that is, they can pass on the differences genetically to their own offspring. Furthermore, organisms in nature typically produce more offspring than can survive and reproduce given the constraints of food, space, and other environmental resources. If a particular offspring has traits that give it an advantage in a particular environment, that organism will be more likely to survive and pass on those traits. As differences accumulate over generations, populations of organisms diverge from their ancestors.

Darwin's original hypothesis has undergone extensive modification and expansion, but the central concepts stand firm. Studies in genetics and molecular biology--fields unknown in Darwin's time--have explained the occurrence of the hereditary variations that are essential to natural selection. Genetic variations result from changes, or mutations, in the nucleotide sequence of DNA, the molecule that genes are made from. Such changes in DNA now can be detected and described with great precision. Genetic mutations arise by chance. They may or may not equip the organism with better means for surviving in its environment.
But if a gene variant improves adaptation to the environment (for example, by allowing an organism to make better use of an available nutrient, or to escape predators more effectively--such as through stronger legs or disguising coloration), the organisms carrying that gene are more likely to survive and reproduce than those without it. Over time, their descendants will tend to increase, changing the average characteristics of the population. Although the genetic variation on which natural selection works is based on random or chance elements, natural selection itself produces "adaptive" change--the very opposite of chance. Scientists also have gained an understanding of the processes by which new species originate. A new species is one in which the individuals cannot mate and produce viable descendants with individuals of a preexisting species. The split of one species into two often starts because a group of individuals becomes geographically separated from the rest. This is particularly apparent in distant remote islands, such as the Galápagos and the Hawaiian archipelago, whose great distance from the Americas and Asia means that arriving colonizers will have little or no opportunity to mate with individuals remaining on those continents. Mountains, rivers, lakes, and other natural barriers also account for geographic separation between populations that once belonged to the same species. Once isolated, geographically separated groups of individuals become genetically differentiated as a consequence of mutation and other processes, including natural selection. The origin of a species is often a gradual process, so that at first the reproductive isolation between separated groups of organisms is only partial, but it eventually becomes complete. Scientists pay special attention to these intermediate situations, because they help to reconstruct the details of the process and to identify particular genes or sets of genes that account for the reproductive isolation between species. A particularly compelling example of speciation involves the 13 species of finches studied by Darwin on the Galápagos Islands, now known as Darwin's finches. The ancestors of these finches appear to have emigrated from the South American mainland to the Galápagos. Today the different species of finches on the islands have distinct habitats, diets, and behaviors, but the mechanisms involved in speciation continue to operate. A research group led by Peter and Rosemary Grant of Princeton University has shown that a single year of drought on the islands can drive evolutionary changes in the finches. Drought diminishes supplies of easily cracked nuts but permits the survival of plants that produce larger, tougher nuts. Droughts thus favor birds with strong, wide beaks that can break these tougher seeds, producing populations of birds with these traits. The Grants have estimated that if droughts occur about once every 10 years on the islands, a new species of finch might arise in only about 200 years. The following sections consider several aspects of biological evolution in greater detail, looking at paleontology, comparative anatomy, biogeography, embryology, and molecular biology for further evidence supporting evolution. The Fossil Record Although it was Darwin, above all others, who first marshaled convincing evidence for biological evolution, earlier scholars had recognized that organisms on Earth had changed systematically over long periods of time. 
For example, in 1799 an engineer named William Smith reported that, in undisrupted layers of rock, fossils occurred in a definite sequential order, with more modern-appearing ones closer to the top. Because bottom layers of rock logically were laid down earlier and thus are older than top layers, the sequence of fossils also could be given a chronology from oldest to youngest. His findings were confirmed and extended in the 1830s by the paleontologist William Lonsdale, who recognized that fossil remains of organisms from lower strata were more primitive than the ones above. Today, many thousands of ancient rock deposits have been identified that show corresponding successions of fossil organisms. Thus, the general sequence of fossils had already been recognized before Darwin conceived of descent with modification. But the paleontologists and geologists before Darwin used the sequence of fossils in rocks not as proof of biological evolution, but as a basis for working out the original sequence of rock strata that had been structurally disturbed by earthquakes and other forces. In Darwin's time, paleontology was still a rudimentary science. Large parts of the geological succession of stratified rocks were unknown or inadequately studied. Darwin, therefore, worried about the rarity of intermediate forms between some major groups of organisms. Today, many of the gaps in the paleontological record have been filled by the research of paleontologists. Hundreds of thousands of fossil organisms, found in well-dated rock sequences, represent successions of forms through time and manifest many evolutionary transitions. As mentioned earlier, microbial life of the simplest type was already in existence 3.5 billion years ago. The oldest evidence of more complex organisms (that is, eucaryotic cells, which are more complex than bacteria) has been discovered in fossils sealed in rocks approximately 2 billion years old. Multicellular organisms, which are the familiar fungi, plants, and animals, have been found only in younger geological strata. The following list presents the order in which increasingly complex forms of life appeared (figures are millions of years since first known appearance, approximate):

Microbial (procaryotic cells) -- 3,500
Complex (eucaryotic cells) -- 2,000
First multicellular animals -- 670
Shell-bearing animals -- 540
Vertebrates (simple fishes) -- 490
Amphibians -- 350
Reptiles -- 310
Mammals -- 200
Nonhuman primates -- 60
Earliest apes -- 25
Australopithecine ancestors of humans -- 4
Modern humans -- 0.15 (150,000 years)

So many intermediate forms have been discovered between fish and amphibians, between amphibians and reptiles, between reptiles and mammals, and along the primate lines of descent that it often is difficult to identify categorically when the transition occurs from one to another particular species. Actually, nearly all fossils can be regarded as intermediates in some sense; they are life forms that come between the forms that preceded them and those that followed. The fossil record thus provides consistent evidence of systematic change through time--of descent with modification. From this huge body of evidence, it can be predicted that no reversals will be found in future paleontological studies. That is, amphibians will not appear before fishes, nor mammals before reptiles, and no complex life will occur in the geological record before the oldest eucaryotic cells. This prediction has been upheld by the evidence that has accumulated until now: no reversals have been found. 
Common Structures Inferences about common descent derived from paleontology are reinforced by comparative anatomy. For example, the skeletons of humans, mice, and bats are strikingly similar, despite the different ways of life of these animals and the diversity of environments in which they flourish. The correspondence of these animals, bone by bone, can be observed in every part of the body, including the limbs; yet a person writes, a mouse runs, and a bat flies with structures built of bones that are different in detail but similar in general structure and relation to each other. Scientists call such structures homologies and have concluded that they are best explained by common descent. Comparative anatomists investigate such homologies, not only in bone structure but also in other parts of the body, working out relationships from degrees of similarity. Their conclusions provide important inferences about the details of evolutionary history, inferences that can be tested by comparisons with the sequence of ancestral forms in the paleontological record. The mammalian ear and jaw are instances in which paleontology and comparative anatomy combine to show common ancestry through transitional stages. The lower jaws of mammals contain only one bone, whereas those of reptiles have several. The other bones in the reptile jaw are homologous with bones now found in the mammalian ear. Paleontologists have discovered intermediate forms of mammal-like reptiles (Therapsida) with a double jaw joint--one composed of the bones that persist in mammalian jaws, the other consisting of bones that eventually became the hammer and anvil of the mammalian ear. The Distribution of Species Biogeography also has contributed evidence for descent from common ancestors. The diversity of life is stupendous. Approximately 250,000 species of living plants, 100,000 species of fungi, and one million species of animals have been described and named, each occupying its own peculiar ecological setting or niche; and the census is far from complete. Some species, such as human beings and our companion the dog, can live under a wide range of environments. Others are amazingly specialized. One species of a fungus (Laboulbenia) grows exclusively on the rear portion of the covering wings of a single species of beetle (Aphaenops cronei) found only in some caves of southern France. The larvae of the fly Drosophila carcinophila can develop only in specialized grooves beneath the flaps of the third pair of oral appendages of a land crab that is found only on certain Caribbean islands. How can we make intelligible the colossal diversity of living beings and the existence of such extraordinary, seemingly whimsical creatures as the fungus, beetle, and fly described above? And why are island groups like the Galápagos so often inhabited by forms similar to those on the nearest mainland but belonging to different species? Evolutionary theory explains that biological diversity results from the descendants of local or migrant predecessors becoming adapted to their diverse environments. This explanation can be tested by examining present species and local fossils to see whether they have similar structures, which would indicate how one is derived from the other. Also, there should be evidence that species without an established local ancestry had migrated into the locality. Wherever such tests have been carried out, these conditions have been confirmed. 
A good example is provided by the mammalian populations of North and South America, where strikingly different native organisms evolved in isolation until the emergence of the isthmus of Panama approximately 3 million years ago. Thereafter, the armadillo, porcupine, and opossum--mammals of South American origin--migrated north, along with many other species of plants and animals, while the mountain lion and other North American species made their way across the isthmus to the south. The evidence that Darwin found for the influence of geographical distribution on the evolution of organisms has become stronger with advancing knowledge. For example, approximately 2,000 species of flies belonging to the genus Drosophila are now found throughout the world. About one-quarter of them live only in Hawaii. More than a thousand species of snails and other land mollusks also are found only in Hawaii. The biological explanation for the multiplicity of related species in remote localities is that such great diversity is a consequence of their evolution from a few common ancestors that colonized an isolated environment. The Hawaiian Islands are far from any mainland or other islands, and on the basis of geological evidence they never have been attached to other lands. Thus, the few colonizers that reached the Hawaiian Islands found many available ecological niches, where they could, over numerous generations, undergo evolutionary change and diversification. No mammals other than one bat species lived in the Hawaiian Islands when the first human settlers arrived; similarly, many other kinds of plants and animals were absent. The Hawaiian Islands are not less hospitable than other parts of the world for the absent species. For example, pigs and goats have multiplied in the wild in Hawaii, and other domestic animals also thrive there. The scientific explanation for the absence of many kinds of organisms, and the great multiplication of a few kinds, is that many sorts of organisms never reached the islands, because of their geographic isolation. Those that did reach the islands diversified over time because of the absence of related organisms that would compete for resources. Similarities During Development Embryology, the study of biological development from the time of conception, is another source of independent evidence for common descent. Barnacles, for instance, are sedentary crustaceans with little apparent similarity to such other crustaceans as lobsters, shrimps, or copepods. Yet barnacles pass through a free-swimming larval stage in which they look like other crustacean larvae. The similarity of larval stages supports the conclusion that all crustaceans have homologous parts and a common ancestry. Similarly, a wide variety of organisms from fruit flies to worms to mice to humans have very similar sequences of genes that are active early in development. These genes influence body segmentation or orientation in all these diverse groups. The presence of such similar genes doing similar things across such a wide range of organisms is best explained by their having been present in a very early common ancestor of all of these groups. New Evidence from Molecular Biology The unifying principle of common descent that emerges from all the foregoing lines of evidence is being reinforced by the discoveries of modern biochemistry and molecular biology. The code used to translate nucleotide sequences into amino acid sequences is essentially the same in all organisms. 
Moreover, proteins in all organisms are invariably composed of the same set of 20 amino acids. This unity of composition and function is a powerful argument in favor of the common descent of the most diverse organisms. In 1959, scientists at Cambridge University in the United Kingdom determined the three-dimensional structures of two proteins that are found in almost every multicelled animal: hemoglobin and myoglobin. Hemoglobin is the protein that carries oxygen in the blood. Myoglobin receives oxygen from hemoglobin and stores it in the tissues until needed. These were the first three-dimensional protein structures to be solved, and they yielded some key insights. Myoglobin has a single chain of 153 amino acids wrapped around a group of iron and other atoms (called "heme") to which oxygen binds. Hemoglobin, in contrast, is made up of four chains: two identical chains consisting of 141 amino acids, and two other identical chains consisting of 146 amino acids. However, each chain has a heme exactly like that of myoglobin, and each of the four chains in the hemoglobin molecule is folded exactly like myoglobin. It was immediately obvious in 1959 that the two molecules are very closely related. During the next two decades, myoglobin and hemoglobin sequences were determined for dozens of mammals, birds, reptiles, amphibians, fish, worms, and molluscs. All of these sequences were so obviously related that they could be compared with confidence with the three-dimensional structures of two selected standards--whale myoglobin and horse hemoglobin. Even more significantly, the differences between sequences from different organisms could be used to construct a family tree of hemoglobin and myoglobin variation among organisms. This tree agreed completely with observations derived from paleontology and anatomy about the common descent of the corresponding organisms. Similar family histories have been obtained from the three-dimensional structures and amino acid sequences of other proteins, such as cytochrome c (a protein engaged in energy transfer) and the digestive proteins trypsin and chymotrypsin. The examination of molecular structure offers a new and extremely powerful tool for studying evolutionary relationships. The quantity of information is potentially huge--as large as the thousands of different proteins contained in living organisms, and limited only by the time and resources of molecular biologists. As the ability to sequence the nucleotides making up DNA has improved, it also has become possible to use genes to reconstruct the evolutionary history of organisms. Because of mutations, the sequence of nucleotides in a gene gradually changes over time. The more closely related two organisms are, the less different their DNA will be. Because there are tens of thousands of genes in humans and other organisms, DNA contains a tremendous amount of information about the evolutionary history of each organism. Genes evolve at different rates because, although mutation is a random event, some proteins are much more tolerant of changes in their amino acid sequence than are other proteins. For this reason, the genes that encode these more tolerant, less constrained proteins evolve faster. The average rate at which a particular kind of gene or protein evolves gives rise to the concept of a "molecular clock." Molecular clocks run rapidly for less constrained proteins and slowly for more constrained proteins, though they all time the same evolutionary events. 
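To make the clock arithmetic concrete, here is a minimal sketch in Python (not part of the booklet; the sequences and the rates in it are illustrative assumptions, chosen only to be of the same order as the fibrinopeptide, hemoglobin, and cytochrome c figures discussed in the next paragraph). It counts the fraction of aligned positions at which two protein fragments differ and converts that percentage into a rough divergence time by multiplying by an assumed "millions of years per 1 percent change" rate.

# Minimal sketch of the molecular-clock idea described above.
# The rates below are illustrative assumptions (Myr per 1% sequence change),
# not calibrated values from the booklet.
ILLUSTRATIVE_RATES_MYR_PER_PERCENT = {
    "fibrinopeptide": 1.1,   # fast clock (assumed)
    "hemoglobin": 5.8,       # intermediate clock (assumed)
    "cytochrome_c": 20.0,    # slow clock (assumed)
}

def percent_difference(seq_a: str, seq_b: str) -> float:
    """Percent of aligned positions at which two equal-length sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must already be aligned to equal length")
    mismatches = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return 100.0 * mismatches / len(seq_a)

def divergence_time_myr(seq_a: str, seq_b: str, protein: str) -> float:
    """Rough divergence time (millions of years) under a linear clock model."""
    return percent_difference(seq_a, seq_b) * ILLUSTRATIVE_RATES_MYR_PER_PERCENT[protein]

if __name__ == "__main__":
    # Hypothetical 20-residue aligned fragments differing at 2 positions (10%).
    a = "MVLSPADKTNVKAAWGKVGA"
    b = "MVLSPADKTNIKAAWGKVGG"
    print(f"difference: {percent_difference(a, b):.1f}%")
    print(f"cytochrome-c-like clock: ~{divergence_time_myr(a, b, 'cytochrome_c'):.0f} Myr")

A real analysis would use calibrated rates and correct for multiple substitutions at the same site, but the proportionality between accumulated sequence difference and elapsed time is the essential idea behind the clocks compared below.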
The figure on this page compares three molecular clocks: for cytochrome c proteins, which interact intimately with other macromolecules and are quite constrained in their amino acid sequences; for the less rigidly constrained hemoglobins, which interact mainly with oxygen and other small molecules; and for fibrinopeptides, which are protein fragments that are cut from larger proteins (fibrinogens) when blood clots. The clock for fibrinopeptides runs rapidly; 1 percent of the amino acids change in a little longer than 1 million years. At the other extreme, the molecular clock runs slowly for cytochrome c; a 1 percent change in amino acid sequence requires 20 million years. The hemoglobin clock is intermediate. The concept of a molecular clock is useful for two purposes. It determines evolutionary relationships among organisms, and it indicates the time in the past when species started to diverge from one another. Once the clock for a particular gene or protein has been calibrated by reference to some event whose time is known, the actual chronological time when all other events occurred can be determined by examining the protein or gene tree. An interesting additional line of evidence supporting evolution involves sequences of DNA known as "pseudogenes." Pseudogenes are remnants of genes that no longer function but continue to be carried along in DNA as excess baggage. Pseudogenes also change through time, as they are passed on from ancestors to descendants, and they offer an especially useful way of reconstructing evolutionary relationships. With functioning genes, one possible explanation for the relative similarity between genes from different organisms is that their ways of life are similar--for example, the genes from a horse and a zebra could be more similar because of their similar habitats and behaviors than the genes from a horse and a tiger. But this possible explanation does not work for pseudogenes, since they perform no function. Rather, the degree of similarity between pseudogenes must simply reflect their evolutionary relatedness. The more remote the last common ancestor of two organisms, the more dissimilar their pseudogenes will be. The evidence for evolution from molecular biology is overwhelming and is growing quickly. In some cases, this molecular evidence makes it possible to go beyond the paleontological evidence. For example, it has long been postulated that whales descended from land mammals that had returned to the sea. From anatomical and paleontological evidence, the whales' closest living land relatives seemed to be the even-toed hoofed mammals (modern cattle, sheep, camels, goats, etc.). Recent comparisons of some milk protein genes (beta-casein and kappa-casein) have confirmed this relationship and have suggested that the closest land-bound living relative of whales may be the hippopotamus. In this case, molecular biology has augmented the fossil record. Creationism and the Evidence for Evolution Some creationists cite what they say is an incomplete fossil record as evidence for the failure of evolutionary theory. The fossil record was incomplete in Darwin's time, but many of the important gaps that existed then have been filled by subsequent paleontological research. Perhaps the most persuasive fossil evidence for evolution is the consistency of the sequence of fossils from early to recent. Nowhere on Earth do we find, for example, mammals in Devonian (the age of fishes) strata, or human fossils coexisting with dinosaur remains. 
Undisturbed strata with simple unicellular organisms predate those with multicellular organisms, and invertebrates precede vertebrates; nowhere has this sequence been found inverted. Fossils from adjacent strata are more similar than fossils from temporally distant strata. The most reasonable scientific conclusion that can be drawn from the fossil record is that descent with modification has taken place as stated in evolutionary theory. Special creationists argue that "no one has seen evolution occur." This misses the point about how science tests hypotheses. We don't see Earth going around the sun or the atoms that make up matter. We "see" their consequences. Scientists infer that atoms exist and Earth revolves because they have tested predictions derived from these concepts by extensive observation and experimentation. Furthermore, on a minor scale, we "experience" evolution occurring every day. The annual changes in influenza viruses and the emergence of antibiotic-resistant bacteria are both products of evolutionary forces. Indeed, the rapidity with which organisms with short generation times, such as bacteria and viruses, can evolve under the influence of their environments is of great medical significance. Many laboratory experiments have shown that, because of mutation and natural selection, such microorganisms can change in specific ways from those of immediately preceding generations. On a larger scale, the evolution of mosquitoes resistant to insecticides is another example of the tenacity and adaptability of organisms under environmental stress. Similarly, malaria parasites have become resistant to the drugs that were used extensively to combat them for many years. As a consequence, malaria is on the increase, with more than 300 million clinical cases of malaria occurring every year. Molecular evolutionary data counter a recent proposition called "intelligent design theory." Proponents of this idea argue that structural complexity is proof of the direct hand of God in specially creating organisms as they are today. These arguments echo those of the 18th century cleric William Paley who held that the vertebrate eye, because of its intricate organization, had been specially designed in its present form by an omnipotent Creator. Modern-day intelligent design proponents argue that molecular structures such as DNA, or molecular processes such as the many steps that blood goes through when it clots, are so irreducibly complex that they can function only if all the components are operative at once. Thus, proponents of intelligent design say that these structures and processes could not have evolved in the stepwise mode characteristic of natural selection. However, structures and processes that are claimed to be "irreducibly" complex typically are not on closer inspection. For example, it is incorrect to assume that a complex structure or biochemical process can function only if all its components are present and functioning as we see them today. Complex biochemical systems can be built up from simpler systems through natural selection. Thus, the "history" of a protein can be traced through simpler organisms. Jawless fish have a simpler hemoglobin than do jawed fish, which in turn have a simpler hemoglobin than mammals. The evolution of complex molecular systems can occur in several ways. 
Natural selection can bring together parts of a system for one function at one time and then, at a later time, recombine those parts with other systems of components to produce a system that has a different function. Genes can be duplicated, altered, and then amplified through natural selection. The complex biochemical cascade resulting in blood clotting has been explained in this fashion. Similarly, evolutionary mechanisms are capable of explaining the origin of highly complex anatomical structures. For example, eyes may have evolved independently many times during the history of life on Earth. The steps proceed from a simple eye spot made up of light-sensitive retinula cells (as is now found in the flatworm), to formation of individual photosensitive units (ommatidia) in insects with light focusing lenses, to the eventual formation of an eye with a single lens focusing images onto a retina. In humans and other vertebrates, the retina consists not only of photoreceptor cells but also of several types of neurons that begin to analyze the visual image. Through such gradual steps, very different kinds of eyes have evolved, from simple light-sensing organs to highly complex systems for vision. Human Evolution Studies in evolutionary biology have led to the conclusion that human beings arose from ancestral primates. This association was hotly debated among scientists in Darwin's day. But today there is no significant scientific doubt about the close evolutionary relationships among all primates, including humans. Many of the most important advances in paleontology over the past century relate to the evolutionary history of humans. Not one but many connecting links--intermediate between and along various branches of the human family tree--have been found as fossils. These linking fossils occur in geological deposits of intermediate age. They document the time and rate at which primate and human evolution occurred. Scientists have unearthed thousands of fossil specimens representing members of the human family. A great number of these cannot be assigned to the modern human species, Homo sapiens. Most of these specimens have been well dated, often by means of radiometric techniques. They reveal a well-branched tree, parts of which trace a general evolutionary sequence leading from ape-like forms to modern humans. Paleontologists have discovered numerous species of extinct apes in rock strata that are older than four million years, but never a member of the human family at that great age. Australopithecus, whose earliest known fossils are about four million years old, is a genus with some features closer to apes and some closer to modern humans. In brain size, Australopithecus was barely more advanced than apes. A number of features, including long arms, short legs, intermediate toe structure, and features of the upper limb, indicate that the members of this species spent part of the time in trees. But they also walked upright on the ground, like humans. Bipedal tracks of Australopithecus have been discovered, beautifully preserved with those of other extinct animals, in hardened volcanic ash. Most of our Australopithecus ancestors died out close to two-and-a-half million years ago, while other Australopithecus species, which were on side branches of the human tree, survived alongside more advanced hominids for another million years. Distinctive bones of the oldest species of the human genus, Homo, date back to rock strata about 2.4 million years old. 
Physical anthropologists agree that Homo evolved from one of the species of Australopithecus. By two million years ago, early members of Homo had an average brain size one-and-a-half times larger than that of Australopithecus, though still substantially smaller than that of modern humans. The shapes of the pelvic and leg bones suggest that these early Homo were not part-time climbers like Australopithecus but walked and ran on long legs, as modern humans do. Just as Australopithecus showed a complex of ape-like, human-like, and intermediate features, so was early Homo intermediate between Australopithecus and modern humans in some features, and close to modern humans in other respects. The earliest stone tools are of virtually the same age as the earliest fossils of Homo. Early Homo, with its larger brain than Australopithecus, was a maker of stone tools. The fossil record for the interval between 2.4 million years ago and the present includes the skeletal remains of several species assigned to the genus Homo. The more recent species had larger brains than the older ones. This fossil record is complete enough to show that the human genus first spread from its place of origin in Africa to Europe and Asia a little less than two million years ago. Distinctive types of stone tools are associated with various populations. More recent species with larger brains generally used more sophisticated tools than more ancient species. Molecular biology also has provided strong evidence of the close relationship between humans and apes. Analysis of many proteins and genes has shown that humans are genetically similar to chimpanzees and gorillas and less similar to orangutans and other primates. DNA has even been extracted from a well-preserved skeleton of the extinct human creature known as Neanderthal, a member of the genus Homo and often considered either as a subspecies of Homo sapiens or as a separate species. Application of the molecular clock, which makes use of known rates of genetic mutation, suggests that Neanderthal's lineage diverged from that of modern Homo sapiens less than half a million years ago, which is entirely compatible with evidence from the fossil record. Based on molecular and genetic data, evolutionists favor the hypothesis that modern Homo sapiens, individuals very much like us, evolved from more archaic humans about 100,000 to 150,000 years ago. They also believe that this transition occurred in Africa, with modern humans then dispersing to Asia, Europe, and eventually Australasia and the Americas. Discoveries of hominid remains during the past three decades in East and South Africa, the Middle East, and elsewhere have combined with advances in molecular biology to initiate a new discipline--molecular paleoanthropology. This field of inquiry is providing an ever-growing inventory of evidence for a genetic affinity between human beings and the African apes. Opinion polls show that many people believe that divine intervention actively guided the evolution of human beings. Science cannot comment on the role that supernatural forces might play in human affairs. But scientific investigations have concluded that the same forces responsible for the evolution of all other life forms on Earth can account for the evolution of human beings. Conclusion Science is not the only way of acquiring knowledge about ourselves and the world around us. Humans gain understanding in many other ways, such as through literature, the arts, philosophical reflection, and religious experience. 
Scientific knowledge may enrich aesthetic and moral perceptions, but these subjects extend beyond science's realm, which is to obtain a better understanding of the natural world. The claim that equity demands balanced treatment of evolutionary theory and special creation in science classrooms reflects a misunderstanding of what science is and how it is conducted. Scientific investigators seek to understand natural phenomena by observation and experimentation. Scientific interpretations of facts and the explanations that account for them therefore must be testable by observation and experimentation. Creationism, intelligent design, and other claims of supernatural intervention in the origin of life or of species are not science because they are not testable by the methods of science. These claims subordinate observed data to statements based on authority, revelation, or religious belief. Documentation offered in support of these claims is typically limited to the special publications of their advocates. These publications do not offer hypotheses subject to change in light of new data, new interpretations, or demonstration of error. This contrasts with science, where any hypothesis or theory always remains subject to the possibility of rejection or modification in the light of new knowledge. No body of beliefs that has its origin in doctrinal material rather than scientific observation, interpretation, and experimentation should be admissible as science in any science course. Incorporating the teaching of such doctrines into a science curriculum compromises the objectives of public education. Science has been greatly successful at explaining natural processes, and this has led not only to increased understanding of the universe but also to major improvements in technology and public health and welfare. The growing role that science plays in modern life requires that science, and not religion, be taught in science classes. Appendix Frequently Asked Questions* What is evolution? Evolution in the broadest sense explains that what we see today is different from what existed in the past. Galaxies, stars, the solar system, and Earth have changed through time, and so has life on Earth. Biological evolution concerns changes in living things during the history of life on Earth. It explains that living things share common ancestors. Over time, biological processes such as natural selection give rise to new species. Darwin called this process "descent with modification," which remains a good definition of biological evolution today. Isn't evolution just an inference? No one saw the evolution of one-toed horses from three-toed horses, but that does not mean that we cannot be confident that horses evolved. Science is practiced in many ways besides direct observation and experimentation. Much scientific discovery is done through indirect experimentation and observation in which inferences are made, and hypotheses generated from those inferences are tested. For instance, particle physicists cannot directly observe subatomic particles because the particles are too small. They make inferences about the weight, speed, and other properties of the particles based on other observations. A logical hypothesis might be something like this: If the weight of this particle is Y, when I bombard it, X will happen. If X does not happen, then the hypothesis is disproved. Thus, we can learn about the natural world even if we cannot directly observe a phenomenon--and that is true about the past, too. 
In historical sciences like astronomy, geology, evolutionary biology, and archaeology, logical inferences are made and then tested against data. Sometimes the test cannot be made until new data are available, but a great deal has been done to help us understand the past. For example, scorpionflies (Mecoptera) and true flies (Diptera) have enough similarities that entomologists consider them to be closely related. Scorpionflies have four wings of about the same size, and true flies have a large front pair of wings but the back pair is replaced by small club-shaped structures. If two-winged flies evolved from scorpionfly-like ancestors, as comparative anatomy suggests, then an intermediate true fly with four wings should have existed--and in 1976 fossils of such a fly were discovered. Furthermore, geneticists have found that the number of wings in flies can be changed through mutations in a single gene. Something that happened in the past is thus not "off limits" for scientific study. Hypotheses can be made about such phenomena, and these hypotheses can be tested and can lead to solid conclusions. Furthermore, many key mechanisms of evolution occur over relatively short periods and can be observed directly--such as the evolution of bacteria resistant to antibiotics. Evolution is a well-supported theory drawn from a variety of sources of data, including observations about the fossil record, genetic information, the distribution of plants and animals, and the similarities across species of anatomy and development. Scientists have inferred that descent with modification offers the best scientific explanation for these observations. Is evolution a fact or a theory? The theory of evolution explains how life on Earth has changed. In scientific terms, "theory" does not mean "guess" or "hunch" as it does in everyday usage. Scientific theories are explanations of natural phenomena built up logically from testable observations and hypotheses. Biological evolution is the best scientific explanation we have for the enormous range of observations about the living world. Scientists most often use the word "fact" to describe an observation. But scientists can also use fact to mean something that has been tested or observed so many times that there is no longer a compelling reason to keep testing or looking for examples. The occurrence of evolution in this sense is a fact. Scientists no longer question whether descent with modification occurred because the evidence supporting the idea is so strong. Don't many famous scientists reject evolution? No. The scientific consensus around evolution is overwhelming. Those opposed to the teaching of evolution sometimes use quotations from prominent scientists out of context to claim that scientists do not support evolution. However, examination of the quotations reveals that the scientists are actually disputing some aspect of how evolution occurs, not whether evolution occurred. For example, the biologist Stephen Jay Gould once wrote that "the extreme rarity of transitional forms in the fossil record persists as the trade secret of paleontology." But Gould, an accomplished paleontologist and eloquent educator about evolution, was arguing about how evolution takes place. He was discussing whether the rate of change of species is slow and gradual or whether it takes place in bursts after long periods when little change occurs--an idea known as punctuated equilibrium. 
As Gould writes in response, "This quotation, although accurate as a partial citation, is dishonest in leaving out the following explanatory material showing my true purpose--to discuss rates of evolutionary change, not to deny the fact of evolution itself." Gould defines punctuated equilibrium as follows: Punctuated equilibrium is neither a creationist idea nor even a non-Darwinian evolutionary theory about sudden change that produces a new species all at once in a single generation. Punctuated equilibrium accepts the conventional idea that new species form over hundreds or thousands of generations and through an extensive series of intermediate stages. But geological time is so long that even a few thousand years may appear as a mere "moment" relative to the several million years of existence for most species. Thus, rates of evolution vary enormously and new species may appear to arise "suddenly" in geological time, even though the time involved would seem long, and the change very slow, when compared to a human lifetime. If humans evolved from apes, why are there still apes? Humans did not evolve from modern apes, but humans and modern apes shared a common ancestor, a species that no longer exists. Because we share a recent common ancestor with chimpanzees and gorillas, we have many anatomical, genetic, biochemical, and even behavioral similarities with these African great apes. We are less similar to the Asian apes--orangutans and gibbons--and even less similar to monkeys, because we share common ancestors with these groups in the more distant past. Evolution is a branching or splitting process in which populations split off from one another and gradually become different. As the two groups become isolated from each other, they stop sharing genes, and eventually genetic differences increase until members of the groups can no longer interbreed. At this point, they have become separate species. Through time, these two species might give rise to new species, and so on through millennia. Why can't we teach creation science in my school? The courts have ruled that "creation science" is actually a religious view. Because public schools must be religiously neutral under the U.S. Constitution, the courts have held that it is unconstitutional to present creation science as legitimate scholarship. In particular, in a trial in which supporters of creation science testified in support of their view, a district court declared that creation science does not meet the tenets of science as scientists use the term (McLean v. Arkansas Board of Education). The Supreme Court has held that it is illegal to require that creation science be taught when evolution is taught (Edwards v. Aguillard). In addition, district courts have decided that individual teachers cannot advocate creation science on their own (Peloza v. San Juan Capistrano School District and Webster v. New Lenox School District). (See Teaching About Evolution and the Nature of Science, Appendix A. National Academy of Sciences, Washington, D.C. 1998.) Teachers' organizations such as the National Science Teachers Association, the National Association of Biology Teachers, the National Science Education Leadership Association, and many others also have rejected the science and pedagogy of creation science and have strongly discouraged its presentation in the public schools. 
In addition, a coalition of religious and other organizations has noted in "A Joint Statement of Current Law" that "in science class, [schools] may present only genuinely scientific critiques of, or evidence for, any explanation of life on Earth, but not religious critiques (beliefs unverifiable by scientific methodology)." (See Teaching About Evolution and the Nature of Science, Appendices B and C, National Academy of Sciences, Washington, D.C., 1998.) Some argue that "fairness" demands the teaching of creationism along with evolution. But a science curriculum should cover science, not the religious views of particular groups or individuals. If evolution is taught in schools, shouldn't creationism be given equal time? Some religious groups deny that microorganisms cause disease, but the science curriculum should not therefore be altered to reflect this belief. Most people agree that students should be exposed to the best possible scholarship in each field. That scholarship is evaluated by professionals and educators in those fields. Scientists as well as educators have concluded that evolution--and only evolution--should be taught in science classes because it is the only scientific explanation for why the universe is the way it is today. Many people say that they want their children to be exposed to creationism in school, but there are thousands of different ideas about creation among the world's people. Comparative religions might comprise a worthwhile field of study, but not one appropriate for a science class. Furthermore, the U.S. Constitution states that schools must be religiously neutral, so legally a teacher cannot present any particular creationist view as being more "true" than others. *Adapted from Teaching About Evolution and the Nature of Science by the National Academy of Sciences (Washington, D.C.: National Academy Press, 1998). Recommended Readings Evolution Dawkins, Richard 1996 Climbing Mount Improbable, W.W. Norton: New York and London. An authoritative and elegant account of the evolutionary explanation of the "design" of organisms. Fortey, Richard 1998 Life: A Natural History of the First Four Billion Years of Life on Earth, Alfred P. Knopf: New York. A lively account of the history of life on Earth. Gould, Stephen J. 1992 The Panda's Thumb, W.W. Norton: New York. Gould's Natural History columns have been collected into a series of books including Hen's Teeth and Horse's Toes, An Urchin in the Storm, Eight Little Piggies, The Flamingo's Smile, and Bully for Brontosaurus. All are good popular introductions to the basic ideas behind evolution, and extremely readable. Horner, John R., and Edwin Dobb 1997 Dinosaur Lives: Unearthing an Evolutionary Saga, HarperCollins: New York. What it's like to uncover fossilized bones, eggs, and more, plus Horner's views on dinosaurs. Howells, W.W. 1997 Getting Here: The Story of Human Evolution, Compass Press: Washington, D.C. A very readable survey of human evolution by one of the fathers of physical anthropology. Johanson, Donald C., Lenora Johanson, and Blake Edgar 1994 Ancestors: In Search of Human Origins, Villard Books: New York. The companion volume to Johanson's NOVA series, "In Search of Human Origins." Mayr, Ernst 1991 One Long Argument: Charles Darwin and the Genesis of Modern Evolutionary Thought, Harvard University Press: Cambridge, MA. An easily understandable distillation of Charles Darwin's scientific contributions. 
National Academy of Sciences 1998 Teaching About Evolution and the Nature of Science, National Academy Press: Washington, DC. An engaging, conversational, and well-structured framework for understanding and teaching evolution. Nesse, Randolph, and George C. Williams 1996 Why We Get Sick: The New Science of Darwinian Medicine, Vintage Books: New York. The principle of natural selection as applied to modern-day health and disease. Helps to illustrate evolution as an ongoing phenomenon. Tattersall, Ian 1998 Becoming Human, Harcourt Brace: New York. A description of the current state of understanding about the differences between Neanderthals and Homo sapiens. Weiner, Jonathan 1994 The Beak of the Finch: A Story of Evolution in Our Time, Alfred P. Knopf: New York. Discussion of basic evolutionary principles and how they are illustrated by ongoing evolution on the Galápagos Islands. Whitfield, Philip 1993 From So Simple a Beginning, Macmillan: New York. A large-format, beautifully illustrated book explaining evolution from genetic, fossil, and geological perspectives. A good general introduction for nonspecialists. Zimmer, Carl 1999 At the Water's Edge: Macroevolution and the Transformation of Life, Free Press: New York. Some creatures moved from water to land (the evolution of land vertebrates) and others from land to water (the evolution of whales from land animals). Zimmer clearly explains these two events in the history of vertebrates and what might have brought them about. Evolution: Books for Children and Young Adults Cole, Joanna, and Juan Carlos Barberis 1987 The Human Body: How We Evolved, Illustrated by Walter Gaffney-Kessell, William Morrow and Company: New York. This book traces the evolution of humans, from early prehistoric ancestors to modern tool-users. Grades 4-7. Lauber, Patricia, and Douglas Henderson 1994 How Dinosaurs Came to Be, Simon and Schuster: New York. A description of the ancestors to the dinosaurs. Grades 4-7. Matsen, Brad, and Ray Troll 1994 Planet Ocean: A Story of Life, the Sea, and Dancing to the Fossil Record, 10 Speed Press: Berkeley, CA. Whimsically illustrated tour of history for older kids and adults. Grades junior high to high school. McNulty, Faith 1999 How Whales Walked into the Sea, Illustrated by Ted Lewin, Scholastic Trade: New York. This wonderfully illustrated book describes the evolution of whales from land mammals. Grades K-3. Stein, Sara 1986 The Evolution Book, Workman Publishing Co., Inc.: New York. A hands-on, project-oriented survey of evolution and its mechanisms. Grades 4-8. Troll, Ray, and Brad Matsen 1996 Raptors Fossils Fins and Fangs: A Prehistoric Creature Feature, Tricycle Press: Berkeley, CA. A light-hearted trip through time ("Good Gracious -- Cretaceous!") with similes kids will like ("Pterosaurs -- some as big as jet fighters."). Grades 4-6. Origin of the Universe and Earth Dalrymple, G. Brent 1991 The Age of the Earth, Stanford University Press: Stanford, CA. A comprehensive discussion of the evidence for the ages of the Earth, moon, meteorites, solar system, Galaxy, and universe. Longair, Malcolm S. 1996 Our Evolving Universe, Cambridge University Press: New York. A brief discussion of the origin and evolution of the universe. Silk, Joseph 1994 A Short History of the Universe, Scientific American Library: New York. Popular treatment of the evolution of the universe. Weinberg, Steven 1993 The First Three Minutes: A Modern View of the Origin of the Universe, Basic Books: New York. An explanation of what happened during the Big Bang. 
Evolution and Creationism Controversy Godfrey, Laurie, ed. 1983 Scientists Confront Creationism, W.W. Norton: New York. A collection of articles by scientists analyzing and refuting arguments of creation science. Kitcher, Philip 1982 Abusing Science: The Case Against Creationism, MIT Press: Cambridge, MA. A philosophical as well as scientific analysis of creation science. Matsumura, Molleen 1995 Voices for Evolution, National Center for Science Education, Inc: Berkeley, CA. A collection of statements supporting the teaching of evolution from many different types of organizations: scientific, civil liberties, religious, and educational. Numbers, Ronald 1992 The Creationists: The Evolution of Scientific Creationism, University of California Press: Berkeley, CA. A thorough history of the American creationist movement. Pennock, Robert T. 1999 Tower of Babel: The Evidence Against the New Creationism, MIT Press: Cambridge, MA. A philosopher of science analyzes the newer "intelligent design" theory and "theistic science" creationism. Skehan, James W. 1986 Modern Science and the Book of Genesis, National Science Teachers Association: Washington, DC. Written by a geologist (former Director of the Weston Seismological Observatory) and bible scholar, trained as a Jesuit priest. Strahler, Arthur 1987 Science and Earth History: The Evolution/Creation Controversy, Prometheus Press: Buffalo, NY. A comprehensive analysis of creationist scientific claims. Toumey, Christopher P. 1994 God's Own Scientists: Creationists in a Secular World, Rutgers University Press: New Brunswick, NJ. An anthropologist's view of creationism as a belief system upholding the moral authority of both science and religion. Reviewers This report has been reviewed by individuals chosen for their diverse perspectives and technical expertise, in accordance with procedures approved by the National Research Council's Report Review Committee. The purpose of the independent review is to provide candid and critical comments that will assist the authors and the National Academy of Sciences in making their published report as sound as possible and to ensure that the report meets institutional standards for objectivity, evidence, and responsiveness to the study charge. The contents of the review comments and draft manuscript remain confidential to protect the integrity of the deliberative process. We wish to thank the following individuals for their participation in the review of this report: John Baldeschwieler J. Stanley Johnson Professor and Professor of Chemistry Division of Chemistry and Chemical Engineering California Institute of Technology Pasadena, California John E. 
Dowling Maria Moors Cabot Professor of Natural Science The Biological Laboratories Harvard University Cambridge, Massachusetts Marye Anne Fox Chancellor North Carolina State University Raleigh, North Carolina Wilford Gardner Dean Emeritus College of Natural Resources University of California Berkeley, California Timothy Goldsmith Professor of Biology Department of Molecular, Cellular, and Developmental Biology Yale University New Haven, Connecticut Avram Goldstein Professor of Pharmacology, Emeritus Stanford University Stanford, California Ursula Goodenough Professor Department of Biology Washington University Saint Louis, Missouri Robert Griffiths Professor of Physics Carnegie Mellon University Pittsburgh, Pennsylvania Norman Horowitz Professor Emeritus Division of Biology California Institute of Technology Pasadena, California Susan Kidwell Professor Department of Geophysical Sciences University of Chicago Chicago, Illinois David Pilbeam Henry Ford II Professor of Social Sciences Peabody Museum Harvard University Cambridge, Massachusetts Luis Sequeira J.C. Walker Professor Emeritus Department of Plant Pathology University of Wisconsin Madison, Wisconsin Phillip Tobias Professor Emeritus Department of Anatomical Sciences University of Witwatersrand Medical School Johannesburg, Republic of South Africa And other anonymous reviews. While the individuals listed above have provided many constructive comments and suggestions, responsibility for the final content of this report rests solely with the authoring committee and the National Academy of Sciences. Council of the National Academy of Sciences Bruce Alberts President National Academy of Sciences Washington, DC Mary Ellen Avery Professor of Pediatrics Harvard Medical School Boston, Massachusetts Lewis M. Branscomb Professor Emeritus John F. Kennedy School of Government Harvard University Cambridge, Massachusetts Ralph J. Cicerone Chancellor University of California Irvine, California Marye Anne Fox Chancellor North Carolina State University Raleigh, North Carolina Ralph E. Gomory President Alfred P. Sloan Foundation New York, New York Ronald L. Graham Chief Scientist AT&T Labs Florham Park, New Jersey Jack Halpern Louis Block Distinguished Professor Emeritus Department of Chemistry University of Chicago Chicago, Illinois David M. Kipnis Distinguished University Professor Washington University School of Medicine Saint Louis, Missouri Daniel E. Koshland Jr. Professor in the Graduate School Department of Molecular and Cellular Biology University of California Berkeley, California Peter Raven Director Missouri Botanical Garden Saint Louis, Missouri Sherwood F. Rowland Donald Bren Research Professor of Chemistry and Earth System Science Department of Chemistry University of California Irvine, California William J. Rutter Chairman Chiron Corporation Emeryville, California Luis Sequeira J.C. Walker Professor Emeritus Department of Plant Pathology University of Wisconsin Madison, Wisconsin Carla J. Shatz Investigator Howard Hughes Medical Institute Professor Department of Molecular and Cellular Biology University of California Berkeley, California Jean D. Wilson Charles Cameron Sprague Distinguished Chair in Biomedical Science University of Texas Southwestern Medical Center Dallas, Texas Robert H. Wurtz Chief Laboratory of Sensorimotor Research National Institutes of Health Bethesda, Maryland Credits Front cover and title page: Hurricane Andrew over the Gulf of Mexico, August 1992, NOAA. Back cover: Map of the world by Isidore of Seville [A.D. 
560-636], redrawn and published in 1898 in Mappaemundi: Die ältesten Weltkarten, a six-volume work compiled by Konrad Miller. Library of Congress, Geography and Map Division. page iv: Entrance to National Academy of Sciences building, Carol M. Highsmith, photographer. page v: Marble seal of the National Academy of Sciences, David Patterson, photographer. page vi: Marty Stouffer, 1991/PNI. page x: Young stars, Hubble Space Telescope, NASA. page 3: background: Ken Graham/PNI; insets: photograph of Edwin Hubble: National Academy of Sciences; Hubble Deep Field, Hubble Space Telescope, NASA. page 4: Young stellar disks in infrared, Hubble Space Telescope, NASA. page 6: left: DNA, Dr. A. Lesk, Laboratory of Molecular Biology/Science Photo Library; right: RNA, Ken Eward/Science Source, Photo Researchers, Inc. page 9: Charles Darwin, National Library of Medicine, National Institutes of Health. page 9: Galápagos Islands, Archive Photos, 1994/PNI. page 11: Darwin's finches. Drawing by K. Thalia Grant. From The Beak of the Finch by Jonathan Weiner. 1994 by Jonathan Weiner. Reprinted by permission of Alfred A. Knopf, Inc. page 12: Paria River, Utah. Grand Staircase/Escalante National Monument, Tom Till. pages 12-13: Illustration of layers of sedimentary rock, Joyce Pendola, courtesy Natural History. page 14: Illustration by Leigh Coriale Design and Illustration, adapted from Patterns in Evolution: The New Molecular View by Lewin, Scientific American Library. Used with permission by W.H. Freeman and Company. page 16: top, Ron Sanford, 1994/PNI; bottom left, Marty Stouffer, 1991/PNI; bottom right, Erwin Bauer, Peggy Bauer, 1990/PNI. page 18: Myoglobin, Irving Geis. page 19: Cytochrome c. Illustration by Leigh Coriale Design and Illustration, adapted from the Journal of Molecular Evolution, Vol. I, 37, 1971. pages 20-21: Drawings of Mammalian land ancestor, Balaenoptera, by N. Haver. Drawings of Ambulocetus, Rodhocetus, by N. Haver, Sinauer Associates. page 22: Illustration adapted from The Cambridge Encyclopedia of Life Sciences. Reprinted with permission of Cambridge University Press. page 24: Drawings by Darwen Hennings, Wadsworth Publishing Company. page 36: Detail, Paria River, Utah. Grand Staircase National Monument, Tom Till.
From checker at panix.com Wed Sep 21 01:41:11 2005 From: checker at panix.com (Premise Checker) Date: Tue, 20 Sep 2005 21:41:11 -0400 (EDT) Subject: [Paleopsych] Francis Fukuyama: Human biomedicine and the problem of governance. Message-ID: Francis Fukuyama: Human biomedicine and the problem of governance. Perspectives in Biology and Medicine, Spring 2005 v48 i2 p195(6) I APPRECIATE THE OPPORTUNITY given me by the editors of Perspectives in Biology and Medicine to reflect on my experience on the President's Council on Bioethics and the work it has produced. In light of the fact that I never submitted a personal statement at the time of the publication of the Council's 2002 report Human Cloning and Human Dignity, I particularly welcome this opportunity to clarify my views on this subject. Contrary to the Council's critics, our group from the beginning had a healthy balance of views on the tortured "moral status of the embryo" question. Unsurprisingly for a panel nominated by a pro-life President, there was a strong minority committed to the view that full human status began at conception. But there was another equally strong minority that believed strongly that embryos had no particular moral status and could be cloned or otherwise used for stem cell research. 
I found myself, with several other members of the Council, in between these camps. I believe that human embryos have an intermediate moral status: they are not the moral equivalents of infants, nor are they simply clumps of cells like any other tissue sample that can be used and discarded at will. There are other things that have a similar intermediate moral status. Human cadavers, for example, can be used instrumentally in the training of medical students or for research, but they cannot be disposed of at will and must be treated at all times with a certain degree of respect. For reasons that I lay out more systematically in my book Our Posthuman Future (2002), I believe that full moral status is something that is gradually acquired over time (both developmental and evolutionary). Human beings acquire it gradually during prenatal development, and do not stop acquiring it even at birth. This gradual acquisition of moral status is reflected in the fact that we give full political rights only to adults and not to children. The implication of an embryo's intermediate moral status is that it can be used instrumentally, but only for serious purposes and with a certain degree of respect due an entity that has the potential to become a full human being. I believe that embryos can be used as a source of stem cells for research, but that the process ought to be under social control to ensure that these serious purposes are met. This means, at a minimum, a regulatory system that keeps track of embryos and makes sure that they are not used for purposes other than serious scientific research (e.g., implantation to produce a child). Such a regulatory system would solve the enforcement problem with respect to reproductive cloning, which was one of the reasons I had earlier supported a broad cloning ban. While I have a number of ethical objections to reproductive cloning, I do not oppose research cloning per se. My objections are largely consequential. I am concerned about the precedent that this kind of cloning will have for other types of research further down the road: having permitted the creation of cloned embryos in order to harvest stem cells, will we at some point want to clone fetuses to harvest complete organs and tissues? No one today advocates this, but people's moral principles tend not to be deeply grounded when there are research incentives weighing on the other side. Not long ago those advocating stem cell research swore that they would draw the line at research cloning, but soon found the latter quite acceptable when the need arose. I voted, with a majority of the Council, in favor of a recommendation to the President to support a total ban on reproductive cloning and a four-year moratorium on research cloning. While other members of the Council who shared my views on the intermediate moral status of embryos voted to permit research cloning, I supported the moratorium for a number of prudential reasons. First, it seemed to me that a four-year moratorium would not seriously slow down the pace of research in this area. Supporters of stem cell research and research cloning are guilty of going way beyond what they know about the likely benefits of these activities, and of minimizing the length of time that would be required to realize them. Given President Bush's August 9, 2001, decision that liberalized existing rules on federally funded stem cell research, it would require at least four years to categorize existing lines and fund significant research on them. 
By that point, we would be in a much better position to know exactly how promising this research was, and whether there were other approaches, such as work on adult stem cells, that would yield comparable results. If it became clear at the end of four years that we needed to move ahead, then I would support lifting the moratorium. A second prudential reason for having a four-year moratorium had to do with the length of time needed to set up an appropriate regulatory system. One of my consistent interests during my tenure on the Council has been to promote an updating and modernization of the U.S. regulatory system for human biomedicine, particularly as it relates to embryo research. As noted earlier, my view that human embryos have an intermediate moral status implies that while embryos can be used instrumentally for research, this activity ought to be closely supervised by a public authority. While supporters of research cloning pay lip service to the need for a regulatory framework, they do not take seriously the amount of time and effort required to create a good one. Four years to me was the minimum amount of time needed to establish such a regulatory body; similar efforts in Britain (to create the Human Fertilisation and Embryology Authority, HFEA) and in Canada (the Assisted Reproduction Agency of Canada) had taken even longer. More than two years have passed since the Council issued its cloning report, and nothing has happened. The two bills introduced into Congress in 2001, the first banning both reproductive and research cloning and the second banning only reproductive cloning, remain stuck in the Senate. The Council's proposal for a four-year moratorium, which was offered as a means of breaking this deadlock, was not taken up; there has been no work on designing a new regulatory system; and there has been no change in President Bush's stem cell policy. The Council made a second effort to get around the logjam with its last report Reproduction and Responsibility (2004). That report defined a number of reproductive possibilities that everyone on the Council agreed were morally unacceptable today, and it urged Congress to move forward with possible legislative action in a way that would not prejudice a decision on the acceptability of embryo research or cloning. This too, unfortunately, has fallen afoul of embryo politics and is unlikely to lead to any legislative action. Even the modest steps outlined in Reproduction and Responsibility would have been only the beginning of a more comprehensive regulatory approach that in my view is needed. Technology will continue to move quickly and pose new ethical challenges in the coming years, as well as substantial opportunities for new cures and treatments. Congress cannot intervene legislatively in response to each new technological development that comes along; it does not have the time, interest, or expertise to debate any but the most important. What is needed is the ability to delegate to a regulatory agency responsibility for making decisions on more routine kinds of issues, based on broad guidelines set by Congress. This is the model followed by Britain's HFEA, as well as the regulatory authorities in any number of other industrialized democracies. Our current regulatory system is a patchwork of excellent--indeed, sometimes excessive--regulation in some areas and underregulation in others.
The Food and Drug Administration represents a "gold standard" in the regulation of drugs and medical devices; on the other hand, it does not regulate the practice of medicine and therefore leaves entire areas like assisted reproduction essentially unregulated. In addition, the FDA's statute allows it to regulate only on the basis of safety and efficacy, and not on normative grounds. The National Institutes of Health, by contrast, does have a broader authority to regulate on the basis of ethical concerns, but its jurisdiction extends only to federally funded research. It remains my belief that the United States needs to rethink its entire system for regulating human biomedicine in light of technological advances that are either here today or will arrive over the coming years. As a general rule, we should not invite more regulation than is absolutely necessary so as not to hold back important scientific inquiry. But regulation can also facilitate research and future individual choice by assuring the public that the former is being done responsibly and within the bounds set by the broader community. What a revised regulatory system would look like is well beyond the scope of the present article. I am currently directing a foundation-sponsored study on the governance of human biotechnologies, which will outline options and elaborate a concrete proposal at greater length. Briefly, I believe that the institutional design of a new regulator must address several different concerns. First, the type of decisions delegated to the regulatory authority need to be ones on which there is broad societal consensus, though not necessarily detailed knowledge or agreement. The regulator must be shielded from having to pass judgment on highly controversial issues such as the moral status of embryos, since this is a matter that needs to be adjudicated by the country's central political institutions (i.e., Congress and the courts). The same institution can facilitate or prohibit stem cell research, research cloning, and a host of other procedures. British regulators facilitate research cloning; Canadian regulators prohibit it. But there has to be agreement on the ends that the regulation is meant to serve before there can be delegated authority. Second, there needs to be a broadening of participation in the regulator's decision-making procedures. In a liberal democracy like the United States, the people, through their elected representatives, are ultimately sovereign, but they typically delegate substantial regulatory powers to the epistemic communities involved in particular scientific or industrial sectors. This stands to reason, since the epistemic communities are the ones who possess knowledge about the field and have a strong self-interest in the nature of state oversight. This is nowhere more true than in a field like human biomedicine. A new institutional design will have to continue to vest great authority in the existing epistemic communities (i.e., the scientific research community, the medical community, and the pharmaceutical and biotechnology industries); the question is whether it is possible to broaden participation in agency oversight to include other societal stakeholders, such that the regulator would be more broadly representative of the whole community. Members of the epistemic communities are highly knowledgeable, but their interests do not necessarily represent the interests of society as a whole.
There are many approaches that have been taken to the question of public participation, from the lay members of institutional review boards to the public notice and comment period that is part of the Administrative Procedure Act. In Britain, the government-appointed board of the HFEA is supposed to represent a mixture of views broadly representative of British society (though whether it achieves this end is contested). None of these is fully satisfactory: those members of the general public that do not have a direct interest in the subject are usually insufficiently knowledgeable to participate meaningfully, and there is always the question of who selects the members of the board and how the selection process can be shielded from excessive political influence. In the area of human biomedicine, there is a particular problem that the existing organized interest groups (the scientific research community, the biotech industry, and various pro-life groups) are much more politically polarized than the American public in general. Therefore opening up "public participation" to input from these groups alone will not necessarily provide clear insight into what the American public really wants. Designs for new institutions typically do not spring from the pens of public policy analysts; they are highly dependent on historical precedents and traditions, require extensive discussion and consultation, and are subject to the give-and-take of politics. It was my hope that the President's Council, like Britain's Warnock Commission in the 1980s that paved the way for the HFEA, could have been the vehicle for at least initiating a discussion of new institutions. There are a host of political reasons why this was not possible, beginning with the fact that abortion and embryo politics are much more controversial issues here than in the United Kingdom. (1) It may be that serious legislative action on this front will have to await a scandal or setback on the scale of thalidomide or Willowbrook before a political consensus develops in favor of action. It would be far better, of course, if we could be proactive in anticipating such problems, but that is unfortunately not the way our democracy has always worked in the past. This, then, will have to be a topic to be taken up by future bioethics advisory bodies similar to the President's Council. I would like to conclude by saying something about the nature of debate and discussion on the President's Council. A number of critics, including some members of the Council, have charged that the Council was stacked with pro-life ideologues who were hostile to science, and that opposing views were not fairly represented. As my account of the debate over the cloning report indicates, this was far from the truth. There was always a remarkable diversity of opinion among the Council members. In my personal experience, the Council's chairman, Leon Kass, and his staff were unfailingly evenhanded in their treatment of Council members and bent over backwards to take account of their views in Council documents. Some of the most genuinely interesting discussions that I have ever participated in--not just of bioethics but of philosophic and public policy issues more broadly--took place around the Council's table. One has only to read through the transcripts of the meetings to see the richness of the discussion, and how so much of it transcended the kinds of ideological debates to which Americans have become accustomed.
Several of the Council's publications, including Beyond Therapy (President's Council 2003b) and the Being Human reader (President's Council 2003a), contain writing of a subtlety and nuance that I never thought I would see in a document published by the U.S. government. Whatever my frustrations with not being able to go further on regulatory issues, it has been an honor and a pleasure to serve on the Council and to be part of its work. (1) In saying this, I do not in any way mean to blame only the pro-life side for preventing consideration of this issue. Pro-choice advocates would be equally suspicious of a broad effort by the government to monitor and regulate the area of assisted reproduction. REFERENCES Fukuyama, F. 2002. Our posthuman future: Consequences of the biotechnology revolution. New York: Farrar, Straus and Giroux. President's Council on Bioethics. 2002. Human cloning and human dignity: An ethical inquiry. Washington, DC: President's Council on Bioethics. Repr. New York: Public Affairs, 2002. http://www.bioethics.gov/reports/cloningreport/index.html. President's Council on Bioethics. 2003a. Being human: Readings from the President's Council on Bioethics. Washington, DC: President's Council on Bioethics. President's Council on Bioethics. 2003b. Beyond therapy: Biotechnology and the pursuit of happiness. Washington, DC: President's Council on Bioethics. Repr. New York: HarperCollins, 2003. http://www.bioethics.gov/reports/beyondtherapy/index.html. President's Council on Bioethics. 2004. Reproduction and responsibility: The regulation of new biotechnologies. Washington, DC: President's Council on Bioethics. http://www.bioethics.gov/reports/reproductionandresponsibility/index.html. Bernard L. Schwartz Professor of International Political Economy at the Paul H. Nitze School of Advanced International Studies, Johns Hopkins University; Director of the Human Biotechnology Governance Project, 1619 Massachusetts Avenue NW, Washington, DC 20036. E-mail: fukuyama at jhu.edu. Web site for the Human Biotechnology Governance Project: http://www.biotechgov.org. From checker at panix.com Wed Sep 21 01:42:17 2005 From: checker at panix.com (Premise Checker) Date: Tue, 20 Sep 2005 21:42:17 -0400 (EDT) Subject: [Paleopsych] Foreign Affairs: Francis Fukuyama: Re-Envisioning Asia Message-ID: Francis Fukuyama: Re-Envisioning Asia Foreign Affairs, Jan-Feb 2005 v84 i1 p75 UNDER NEW MANAGEMENT A key task facing the second Bush administration is devising the proper security architecture for eastern Asia. The United States is confronting several immediate problems, including the North Korean nuclear standoff, tension between China and Taiwan, and Islamist terrorism in Southeast Asia. But a forward-looking foreign policy does not simply manage crises; it shapes the context for future policy choices through the creation of international institutions. Eastern Asia has inherited a series of alliances from the early days of the Cold War. These partnerships remain important as a means of providing predictability and deterrence. But a decade and a half after the fall of the Berlin Wall, it is increasingly evident that they do not fit the configuration of politics now taking shape. The White House has an opportunity to create a visionary institutional framework for the region. In the short term, it can do so by turning the six-party talks on North Korea into a permanent five-power organization that would meet regularly to discuss various security issues in the region, beyond the North Korean nuclear threat.
In the long term, Washington will need to consider ways of linking this security dialogue to the various multilateral economic forums now in existence or under consideration, such as the Association of Southeast Asian Nations (ASEAN); the ASEAN-plus-three group, which was formed in the wake of the Asian economic crisis and includes China, Japan, and South Korea; and the developing free-trade areas. Asian multilateralism will be critical not just for coordinating the region's booming economies, but also for damping down the nationalist passions lurking beneath the surface of every Asian country. TIES THAT BIND Unlike Europe, Asia lacks strong multilateral political institutions. Europe has the EU and NATO, as well as groups such as the Organization for Security and Cooperation in Europe (OSCE) and the Council of Europe. Asia's only counterparts are ASEAN, the ASEAN Regional Forum on security matters, and the Asia-Pacific Economic Cooperation forum (APEC)--all of which are far weaker organizations. ASEAN does not include China or the other major players in Northeast Asia, and APEC is no more than a consultative body. Asian security is ensured not by multilateral treaties, but by a series of bilateral relationships centering on Washington, in particular the U.S.-Japan Security Treaty and the U.S.-South Korean relationship. The reasons for this difference between Europe and Asia lie in history: European countries are linked by similar cultural origins and their shared experience in the twentieth century, to the point that they have been relinquishing important elements of national sovereignty to the EU. By contrast, there is a much higher degree of distrust among the major players in Asia. This suspicion is driven partly by a changing power balance, as Japan is eclipsed by China, but primarily by memories of the Pacific war. After 1945, both Germany and Japan needed to convince their neighbors that they were no longer threats. The new West Germany did so by ceding sovereignty to a series of multilateral organizations; Japan did so by ceding sovereignty in security affairs to the United States. Security ties thus took on a hub-and-spoke structure in Asia, with Washington playing a central mediating and balancing role. These bilateral ties remain crucial, particularly the U.S.-Japanese relationship. The U.S. nuclear guarantee and U.S. forces stationed in Japan reassure the rest of Asia that Japan will not rearm in a major way. But this Cold War system of security checks and balances is eroding as new generations take power and face changing environments. The first problem concerns the United States' relationship with South Korea. With the ascendancy of left-wing Presidents Kim Dae Jung and Roh Moo Hyun over the past decade, a new generation of Koreans has grown up seeking reconciliation rather than confrontation with North Korea. Many young South Koreans today regard the United States as a greater threat to their security than the regime of Kim Jong Il. This bizarre perception is based on extraordinary illusions. The North Korean dictatorship is one of the most inhumane and dangerous that has ever existed, but the Bush administration misplayed its hand at the beginning of its first term by undercutting President Kim Dae Jung's "sunshine" policy of Korean reconciliation--triggering a generational revolt among younger South Koreans against Cold War verities.
The reflexive gratitude that South Koreans who lived through the war against the North feel toward the United States is simply absent among the younger generation, which, like its German counterpart, grew up in peace and prosperity. On the surface, the U.S.-South Korean alliance still looks strong: the current Roh Moo Hyun government has sought to demonstrate its commitment to the relationship by sending military forces to Iraq. But misunderstanding could easily emerge and then spiral as Koreans blame the United States for excessive belligerence toward Pyongyang and the United States reacts to what it perceives as South Korean ingratitude. Preoccupied with terrorism and the Middle East, Washington has already repositioned its forces away from the demilitarized zone between the two Koreas and is planning to draw down its forces in the region. The United States' relationship with Japan is also changing in ways that are extremely unsettling to the rest of Asia. Prompted by the nuclear threat from Pyongyang, Tokyo is reconsidering the need for more robust defensive forces. Japan's dispatch of peacekeepers to Iraq and its recent confrontations with the North Korean navy demonstrate a willingness to behave like what opposition leader Ichiro Ozawa has called a "normal country." There is a growing consensus in Japan that Article 9 of its postwar constitution--which dictates that it cannot wage war and cannot maintain armed forces--should be revised, even if the process stretches out over a number of years. Although political ties between Washington and Tokyo are stronger today than they have been in many years, the Cold War father-child dependency will inevitably be replaced by something resembling an alliance of equals. Japan's new posture is to be welcomed. In fact, the United States has been pushing Tokyo to embrace such a new role since the last decade of the Cold War. It is perverse that a country with the world's third-largest economy remains militarily and psychologically dependent on Washington. But the rest of Asia--particularly China and the two Koreas, which were heavily victimized by Japan throughout the first half of the twentieth century--prefers that Japan stay militarily weak. These countries will not welcome the emergence of a stronger and more independent neighbor. Although a Japan with a revised Article 9 should not threaten the rest of Asia, its former victims may not trust in that fact. Japanese rearmament must therefore progress slowly and be managed delicately, with plenty of open communication between Tokyo and other Asian governments. And then there is China. The world's fastest-growing economy (and one of its largest) has thus far remained largely outside any security pact or alliance, excepting its membership in global institutions such as the UN and the World Trade Organization (WTO). But this relative isolation also is likely to change. In recent years, the Chinese have proposed a blizzard of new Asian multilateral economic arrangements, which could ultimately serve security purposes as well. Beijing's plans have included two agreements with ASEAN (ASEAN plus one and ASEAN plus three, with Japan and South Korea), as well as China-ASEAN and East Asian free-trade areas. Clearly, the Chinese are exerting leadership to ensure that their status in the international political arena matches their growing economic power.
Sensing a geoeconomic threat, the Japanese have responded with their own trade pacts, such as the Japan-Singapore free-trade area negotiated by Prime Minister Junichiro Koizumi. China has always presented a great conundrum for the United States. It is the kind of power Washington deals with the least well: a nation that is neither clearly friend nor clearly foe, simultaneously a strategic threat and a critical trade and investment partner. The result has been an inconsistent relationship of pragmatic cooperation punctuated by periodic crises, such as the U.S. bombing of the Chinese embassy in Belgrade in 1999 and the Chinese downing of a U.S. spy plane in 2001. The future of this relationship depends on how Chinese politics evolve: whether China provokes a showdown with Taiwan and uses its economic might to achieve Asian hegemony, or develops into an increasingly pluralistic society in which economic interests dictate continuing good relations with its neighbors. In the meantime, the United States can adopt one of two approaches: either it can seek to isolate China and mobilize the rest of Asia into a coalition to contain growing Chinese power, or it can try to incorporate China into a series of international institutions designed to channel Chinese ambitions and elicit cooperation. Despite its appeal among U.S. conservatives, isolating Beijing is a nonstarter. Even if the United States somehow knew that China were a long-term strategic threat on a par with the former Soviet Union, no U.S. ally would enlist in an anti-Chinese coalition any time in the near future. Japan, South Korea, Australia, and ASEAN members all have complex relationships with China that involve varying degrees of cooperation and conflict; absent overt Chinese aggression, none is going to be willing to jeopardize those ties. Incorporating China into existing global institutions has already proved very effective. In 2001, when the question of Chinese membership in the WTO came up, some argued that China would only subvert the WTO by breaking its rules. As it is, being a part of the WTO has promoted the rule of law by giving Chinese reformers an excuse to make systemic domestic changes. These modifications--which were in China's self-interest anyway--include replacing the traditional system of corrupt, nepotistic business dealings with more transparent and open rules. As Evan Medeiros and Taylor Fravel have pointed out, over the past decade China has shifted its posture from that of an aggrieved victim of Western imperialism to that of an increasingly responsible member of the international community. THE MULTILATERAL IMPERATIVE Asia needs to develop a new set of multilateral organizations in parallel with the existing bilateral organizations. Over time, a new set of institutions can take over many of the functions performed by bilateral agreements. But this new multilateralism cannot come into being without the strong support of the United States, which is why a creative re-evaluation of Asia must be a top priority for George W. Bush in his second term. Washington clearly derives some benefits from the present system of U.S.-centric bilateral alliances. The United States gains unique sanction for its military and political presence in the region and is in a strong position to prevent the emergence of hostile coalitions. Washington also often serves as the conduit for messages and security plans sent from one Asian capital to another, giving it leverage.
Balanced against these considerations is a simple but strong reason for promoting a multilateral system. With the end of the Cold War and the continuing economic development of eastern Asia, power relationships are changing in ways that have unlocked nationalist passions and rivalries. The potential for misunderstanding and conflict among South Korea, Japan, and China will be significant in the coming years--but it can be mitigated if multiple avenues of discussion exist between the states. Several recent incidents have brought latent tensions to the surface. Despite burgeoning trade between China and South Korea, relations recently became strained when government-sponsored Chinese researchers asserted that the ancient kingdom of Koguryo, which 2,000 years ago stretched along the current China-North Korea border, was once under Chinese control. The ensuing fight had to be papered over with a five-point accord negotiated by the countries' foreign ministries. Beijing's motives for allowing publication of the article are unclear, but they may have been related to rising nationalism in China and loose talk in Seoul about founding a "greater Korea" that would include not just the North and the South but also the more than 2 million ethnic Koreans currently living in Manchuria. Meanwhile, the growing economic interdependence of China and Japan has not mitigated nationalist passions, but exacerbated them. At an Asian Cup soccer game in August 2004 in Beijing, Chinese fans screamed, "Kill! Kill! Kill!" at the winning Japanese team, forcing it to flee China. This event followed on the heels of several other ugly and apparently spontaneous displays of anti-Japanese feeling and outrage over the use of hired female "companions" in southern China by 300 Japanese businessmen. Heightening security concerns threaten the Japanese-South Korean relationship and could spark an arms race. Ten years ago, while doing research in Tokyo, I was told by a number of officers in the Japanese Self-Defense Forces that in the event of Korean unification, the combined military of North and South Korea would be close to ten times the size of Japan's. If Korean troop strength did not fall dramatically at that point, they said, Japan would have to take appropriate defensive measures. Not only does this risk remain, but today there is the added factor of North Korea's nuclear weapons--and what a potentially united Korea would do with them. In a recent Tokyo Shimbun poll, 83 of 724 members of the Japanese Diet said publicly that Japan should consider becoming a nuclear power in light of the North Korean threat, an assertion that would have been unthinkable just a few years ago. Asia is not about to descend into a downward spiral of nationalist fervor, but the potential for dangerous miscommunication clearly exists. Establishing a multilateral structure would help greatly by giving Northeast Asia's major powers a forum for talking directly to one another. NATO, with its regular schedule of ministerial meetings, has performed this service in Europe for several decades. Defense ministers lay out their spending plans and force structures, and foreign ministers explain their respective nations' political actions. If the Chinese and Korean governments are worried about the meaning of Japanese rearmament, or if the Japanese and Chinese leaderships are concerned about Korea's postunification intentions, a multilateral forum would give them an opportunity to defuse anxieties and articulate expectations. WHIPLASH The U.S.
stance on multilateralism in Asia has been erratic and contradictory. The United States sponsored organizations such as the Southeast Asia Treaty Organization and APEC. But when Malaysian Prime Minister Mahathir bin Mohamad sought to counter APEC in 1989 with a proposal for an East Asian Economic Caucus that would exclude the United States, it was firmly rejected by Washington as a scheme to keep "white" powers out of the Asian club. During the early 1990s, the Clinton administration promoted an informal Northeast Asia Cooperation Dialogue between the countries that are now participating in the six-party talks. This process continues today, but it has never been elevated to a formal level. Many of the more recent proposals for eastern Asian multilateral institutions have focused on economic issues stemming from the 1997-98 financial crisis. In the view of many eastern Asian countries, the United States and U.S.-influenced international institutions such as the International Monetary Fund (IMF) and the World Bank exploited the crisis to push a pro-market agenda on Asia. When Japan proposed an Asian IMF in 1999, Washington summarily rejected the idea but offered nothing in its place to act as an institutional coordinating mechanism capable of mitigating a future crisis. As a result, nations in the region have been building new multilateral organizations on their own. These include the Chiang Mai Initiative, which allows the central banks from 13 countries to swap reserves in the event of a speculative attack, and the ASEAN-plus-three forum. So far, the United States has either ignored or been indifferent to these developments. In an ironic twist, however, Washington has stumbled into a new Asian multilateral framework: the ongoing six-party talks on Korean security and nuclear weapons involving the United States, North and South Korea, Japan, China, and Russia. Washington embraced this arrangement after Pyongyang, in the wake of the collapse of the 1994 Agreed Framework, insisted on talking directly to the Americans about the future of its nuclear programs. U.S. policymakers correctly saw this as an effort to divide the United States from its South Korean ally and insisted on multilateral talks instead. Over time, another important motive emerged: only China had the economic leverage to bring North Korea to the bargaining table. Indeed, Beijing strong-armed Pyongyang into accepting the six-party format by briefly cutting off its energy supplies. The multilateral security framework that has unexpectedly emerged in Northeast Asia provides an excellent opportunity for institutional innovation. If and when the immediate crisis over North Korea's nuclear program passes, a permanent five-power organization could serve as a direct channel for communication between China, Japan, South Korea, Russia, and the United States. The new group would not be a NATO-like military alliance, but would instead resemble the OSCE--with 55 member states, the world's largest regional security organization--and deal with second-order security issues. PARTY OF FIVE A five-power forum would be particularly useful in dealing with several foreseeable problems. The first is a sudden collapse of the North Korean regime. In the short run, such an implosion would cause huge difficulties: coordinating relief efforts, dealing with refugees, paying for reconstruction, and containing any violence that might ensue.
Over the long run, the political deck in Northeast Asia would be reshuffled: the rationale for the U.S.-South Korean alliance would disappear, and tensions between a unified Korea and Japan and China could rise for reasons already indicated--all of which would be easier to tackle in a pre-existing multilateral setting. Another issue is Japanese rearmament. Japan will not revise Article 9 this year or the next, but the handwriting is on the wall. Although rearmament should not threaten China and Korea, they will have many incentives to hype a new Japanese threat; China, in particular, has used anti-Japanese sentiment to bolster the communist regime's nationalist credentials. Germany, which rearmed and has been moving down a similar path toward "normalcy," moderated the threat by encasing its sovereignty in several international institutions, including NATO, the EU, and the UN. A Japanese return to normality will seem much less threatening if done within a regional security organization as well as a continuing bilateral relationship with the United States. But the new group's relevance wouldn't stop there. A fully nuclear North Korea, a possible Asian arms race, the implications of Chinese military modernization--these are just a few of the potential problems a five-power body could tackle. At the same time, such a permanent forum would not be an appropriate venue for other important matters. It would not help deter a Chinese threat to Taiwan, though it could conceivably provide a forum for resolving a crisis in the Taiwan Strait. Nor would the five-power organization be able to directly influence security problems in Southeast Asia. Whether it may one day do so by admitting more members is a question for the future. There will be substantial practical obstacles to transforming the current six-party talks into a permanent organization. To start, hard-liners in the United States will immediately object that the six-party format has already proved ineffective: after three rounds of meetings in August 2003, February 2004, and June 2004, the negotiations seem to be going nowhere. In fact, the North Koreans used the first meeting to announce their intention to test a nuclear weapon, and they have generally thumbed their noses at U.S. efforts to constrain their nuclear program. Washington hoped to use the multilateral approach to isolate Pyongyang; instead, the North Koreans have turned the tables on the Americans and lined up support from China and South Korea for a more accommodating line. Given this track record, and Chinese ambivalence toward the North Korean threat, why make this particular format permanent? The answer is that the United States needs allies--the same reason the six-party talks came into existence in the first place. Those who are hawkish on North Korea seem to think that once the diplomatic track has played itself out, Washington can use the threat of force to pressure Pyongyang to back down. Although military options at this point seem off the table even for the hawks, hope remains that the United States can somehow bring about North Korean regime change by means other than war; unilaterally impose a tough embargo that will keep nuclear materials bottled up and increase pressure on the North; or frighten the Chinese and the South Koreans into cooperating on a more confrontational policy. By itself, however, the United States does not have sufficient leverage to implement any of these strategies.
Alone, Washington cannot force the North to back away from its nuclear program or cajole Beijing and Seoul into an anti-North Korea alliance, given their domestic policy preferences. The current multilateral negotiations, for all their limitations, remain the best U.S. option. The Bush administration hard-liners began talks with the assumption that no negotiated solution could work, given the failure of the 1994 Agreed Framework, and therefore have never sought to define a realistic new deal. Perhaps if the White House does this during Bush's second term, Pyongyang, rather than Washington, will become the isolated power. The second major obstacle to creating a permanent five-power organization is North Korea itself, which does not belong in any responsible community of nations, given its human rights and security record. Pressing ahead too rapidly to convert narrowly focused six-party negotiations into a permanent five-power organization could undermine the current talks and lead to North Korean obstructionism on all fronts. The trick will be to isolate Pyongyang within the six-party format while making the other five powers comfortable with the prospect of working together over the long term. North Korea's current refusal to return to the talks may even present an occasion for a five-power meeting without Pyongyang. The larger goal aside, this strategy is something Washington should work toward to increase the pressure on Pyongyang. Eventually, the United States may be able to put new issues on the table for the five powers to discuss. If the transition to a permanent five-power structure can somehow be made, other issues will have to be addressed as well. Should other countries in the region, such as India, New Zealand, Australia, or any of the ASEAN members, be added? Should there be an official link between the new group and the ASEAN Regional Forum, or should individual ASEAN states be considered for membership? Finally, there is the question of how a security forum of five powers or more would relate to the Asian multilateral economic groups already taking shape or being proposed, such as the Chiang Mai Initiative or ASEAN plus three. Should the United States support regional economic integration even if it does not have a seat at the table, as it has supported the EU? Or should Washington regard economic multilateralism as a threat and weaken these initiatives in favor of global organizations such as the Bretton Woods institutions or the WTO? Whether the United States likes it or not, the countries of eastern Asia have a strong incentive to increase their formal multilateral economic cooperation: global institutions such as the IMF are distrusted as overly dominated by the United States and unresponsive to Asian concerns. Washington would better serve its interests by supporting and shaping the evolution of these institutions from the outside, rather than by playing an obstructionist role. The United States can cement its formal role in eastern Asia by maintaining its network of bilateral alliances and by working toward a new multilateral security organization. Ultimately, Washington's relationship with Asian multilateral organizations would mirror the relationships it has with the EU and NATO--dealing with one from the outside and the other from the inside.
Whatever multilateral institutions take shape in Asia will never achieve the strength and cohesion of their European counterparts, but the United States should regard them as hedges against the possible unraveling of the existing bilateral security system. CLIMBING OUT The final and perhaps most urgent reason for the Bush administration to re-envision its approach to Asian diplomacy has as much to do with the United States' status in the world as with its standing in eastern Asia. The Iraq war has isolated Washington in unprecedented ways and convinced a large part of the world that the United States--not Islamist terrorism--is the biggest threat to global security. To climb out of this hole, the White House needs to start thinking creatively about legitimacy and international organizations. Given that it has already snubbed the UN and refused to participate in the International Criminal Court or the Kyoto Protocol, Washington must now consider alternative forms of international cooperation that better suit its interests. The United States will be better served by endorsing a series of overlapping and occasionally competitive multilateral organizations than by putting all its eggs in a single basket such as the UN. A permanent five-power organization in eastern Asia would help provide the foundation for the new order in that region--a small building block in a larger multi-multilateral edifice. The idea of permanently institutionalizing the six-party talks has been discussed with increasing frequency in Washington policy circles in recent months. Such an organization will not come about, however, unless President George W. Bush decides to take the initiative to make it happen. The advent of a new term for Bush and his administration provides a fortuitous opportunity to reconceive the United States' long-term political architectures. Being the sole superpower bestows a certain responsibility for the global public good. It means not just exercising hard military power against rogue states, but also shaping the international environment in anticipation of new political demands. The United States stepped up to this challenge after 1945; it should do so again in the post-September 11 world. From checker at panix.com Wed Sep 21 22:25:35 2005 From: checker at panix.com (Premise Checker) Date: Wed, 21 Sep 2005 18:25:35 -0400 (EDT) Subject: [Paleopsych] Newswise: Patriarchal Attitudes, Practices and Discrepancy in Life Expectancy Message-ID: Patriarchal Attitudes, Practices and Discrepancy in Life Expectancy http://www.newswise.com/p/articles/view/514505/ Source: [1]British Medical Journal Released: Tue 13-Sep-2005, 12:50 ET Embargo expired: Wed 14-Sep-2005, 18:05 ET Description Systematic male dominance - patriarchy - explains half the discrepancy in life expectancy between the sexes, suggests research. JOURNAL OF EPIDEMIOLOGY AND COMMUNITY HEALTH [Is patriarchy the source of men's higher mortality? J Epidemiol Community Health 2005; 59: 873-6] Newswise -- Systematic male dominance - patriarchy - explains half the discrepancy in life expectancy between the sexes, suggests research spanning four continents in the Journal of Epidemiology and Community Health. The researchers base their findings on a comparison of the rates of female murders and male death rates from all causes in 51 countries across Europe, Australasia, Asia, North and South America. Rates of violence against women are used to indicate the extent of societal male dominance over women, otherwise known as patriarchy.
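[A rough sketch of how "percent of variation explained" figures like the ones reported below are usually computed -- synthetic numbers and a simple one-predictor least-squares fit, assumed here purely for illustration; this is not the authors' data, model, or code:]

import numpy as np

rng = np.random.default_rng(0)

def pct_variation_explained(x, y):
    # R-squared of a simple least-squares fit of y on x, expressed as a percentage
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 100.0 * (1.0 - residuals.var() / y.var())

n = 51  # the study compared 51 countries
gdp_per_head = rng.lognormal(mean=9.5, sigma=1.0, size=n)        # hypothetical values
female_murder_rate = rng.gamma(shape=2.0, scale=1.5, size=n)     # hypothetical values
male_death_rate = (40.0 - 2.0 * np.log(gdp_per_head)
                   + 4.0 * female_murder_rate
                   + rng.normal(scale=3.0, size=n))              # hypothetical values

print(pct_variation_explained(np.log(gdp_per_head), male_death_rate))
print(pct_variation_explained(female_murder_rate, male_death_rate))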
The wealth of a country, as indicated by the Gross Domestic Product (GDP) per head of the population, was also taken into consideration, as socioeconomic factors are strongly linked to health. The results showed that women lived longer than men in every single country included in the study, with murder rates among both sexes and GDP strongly linked to death rates in men. GDP accounted for 13.6% of the variation in death rates among men. But this was nowhere near as high as the female murder rates, which accounted for 48.8% of the variation in death rates among men. Male murder rates accounted for just 3.5%. The higher the rate of female murders, and therefore the greater the level of patriarchy, the higher were the death rates among men and therefore the shorter their life expectancy, the figures showed. "Our data suggest that oppression and exploitation harm the oppressors as well as those they oppress," conclude the authors, adding that the higher death rate among men, and hence their shorter life expectancy, is "a preventable social condition, which can potentially be tackled through global social policy." They cite the way that children and young people are currently socialised into patriarchal gender roles, such as those emphasising excessive risk taking, aggression, and the suppression of emotions by boys and young men, as examples that need to be tackled. Click here to view the paper in full: [2]http://press.psprings.co.uk/jech/october/873_ch30387.pdf Journal web site: [3]http://jech.bmjjournals.com/ References 1. http://www.newswise.com/institutions/view/?id=5481 2. http://press.psprings.co.uk/jech/october/873_ch30387.pdf 3. http://jech.bmjjournals.com/ From checker at panix.com Wed Sep 21 22:25:43 2005 From: checker at panix.com (Premise Checker) Date: Tue, 20 Sep 2005 21:42:17 -0400 (EDT) Subject: [Paleopsych] NYT: Help for Aging Parents, and for Yourself Message-ID: Help for Aging Parents, and for Yourself http://www.nytimes.com/2005/09/18/business/yourmoney/18lunch.html By CLAUDIA H. DEUTSCH ELINOR GINZLER knows that her parents were lucky - and that she was, too. Her mom died in her sleep at the age of 73 - "20 years too early, but she died the way she wanted to," Ms. Ginzler recalled. Her dad died at 83 - and, even though he had battled cancer for five years, he did not suffer much. "The last time it recurred, they gave him three to six months," Ms. Ginzler said. "He died three weeks later, and he died in his own home." Ms. Ginzler knows that such peaceful, mercifully quick endings are rare. Far more often, children see their parents' dignity, hope and, in too many cases, their assets, stripped away with their strength. And the children suffer, too, barraged with feelings of responsibility, guilt, inadequacy and, yes, resentment, as the plight of their parents comes to dominate their lives. It doesn't have to be that way, Ms. Ginzler says. "You really can help aging parents live their lives to the fullest without fully sacrificing yourself," she said. Her personal luck aside, she knows what she is talking about. At 52, she finds that a growing number of her contemporaries are trying desperately to fit the emotional, physical and financial demands of aging parents into their already oversubscribed lives. Ms. Ginzler's work affords no respite from the issue. Since 1998 she has worked for AARP, a group whose raison d'être is to help people age without angst. She keeps seeing adult children make the same mistakes - like insensitivity to a parent's embarrassment. Ms.
Ginzler recalls one friend who insisted on bathing her ailing father: "She thought it was a sign of love, but to him, being bathed by any woman, let alone his daughter, was a humiliation." Another mistake is not recognizing that parents are stinting on themselves to leave ample estates. "You have to say, 'I wouldn't enjoy your money if I thought you hadn't gotten the painkillers and care you needed,' " she said. Still another error is talking in front of a comatose parent as though he or she were not there. "You don't know that they can't hear and understand, so assume that they can," she said. Perhaps worst of all, Ms. Ginzler said, too many baby boomers mistakenly believe they must choose among their own needs, their newer family's needs and those of their parents. To persuade them differently, Ms. Ginzler and an AARP colleague, Hugh Delehanty, wrote "Caring for Your Parents" (AARP Books, 2005), a guide to help boomers help parents and themselves. "Your parents raised you, and their end-of-life is giveback time," Ms. Ginzler said during a recent lunch at Periyali, a Mediterranean restaurant in the Flatiron district of Manhattan. "But that doesn't mean you can't compromise. Your mom wants you five days, your kid wants you five days, so you keep one for yourself and give each of them two." Ms. Ginzler learned about such time juggling early on. She and her older brother, Edward, grew up in Morris Plains, N.J., raised by a stay-at-home mom and a self-employed accountant father. Every Sunday, as far back as she can remember, the Ginzlers drove to New York, alternating between visiting her mom's parents in Brooklyn and her dad's mom in the Bronx. She recalls her maternal grandfather, who lived with the Ginzlers for a while, as a "health food freak" who lived to be 98. In his last decade or so, whenever he caught a bad cold or felt a new ache, he would wonder aloud if the end was near. "It freaked me out, but he said, 'Hey, I lived a rich life; I'm O.K. with this,' " she said. "He made me understand aging." Still, she never thought that it would become a vocation. She remembers being a 60's-era hippie who wanted to make the world a better place, "but I always liked kids, and figured that's where I'd make the difference," she said. After getting a degree in French literature at the University of Pennsylvania, she earned a master's in counseling at the University of Maryland. Upon graduation in 1975, she began working in group homes for delinquent boys. Along the way, she met Walter Gross, then a meteorologist, now a computer specialist. They married in 1978, and have two sons, both in their 20's. After Ms. Ginzler's first son, Ben, was born, she went to work part time, with an agency that placed older people as volunteers in local programs. In 1987, when her younger son, Daniel, was 3, she quit to become a full-time volunteer at Daniel's nursery school. A year later, she was back in the salaried world, as a care management supervisor for a government agency on aging. In 1998, she answered an ad for a project specialist at AARP. Her current post is director for livable communities, a new AARP group aimed at helping older people do their aging where they wish - be it at home, with their children, in assisted living or even in a nursing home. No matter the place, Ms. Ginzler said, there are many ways to make children feel less burdened, and parents feel less burdensome. Here are some of her suggestions: Be creative if your spouse or child feels neglected when you spend time with Mom or Dad. 
For a spouse, reinstate romantic courtship. "Regress to the Saturday night date, meet at 8, dress up, make it a real event," Ms. Ginzler said. For a child, enlist technology to help you seem to be two places at once. Does her soccer game coincide with a doctor's appointment for your mom? Send a proxy to videotape the game, then watch it together later. Give "caring coupons" as Christmas and birthday presents. Instead of an unneeded coat or coffee pot, give Dad a coupon for a summer's worth of lawn mowing, or give Mom a chit for a year of drives to the doctor. "If it's a gift, they won't feel like you're taking over their life or making them feel like invalids," Ms. Ginzler said. Do not treat every behavioral aberration as incipient dementia. Ms. Ginzler remembers finding her previously dapper dad wearing wrinkled, stained clothes. She feared senility; it turned out that he could no longer navigate the stairs to the basement washing machine. "He needed a cleaning service, not a doctor," she recalled. Set contact rules for family members and friends. Answering their well-intentioned questions about a parent's health can wipe out your spare time. Refuse to take phone calls on Sundays and set up a "phone tree" on other days - so that every person you speak with is responsible for passing along information to several others. You can also send regular e-mail updates. Trust but verify. If your parent is in a hospital or nursing home, or if you have hired a home care aide, "be a frequent and unpredictable visitor," Ms. Ginzler said. If you do not like what you see, talk to the attendant, then to the supervisor, then to the hospital or home administrator. If that does not work, file a complaint with the state department of health, which often runs the agency that licenses nursing homes. Or call (800) 677-1116 - the federal hot line for aging services - to find out how to contact your local long-term-care ombudsman. (Memorize that number: it can direct you to all kinds of area services for the aging.) Apply the lessons you learn to your own inevitable death. Do not just tell others if you do not want to be kept alive artificially; execute a living will, and show it to people other than the person you've designated as your health care proxy. "A spouse or child could too easily say, I know she said 'no heroic measures,' but she's my mom, or my wife, I can't pull the plug," Ms. Ginzler said. And recall the seemingly trivial things that drove you crazy after your parents died. When Ms. Ginzler saw how much time it took to liquidate her parents' home, she began throwing out a lot of her own superfluous items. "I kept thinking, 'I don't want my kids to go through this,' " she said. From checker at panix.com Wed Sep 21 22:25:50 2005 From: checker at panix.com (Premise Checker) Date: Wed, 21 Sep 2005 18:25:50 -0400 (EDT) Subject: [Paleopsych] The Week: The evolution of the Big Easy Message-ID: The evolution of the Big Easy http://www.theweekmagazine.com/article.asp?id=1110 [I love the last line.] Its French Quarter is actually Spanish, many of its streets are below sea level, and many of its former public officials and judges are in jail. How did New Orleans become the nation's most eccentric city? 9/16/2005 Why was the city built below sea level? Founded in a marsh in 1718, Nouvelle-Orléans has always been a victim of its location.
The French chose the site, on a crescent of soggy land extending into the Mississippi River, because it was the last landing place before the river emptied into the Gulf of Mexico; they envisioned it as a booming port serving fur trappers and other traders, and a fitting capital for France's burgeoning North American empire. But in its first four years of existence, the settlement was leveled four times by hurricanes. Engineers begged the French commander, Jean Baptiste Le Moyne de Bienville, to relocate above the swamp, calling it a place where God never intended a city to be built and where only the madness of commercial lust could ever have tempted men to occupy. But de Bienville refused, unwilling to forsake its strategic location. How long was the city under French control? Fewer than 50 years. New Orleans was ceded to Spain in 1763, along with the rest of Louisiana, when France lost the Seven Years' War. (As quid pro quo, Britain took Florida from Spain.) The famous French Quarter, with its ancien régime street names, such as Bourbon and Royal, is actually Spanish in design, the French-built city having burned down in the fire of 1788. Yet French influence lived on in Creole society--a heady mix of French, Spanish, black, and Catholic cultures that made New Orleans unique among American cities. Over the decades, the French influence was reinforced by the influx of aristocrats escaping the 1789 French Revolution and French colonists and slaves fleeing the 1809 slave revolution in Haiti. Was New Orleans always deeply segregated? No. Racial segregation in French and Spanish colonies was far less strict than in British ones, so it became a haven for mulattoes escaping from plantations. As in many of its colonies, Spain fostered the growth of a free black population to fill service, shopkeeping, and other important economic roles (though entry to the clergy, the professions, and government was barred). As a result, the city gave rise to a prosperous class of free blacks, some of them slave-owners themselves. At the start of the 19th century, with most African-Americans in this country in bondage, a third of the black residents of New Orleans were free. When did New Orleans become part of the U.S.? When Napoleon reacquired Louisiana from Spain in 1800, New Orleans returned to French rule, but merely three years later, Napoleon decided to sell all the territory west of the Mississippi to the United States for just $15 million, or 3 cents an acre. (The Louisiana Purchase instantly doubled the size of the U.S.) Britain made a vain effort to seize the city in the War of 1812, but it was repulsed by a ragtag army of Anglo-Americans, Creoles, freemen, and slaves led by Gen. Andrew Jackson at the Battle of New Orleans. The glorious victory inaugurated a golden era: Steamboats laden with cotton and sugar poured into the city, at that point the main port of entry for ships bringing slaves to work the plantations. The population doubled: By 1840 it was 102,000, making it the fourth-largest city in the U.S. Why was this period so glorious? The flood of immigrants in the antebellum era--Irish and Germans added to the mix--contributed to a freewheeling, raucous blend of culture, language, religion, and cuisine that gave New Orleans renown as the city that care forgot. By 1840, both blacks and whites began pouring into the streets every year to celebrate Mardi Gras.
Wealthy white landowners took their mulatto mistresses to mixed-race quadroon balls (for people of one-quarter black ancestry) or octoroon balls (for those one-eighth black), adding to the city's reputation for glamour, tolerance, elegance, and wickedness. Life in the swamp remained hazardous: Another hurricane flooded the city in 1849, and mosquitoes caused 23 separate outbreaks of yellow fever, with an 1853 epidemic killing 8,000 people. But New Orleanians partied on, with even funerals having a festive, musical air. One observer said that residents possessed "a love of life that borders on defiance." When did the good times end? The North's victory in the Civil War made New Orleans what it had never been before: a segregated city. In the South's angry backlash against Reconstruction, in which Northerners legalized interracial marriage and gave blacks full legal rights, segregation and white supremacy permeated all aspects of life. In 1892, Homer Plessy, a 30-year-old octoroon shoemaker, was jailed for sitting in the white carriage of a New Orleans train and refusing to leave. His appeal went all the way to the U.S. Supreme Court. In its infamous Plessy v. Ferguson ruling, the court not only upheld the conviction, but laid down the "separate but equal" doctrine that was used to justify segregation in the South for half a century. Did the city remain segregated? Largely. But in the city's impoverished black neighborhoods, the culture that transformed New Orleans into a tourist mecca was born. Musicians such as Joe "King" Oliver and Louis Armstrong blended the blues, hymns, and dance tunes into a new musical form called jazz; after honing their chops in Chicago in the 1930s, jazz musicians came home and began attracting flocks of tourists. By the 1990s, more than 10 million visitors poured into the city every year, lured by jazz bars, Mardi Gras, the French Quarter's Creole and Cajun restaurants, and the drunken reveling on Bourbon Street. Whether they'll continue to come remains in question, though jazz musician Joe Lastie says the city has a spirit that can't be conquered--or drowned. "You can't keep New Orleans down," he says. "We're always going to bounce back." The capital of corruption For more than a century, New Orleans has been one of the most corrupt cities in the country. The undisputed champion of the political arts was Huey Long, alias Kingfish, whose populist program of road building and free schoolbooks propelled him to the governorship of Louisiana in 1928 and, later, to the U.S. Senate. The levels of graft in his administration were outrageously high even by the standards of Louisiana politics; and though he survived being impeached on charges of bribery, he was assassinated in 1935 by the son-in-law of a political opponent. His legacy remains, and Louisiana ranks third in the number of elected officials convicted of crimes. In recent years, 14 state judges were convicted of corruption, and more than 50 police officers were convicted of crimes that included rape, murder, and robbery. Two are currently on death row. As a former congressman once said: "Half of Louisiana is under water, and the other half is under indictment." From checker at panix.com Wed Sep 21 22:25:57 2005 From: checker at panix.com (Premise Checker) Date: Wed, 21 Sep 2005 18:25:57 -0400 (EDT) Subject: [Paleopsych] The Week: Katrina: Anatomy of a man-made disaster. Message-ID: Katrina: Anatomy of a man-made disaster.
http://www.theweekmagazine.com/article.asp?id=1111 [Note this: "Hundreds of volunteer firefighters were detained in Atlanta for days of training on community relations and sexual harassment."] 9/16/2005 Walter Maestri, emergency manager of Jefferson Parish, La., had dreaded this call for a decade, said Susan Glasser in The Washington Post. It was Friday night, Aug. 26, and Max Mayfield, director of the National Hurricane Center, was on the phone. "Walter," said Mayfield, "get ready. This could be the one." Hurricane Katrina, churning across the Gulf of Mexico, was hungrily sucking up energy from the warm water. It was growing into a real monster, Mayfield said, a Category 4 or 5, and it was headed for New Orleans. Maestri uttered just three words: "Oh, my God." New Orleans, built below sea level, had long expected a storm like Katrina, said Keith O'Brien in The Boston Globe. In theory, the city, state, and federal governments were prepared to evacuate the city and minimize the damage. An elaborate disaster plan existed on paper. But in the critical hours from the first warnings - more than two days before the hurricane made landfall - to the breaching of the city's levees on Monday, government officials at every level - local, state, and federal - misjudged, miscommunicated, and underestimated both the power of the storm and the seriousness of the aftermath. The cascading series of failures left about 80,000 stranded in the city for days, without adequate food or water, and may have contributed to hundreds of deaths. The plan was flawed from the start, said Andrew Martin in the Chicago Tribune. Last year, local, state, and federal officials ran a mock hurricane drill for New Orleans, and it became clear that about one-fourth of New Orleans's 485,000 residents would not be able to evacuate the city on their own. But in July, Mayor C. Ray Nagin and other city officials quietly decided it would be too difficult to provide enough buses or other transportation to evacuate 100,000 people. In a DVD distributed in poor neighborhoods, the city effectively told residents they were on their own. Little help came from Washington, said Mark Thompson in Time. As Katrina bore down on the city, Nagin frantically pleaded for 700 buses from FEMA. FEMA delivered only 100. FEMA director Michael Brown ordered 1,000 federal workers into the region, but gave them two full days to report. Brown's boss, Secretary of Homeland Security Michael Chertoff, also acted with no urgency, said the Chicago Tribune. He waited 32 hours after Katrina hit the city to declare the hurricane an "incident of national significance," which theoretically could have brought the full weight of the federal government to bear. By then, the levees had failed, 80 percent of the city was underwater, and nearly 50,000 people had taken refuge in the Superdome and the city's Convention Center. When FEMA finally clanked into operation, said Eric Lipton in The New York Times, it actually impeded rescue efforts with its bureaucratic red tape. Hundreds of volunteer firefighters were detained in Atlanta for days of training on community relations and sexual harassment. The agency wouldn't let water trucks make a delivery for stranded victims because they didn't have a "tasker number." Evacuation vehicles from neighboring states were denied entry, one sheriff complained, because dispatchers were "working on the paperwork." City officials compounded the problem, said Deroy Murdock in National Review Online, by blocking relief efforts by the Salvation Army and Red Cross.
The two agencies were able and eager to deliver water, food, medicine, and other relief to the city's stranded residents. But local authorities turned them away, fearing that the supplies would encourage residents to settle in for the long haul instead of leaving. As the crisis worsened, said Karen Tumulty in Time, Gov. Kathleen Blanco appeared dazed and unsteady. After the city flooded, she asked President Bush for "everything you got," but without specifics. Blanco simply assumed Washington would know what to do. She turned out to be wrong. President Bush didn't realize how bad things were until Thursday - three days after the city was swamped, said Evan Thomas in Newsweek. The day the levees failed, he went through his prepared schedule, making a speech about Medicare, attending a birthday party with Sen. John McCain, and clowning around with a gift guitar. At a press conference, he told FEMA's Brown, "Brownie, you're doing a great job." Four days into the crisis, Bush finally touched down in the region on Air Force One, and met with Mayor Nagin, Gov. Blanco, and a group of other officials. The meeting was heated, with Nagin slamming his fist on a table and demanding, "Who's in charge?" Officials gave Bush chapter and verse on FEMA's multiple screw-ups. According to one witness, the president just shook his head, as if he couldn't believe what he was hearing. From checker at panix.com Wed Sep 21 22:34:43 2005 From: checker at panix.com (Premise Checker) Date: Wed, 21 Sep 2005 18:34:43 -0400 (EDT) Subject: [Paleopsych] WP: Raunchiness Is Powerful? C'mon, Girls Message-ID: Raunchiness Is Powerful? C'mon, Girls http://www.washingtonpost.com/wp-dyn/content/article/2005/09/17/AR2005091700044_pf.html By Ariel Levy Sunday, September 18, 2005; B05 A few years ago I noticed something weird: Raunch was invading my life. I would turn on the television and see babes in tight, tiny uniforms bouncing up and down on trampolines. I'd change the channel and see Oprah had a stripper on her show, teaching her how to wiggle. I'd walk down the street and see teens and young women -- and the occasional wild fifty-something -- sporting T-shirts emblazoned with the Playboy bunny. Britney Spears was becoming increasingly popular and increasingly unclothed, and her undulating body ultimately became so familiar to me that I felt like we used to go out. Watching these developments, it struck me that men -- the traditional target market for sex in its many forms -- were only half the equation here. It was women who, across the country, were choosing to firm their thighs by attending Cardio Striptease workouts. It was women -- usually young, always unpaid -- who, by agreeing to flash their breasts or make out with their friends on camera, were making a killing for the insanely popular "Girls Gone Wild" franchise. Even my best friend from college, who is the kind of feminist who used to take part in "Take Back the Night" marches on campus, had become fascinated by porn stars and strippers. Apparently, where decades ago the women's movement saw objectification, contemporary women are seeing inspiration. The going wisdom is that we now are liberated enough to get implants, we're empowered enough to start lap dancing. Gloria Steinem and her compatriots were either wrong about these things, or just reacting to them in a different time, when different rules applied. Partly, this more recent attitude is a rebellion against the rigidity of the politically correct '80s.
Partly, it's a corollary of an ever more pervasive American consumerism, which tells us that if only we buy enough stuff -- bigger and better body parts, tinier and tighter clothes -- we will be able to buy passion. The question is, when we pick up that "Porn Star" T-shirt, what are we really buying? Take Jenna Jameson. The most popular adult film performer on planet Earth, she has proved to be one of raunch culture's most effective proselytizers. In 2004, her memoir, "How to Make Love Like a Porn Star," spent six weeks on the New York Times best-seller list. In it, Jameson writes that "being in the industry can be a great experience" because "you can actually become a role model for women." She's definitely right about the second part. I've spent the last two years interviewing women for a book about how they relate to raunch. In that time, what I have heard over and over again is that all of this -- Playboy, porn, strippers, thongs -- is good for us. When I went to Miami on spring break with "Girls Gone Wild," a 19-year-old who'd taken her clothes off for the cameras told me, "It shows confidence . . . the only way I could see somebody not doing this is if they were planning a career in politics." When I went to Oakland, Calif., to talk to high school students, one girl remarked, "To dress the skankiest, that would be the one way we all compete. Since seventh grade, the skankier, the smaller, the more cleavage, the better." In Hollywood, Cardio Striptease creator Jeff Costa proudly told me he had a mother bring a troupe of girls to his class for a sweet 16 party. Which helps explain how a company like Playboy Enterprises, despite a faltering flagship publication that in August announced a $2.3 million second-quarter loss, still turns a comfortable profit. Licensing, for instance, is going extremely well because of the army of women and girls eager to sport the rabbit head logo on their underpants or tank tops or jammies, as an advertisement for their own independence and sass. (When a reporter asked in 2003 if he was concerned about Playboy merchandise being marketed to teenagers, Hugh Hefner replied, "I don't care if a baby holds up a Playboy bunny rattle.") But let's think about this for a second. That little bunny logo that's supposed to symbolize our kicky empowerment is also the emblem of a man who said in 1967, "I do not look for equality between man and woman . . . I like innocent, affectionate, faithful girls" -- and plenty of them. Judging by his new reality series on E!, "The Girls Next Door," Hef's views haven't changed much. He still surrounds himself with a small stable of girlfriends, each of whom must be involved exclusively with him, each of whom has a 9 p.m. curfew. And these are the women who are going to teach us about liberation? Jameson is unwittingly poignant on this dichotomy. In her book, she insists that being in porn is "one of the few jobs for women where you can get to a certain level, look around, and feel so powerful, not just in the work environment but as a sexual being." But there's a reason that Jameson's tome is subtitled, "A Cautionary Tale." When she describes her actual sexual experiences, they sound carnivorous and dissociated: "Sexuality became a tool for so much more than just connecting with a boy I was attracted to," she writes. "I realized it could serve any purpose I needed. It was a weapon I could exploit mercilessly." Not once in that description of her sexual life does she use the word pleasure, to say nothing of love. 
Making love like a porn star -- which is supposed to make us feel so powerful -- doesn't really sound hot or wild or fun, it sounds like a relentless routine. It sounds like a job. And, of course, for Jameson, as for other women in the sex industry, it is. Strippers, porn stars and Playboy Playmates are women whose job it is to fake lust, to imitate actual arousal. We're supposed to imitate an imitation of our own sexuality and call that empowerment? Seriously? It's an amazing stroke of illogic, but somehow we have accepted the proposition. In truth, though, raunch culture is not about a real or unbridled exploration of what turns women on or makes us happy, it's about one particular -- and particularly commercial -- shorthand for sexiness, with an emphasis on performance over pleasure, formula over authenticity. It's ironic that we think of this as adult entertainment, because really, reducing sex to polyester underpants and implants is pretty adolescent. Author's e-mail: [2]ariel at ariellevy.net Ariel Levy's first book, "Female Chauvinist Pigs: Women and the Rise of Raunch Culture" (Free Press) is out this month. From checker at panix.com Wed Sep 21 22:34:49 2005 From: checker at panix.com (Premise Checker) Date: Wed, 21 Sep 2005 18:34:49 -0400 (EDT) Subject: [Paleopsych] Psychology Today: A Taste of Genius Message-ID: A Taste of Genius 2005.7-8 First, the summary from the "Magazine and Journal Reader" feature of the daily bulletin from the Chronicle of Higher Education, 5.8.2 A glance at the current issue of Psychology Today: Food for thought A healthy diet may help to strengthen the brain, and not just the body, writes Lauren Aaronson, a regular contributor to the magazine. In examining why that is so, she also lists eight foods that can help carry people into their 80s, and offers six ways to help keep the brain in shape. Eating cold-water fish, like salmon, is one way, writes Ms. Aaronson, because they contain nutrients that foster supple cellular membranes -- the walls that regulate the flow of molecules into neurons, the main cells of the brain. Absent a healthy flow, she says, the brain has a more difficult time learning new or remembering old information. Sugary food like grapes and honey can also help improve memory, says Ms. Aaronson, because the brain is one of the few organs that draw almost all of their energy from glucose. Experiments have shown, she says, "that a dose of glucose-sweetened lemonade boosts recall of events, words, movements, drawings, and faces, among other things, with effects lasting long enough to get you through a two-hour exam." Carbohydrate-rich foods like breads and pastas have similar effects, she says, much to the chagrin of Atkins dieters. "There's no question that proper feeding primes our brains to reach their fullest potential and maintain their wits for a lifetime," writes Ms. Aaronson. As the line between food and medicine becomes blurred, though, keeping the U.S. Food and Drug Administration's pharmaceutical standards away from foods may become a challenge in the future, she says. _________________________________________________________________ Taste Of Genius, By: Aaronson, Lauren, Psychology Today, Jul/Aug 2005, Vol. 38, Issue 4 FROM AVOCADO TO ZUCCHINI, DIET IS THE NEW DRUG, AS SCIENTISTS DISCOVER THAT THE BENEFITS OF FOODS GO WAY BEYOND BASIC NUTRITION, IT MAY BE THAT YOUR BRAIN HAS THE MOST TO GAIN FROM WHAT YOU EAT. WHEN I WAS 7 YEARS OLD, I READ IN THE SEQUEL to Little Women that oatmeal made you smart.
So I demanded that my mother feed me oatmeal on the day of my spelling test. I ate oatmeal before every test I ever took from elementary school through grad school. I even made my mother mail me oatmeal when I had a big exam during my semester abroad; later I thanked both my mom and Quaker Oats: I got a perfect score. My mother chalked up my success to superstition. But I still believe that the oatmeal itself made a difference. And now it looks like science will prove me and the book's heroine, Jo March, right. Like just about anything we eat, oatmeal influences the way our brains function. Food, after all, gives our bodies the raw materials to build everything from noses to neurons and the ability to operate them efficiently. Some materials make for better outcomes than others, as a flood of recent studies attests. Fibrous oatmeal, for instance, slowly and steadily ushered the cereal's cargo of carbohydrates into my system as glucose. My brain snapped up that sugar from the bloodstream and deployed it both as fuel to power its operations and as a component of key chemical messengers, the very neurotransmitters that carry thoughts and memories. Oatmeal revved up my brain and stabilized my mood, memory and concentration--all without the spiky highs or crashing lows of foods like candy bars that dump their payload of sugar quickly. Unbeknownst to me, my morning oatmeal also supplied ferulic acid. A potent antioxidant lurking in the germ and bran of grains, ferulic acid appears to be a general protector of brain cells, keeping them supple and responsive by nullifying toxins that stiffen them with age--and possibly even reversing some of the cognitive decline of aging. A bowlful of gruel is hardly the fashionable food of choice. But oatmeal sits, however lumpily, at the cutting edge of a revolution in the way we think about food. Nutritional science is demonstrating that some edibles--call them functional foods--do far more than provide essential nutrients for normal maintenance and development. They furnish biologically active components that create high-class physiologic effects, such as disarming toxins, and impart health benefits. They have the capacity to reduce disease risk--"with minimal involvement of health professionals," the nation's food scientists say. "Food has a greater impact on health than previously known," declares a report released last March by the Institute of Food Technologists. "New evidence-based science linking diet to disease and disease prevention" has "blurred the line between food and medicine." Nutrients influence body processes at the molecular level, turning our very genes on and off. The emerging understanding of molecular nutrition, says the IFT, "has the potential to revolutionize diet, nutrition and food products, and health care." Scarcely a week goes by now when scientists don't make some discovery about the health-enhancing properties of food, from the cancer-fighting abilities of brussels sprouts to the anti-Alzheimer's effects of anchovies. For the nation's nutritional scientists, that presents a significant problem: There's no longer a clear boundary between foods and drugs. In some cases--antioxidant-rich cranberry juice, for example--the health claims for nutrients actually have to be soft-pedaled, lest they trigger regulations that require foods to undergo the same approval process as drugs.
The IFT is urging the Food and Drug Administration to adopt reasonable procedures for demonstrating safety and efficacy of foods that are, well, more than foods--what some people call "nutraceuticals." Oatmeal in fact inspired one of the earliest druglike claims for a food. In 1997 the Quaker Oats box began touting the cholesterol-lowering effects of the cereal after the FDA evaluated studies linking whole grains to reductions in the blood fat. Food-boosted health now goes way beyond the heart, all the way to the head. Of course, brain virtuosity also hinges on physical and mental activity, as well as on factors not yet understood. But there's no question that proper feeding primes our brains to reach their fullest potential and maintain their wits for a lifetime. Everyday nutrients are involved in a dazzling array of sophisticated actions at the molecular level. Nevertheless, the latest research on functional foods highlights six strategic lines of defense on the route from mouth to mind. Some functional food superstars make use of more than one mechanism. The Telltale Heart: Jogging the Hind Food doesn't have to reach your head to improve your memory. "There's getting to be a general consensus that what is good for your heart is good for your brain," says James Joseph, a neuroscientist at the Human Nutrition Research Center at Tufts University in Massachusetts. "Your brain accounts for just 2 percent of your body weight, but it eats up about 20 percent of your oxygen intake." Since it's such a hungry organ, your brain depends on a strong cardiovascular system to ferry in supplies. Healthy blood pressure and cholesterol levels keep your arteries clear, leaving them free to transport nutrients to your brain. Clear arteries also reduce risk of stroke, which kills neurons when a blocked or ruptured vessel cuts off blood flow. Any steps you take to improve the delivery of oxygen to your heart--that two-mile jog, for example--automatically pump up your brain. The steps include well-known dietary cardiovascular strengtheners like fiber-rich foods, which lower cholesterol; leafy greens rich in B vitamins and folate, which reduce levels of vessel-harming homocysteine; omega-3 fatty acids, which may prevent arrhythmias; and exercise, which reduces blood pressure, helps control blood-fat levels and keeps weight in check. Brain-boosting steps may also include downright counterintuitive measures. Women who have one alcoholic drink a day--be it wine, beer or that cosmopolitan--have a much lower risk of cognitive decline than either teetotalers or heavy drinkers, according to a recent study at the Harvard School of Public Health. It's the effect of alcohol on your blood. By elevating levels of "good" cholesterol, thereby lowering the risk of stroke, small amounts of alcohol may protect both your cardiovascular system and the brain it serves. Sweet Memory: Oat Cuisine Your brain is the only organ that draws nearly all its energy from glucose, the sugar in ripe grapes and honey, for example, and produced in quantity from pasta and other starchy carbohydrates. That sweet substance also fuels the formation of sweet memories, or at least reliable ones. Lab experiments reveal that a dose of glucose-sweetened lemonade boosts recall of events, words, movements, drawings and faces, among other things, with effects lasting long enough to get you through a two-hour exam. Other research extends these findings from doctored drinks to regular food.
Any carbohydrate-rich dish, such as a bagel or a thick slice of bread, may prompt similar memory enhancements for healthy adults. While a candy bar provides a burst of brain energy, that flash quickly subsides and your blood-sugar level plummets, fueling the desire for another ride on the blood-sugar roller coaster. Both body and brain may do better with foods that score low on the glycemic index, a rating that measures how fast and how high a food increases blood glucose levels after it's consumed. Because they surrender their starches slowly, such fiber-rich foods as barley, beans and Jo March's oatmeal all provide a steadier--and less fattening-stream of energy than a Snickers bar. Over the long term, less-fattening foods benefit body, blood and brain. A healthy weight helps prevent diabetes and impaired glucose tolerance, conditions associated with a decline in cognitive capacity. Compared to glucose-intolerant adults, people with a well-maintained energy supply hang on longer to their memories. Signal Savers: Salmon Tales Neurons, the main cells in your brain, are a bit like New York City: bustling with activity but walled off from the outside by rivers and membranes, respectively. For neurons to survive and contribute to the world, their walls need to let vital goods pass in and out. A healthy cell in its prime has a supple membrane that allows important molecules to cross unimpeded, as if over the Brooklyn Bridge at midnight. As a cell ages, though, the materials in the membrane stiffen and make it less pliable. With bridges and tunnels closing down, toll-boothlike receptors on the surface of the membrane don't collect as many incoming signals from message-carrying neurotransmitters as they should. You might feel such effects as sluggishness in learning the new and recalling the old, poor sleep, lowered pain threshold. Impaired body-temperature regulation could make you uncomfortable in ordinary settings. The neuronal membrane is made up primarily of fats, the very same fats that you fork into your mouth. In fact, your brain has your body's second-highest concentration of fat, right after actual fatty tissue--think butt and belly--itself. The kinds of fats in the foods you eat influence the character of your cell membranes. Cholesterol and saturated fats harden membranes, while essential polyunsaturated fatty acids--omega-3s and omega-6s--render them supple. A healthy mix of essential fatty acids seems to enhance learning by facilitating the smooth passage of signals through neuronal membranes. Most Americans take in plenty of omega-6 fatty acids, via nearly ubiquitous soy and corn oils. But the typical American diet lacks sufficient omega-3s, notes David I. Mostofsky, a neuroscientist at Boston University. Within days after you add omega-3s to your diet, membranes are rejuvenated in composition. Salmon and other coldwater fish and (perhaps less appetizingly) algae are rich sources of the omega-3 fats that your body can utilize directly from food--known as EPA (eicosapentaenoic acid) and DHA (docosahexaenoic acid). Walnuts and flaxseed are rich in a related substance, alpha-linolenic acid, which can be converted more or less efficiently to EPA and DHA in the body. Human breast milk is rich in all three fatty acids, and DHA provides critical insulation for an infant's developing nervous system. 
Under the direction of Gregory Cole, the Alzheimer's Disease Research Center at the University of California at Los Angeles is preparing to test whether omega-3 fatty acids can deter Alzheimer's disease, the number one cause of cognitive decline. So far, the fats have successfully fended off dementia in lab rats. Tests of omega-3s are underway for a variety of other conditions, ranging from sleep disorders and anxiety to depression and impaired immune responses. "Nobody fully understands why they should have so many different functions," muses Mostofsky. Perhaps it's because the neuronal membrane controls access to all other nerve-cell functions. Power Makers: Meat and Milk Once nutrients make their way into a neuron, small furnaces within the cells turn them into energy by combusting glucose and oxygen. The cellular furnaces, known as mitochondria, create energy less efficiently as you age. As mitochondria sputter, cells have less energy to power critical activities like the repair of everyday damage and replication of DNA. The downslide in cell metabolism likely contributes to the cognitive decline seen with age. What's more, mitochondria spew cell-damaging free radicals of oxygen into their environment, and the more they age, the more renegade free radicals they generate. "It's like an old car engine that's spitting out black smoke," explains Bruce Ames, professor of biochemistry and molecular biology at the University of California at Berkeley. Give mitochondria a tune-up, reasons Ames, and they'll turn out more energy and fewer free radicals. He focuses on two food-based mitochondrial boosters: acetyl-L-carnitine and lipoic acid. Acetyl-L-carnitine, a version of an amino acid found in meat, milk and avocado, helps shuttle fatty acids into mitochondria. Lipoic acid, found in beef, spinach and broccoli, quenches free radicals. In cahoots, these compounds in high concentrations combat memory loss in lab rats, Ames reports. He's testing a combination pill for people. Another dietary factor--or, more precisely, lack thereof--may beef up mitochondrial function. Low caloric intake, shown to prolong the life span of rats, also seems to revive mitochondrial function, perhaps because mitochondria with fewer nutrients to burn emit fewer free radicals. It may also do more--promote the growth of entirely new neurons. Mark P. Mattson, chief of the neurosciences lab at the National Institute on Aging and flag bearer for low-cal research, advises people who are overweight to cut back on calories. The benefits are less clear for people who are already of normal weight. Youth Keepers: The Berry Elixir Along with an increase in free radicals, aging can bring a decrease in the body's ability to deal with these errant molecules, which have a particular affinity for attacking cell membranes. And in a vicious cycle, the damage caused by unchecked free radicals--a.k.a. oxidative stress--ends up compounding the aging process. "Every major disease that kills people in this country has an oxidative stress component," says Joseph of Tufts University. For him and other researchers interested in preserving the brain's ability to function, oxidative stress seems like a natural enemy. The scientists also have natural allies: fruits and vegetables. They contain a whole circus troupe of antioxidants, colorful substances that arrest free radicals by nullifying their rogue electrons. Some antioxidants are already familiar faces in nutrition. Vitamins C and E, for instance, are the main antioxidants in the brain.
But they're getting new respect as potential keepers of brain health, since they appear to guard aging neuronal membranes from free radicals. Other antioxidants have only recently stepped into the limelight. Many are natural chemicals that plants have evolved to protect themselves from disease. The major antioxidants in plants, known as flavonoids, come in two main flavors: anthocyanins, in brightly colored fruits, and their colorless cousins, anthoxanthins, found notably in green tea (epigallocatechins) and soybeans (isoflavones). With a rich load of anthocyanins, the most colorful produce also tends to be the most potent. Blueberries, blackberries and cranberries offer more antioxidant protection than the legendarily nutritious brussel sprout. Their antioxidants also have anti-inflammatory actions, soothing overagitated immune cells that may hamper brain activity. In practice, the petite blueberry reverses some of the effects of aging on the brain, boosting short-term memory and spatial learning--reviving the ability, of doddery lab rats to move through mazes. People who eat a cup of blueberries a day perform well on tests of motor skills. Purple grape juice, similarly rich in flavonoids, similarly boosts performance. Grape juice and red wine also promise to stave off heart disease, in part because of the activity of another powerful antioxidant, resveratrol. The garishly yellow spice turmeric excites researchers beyond its vibrancy of color. Its active component, curcumin, has both antioxidant and anti-inflammatory properties; further, it may act as an iron chelator, removing harmful buildup of metal associated with neurodegenerative diseases. Recent lab tests suggest that it directly targets the brain plaques linked to Alzheimer's disease. "It's binding to the supposedly toxic stuff," and disarms it, explains Greg Cole of UCLA. Adding to the excitement surrounding the flashy turmeric seed, Cole is currently evaluating the effectiveness of curcumin as an Alzheimer's treatment. His trials may help explain why India, where turmeric commonly tints curry dishes, has a particularly low rate of Alzheimer's. Whether hot curry dishes or cool glasses of juice, whole foods may hold on to nutrient benefits that concentrated supplements cannot. "There is some evidence that the vitamins from food are more efficient," says D. Allan Butterfield, a biochemist at the Sanders-Brown Center on Aging at the University of Kentucky. Signs of Success: Eggheads Some of the latest research in nutrition zeroes in on the classiest work of the brain: learning and remembering. Nutrients provide the building blocks of neurotransmitters, the chemicals that carry messages between brain cells--messages like "Grow" or "Pssst, pass this on." Both glucose and vitamin B1, for instance, feed production of acetylcholine, a neurotransmitter that appears to play a large role in memory formation. Adequate amounts of B1, as well as the other B vitamins, are necessary for everyday brain health. Still, upping levels of nutrients may boost specific neurotransmitters in your brain, abetting the cell-to-cell signaling that delivers, say, a memory to mind. "If I give you more choline [a fatlike essential nutrient related to the B vitamins, found in eggs, liver and soybeans], your brain cells will immediately make more acetylcholine," says Richard Wurtman, professor of neuropharmacology and director of the Clinical Research Center at Massachusetts Institute of Technology. 
In lab rats a surge in acetylcholine leads to memory improvements a month later. Normal human brain function requires adequate intake of choline, but it's not clear whether extra choline brings extra cognitive benefits--although, interestingly, Alzheimer's patients show drastically reduced acetylcholine levels. Some familiar antioxidants may also boost learning by abetting the signaling that takes place inside brain cells and modulating the expression of genes, influencing the survival and growth of nerve cells. Blueberries appear to help rat brains from within, in ways that even surpass their antioxidant activity, says scientist Joseph of Tufts University. Green tea may similarly protect neurons through related signaling mechanisms. Both edibles are still under investigation, since cell signaling ranks among the newest frontiers at the intersection between nutrition and neuroscience. As hot as the frontier now is, it turns out that the idea that food may be our brain's best medicine isn't novel at all. Jo March, the oatmeal muse who altered my eating habits for life, got there well before the scientists. "Don't worry, my dear," she says in Jo's Boys. "That active brain of yours was starving for good food; it has plenty now." From checker at panix.com Wed Sep 21 22:34:54 2005 From: checker at panix.com (Premise Checker) Date: Wed, 21 Sep 2005 18:34:54 -0400 (EDT) Subject: [Paleopsych] NYT: It Isn't Easy Being a Genius Message-ID: It Isn't Easy Being a Genius http://www.nytimes.com/2005/09/19/opinion/19collins.html By JIM COLLINS Boston LET me begin by making something very clear: I'm not a genius. Tomorrow, 25 people are going to find themselves making similar protestations - at least most of them are - after the MacArthur Foundation announces its latest class of fellows for its so-called genius award. And as someone who once received one of those awards, here's a little insight into what the new fellows experienced over the last few days and what they're going to have to deal with. Two years ago, I received a call. The person on the other end of the line asked if I was Jim Collins and if I was alone. For a moment, I thought I was receiving an obscene phone call. The caller then told me I had been selected as a MacArthur fellow. I laughed, convinced this was another well-orchestrated prank by one of my former college roommates. The caller tried to reassure me, and eventually gave me a number to call to confirm the award. The number had a Chicago area code, the home of the MacArthur Foundation. Maybe this was legit. I called the number and was assured by the folks on the other end that I really had been selected for the award. They then told me I couldn't tell anyone, except my immediate family, until the announcement in a few days. That night, my wife and I told our young children about the award. Our daughter quickly chimed in that she too was a genius, but her brother was not, because he didn't know all of his colors and he could count only to 10. The foundation avoids using the term "genius," and stresses that the award (worth $500,000) is for creativity. Most people, however, play up the genius label. I got my first taste of this the morning the awards were announced. As I left home to get coffee, my neighbor leaned from his second-story window, still in his pajamas, and yelled: "Hey, Jimmy Neutron! I didn't know I was living next to a genius." 
Within days, I began to receive requests from family, friends and strangers to evaluate various pet theories, some well founded, some half-baked, ranging from the therapeutic benefits of magnets to the location of the missing dark matter in the universe. People sought me out for answers and insights, usually prefacing their question with, "You're a genius": "We just saw 'War of the Worlds': are there aliens out there?" "What's the difference between an alligator and a crocodile?" "Does it really take seven years to digest chewing gum?" "How do you weigh someone's soul?" Some wanted my advice on which stocks to buy. Interestingly, the only time I felt like a genius was in 1999 and early 2000, when I was investing in high-tech stocks. In April 2000, I began my "Flowers for Algernon" post-brilliance, post-Nasdaq-bubble decline, and quickly picked up the nickname "idiot," several years before the Red Sox made it popular. So I don't give out stock tips. But here's a little advice to the new fellows. If you're an academic, expect your colleagues to assume that all of your papers are being accepted - little will they know that your work still gets rejected regularly. And expect not to have a lot of fun with board games. Trivial Pursuit has never been the same. My team always assumes it has the competitive advantage. But once I miss a few questions, my teammates turn on me: "What's the matter with you? You're supposed to be a genius!" The other team chimes in: "Clearly, the MacArthur Foundation made a mistake." These unrealistically high expectations extend even to children's games. After my daughter recently beat me at Candyland, she looked at me, disenchanted, and said, "Dad, I thought you were supposed to be a genius." I tried to explain that the MacArthur award was for creativity, not genius, and that my creative work did not encompass the selection of colored cards from a randomly shuffled deck. My daughter just slowly shook her head and walked out of the room. Congratulations new MacArthur fellows, you geniuses. Jim Collins, a bioengineer and 2003 MacArthur fellow, is a professor at Boston University. From Euterpel66 at aol.com Sat Sep 24 14:32:18 2005 From: Euterpel66 at aol.com (Euterpel66 at aol.com) Date: Sat, 24 Sep 2005 10:32:18 EDT Subject: [Paleopsych] women over 30 Message-ID: <12c.66e0092e.3066bd72@aol.com> This was written by Andy Rooney from CBS 60 Minutes. Andy Rooney says: As I grow in age, I value women who are over 30 most of all. Here are just a few reasons why: A woman over 30 will never wake you in the middle of the night to ask, "What are you thinking?" She doesn't care what you think. If a woman over 30 doesn't want to watch the game, she doesn't sit around whining about it. She does something she wants to do. And, it's usually something more interesting. A woman over 30 knows herself well enough to be assured in who she is, what she is, what she wants and from whom. Few women past the age of 30 give a damn what you might think about her or what she's doing. Women over 30 are dignified. They seldom have a screaming match with you at the opera or in the middle of an expensive restaurant. Of course, if you deserve it, they won't hesitate to shoot you, if they think they can get away with it. Older women are generous with praise, often undeserved. They know what it's like to be unappreciated. A woman over 30 has the self-assurance to introduce you to her women friends. 
A younger woman with a man will often ignore even her best friend because she doesn't trust the guy with other women. Women over 30 couldn't care less if you're attracted to her friends because she knows her friends won't betray her. Women get psychic as they age. You never have to confess your sins to a woman over 30. They always know. A woman over 30 looks good wearing bright red lipstick. This is not true of younger women. Once you get past a wrinkle or two,a woman over 30 is far sexier than her younger counterpart. Older women are forthright and honest. They'll tell you right off if you are a jerk if you are acting like one! You don't ever have to wonder where you stand with her. Yes, we praise women over 30 for a multitude of reasons. Unfortunately, it's not reciprocal. For every stunning, smart, well-coiffed hot woman of 30+, there is a bald, paunchy relic in yellow pants making a fool of his 30+ self with some 22-year-old waitress. Ladies, I apologize. For all those men who say, "Why buy the cow when you can get the milk for free", here is an update for you. Nowadays 80% of women are against marriage, why? Because women realize it's not worth buying an entire Pig, just to get a little sausage. Lorraine Rice Believe those who are seeking the truth. Doubt those who find it. ---Andre Gide http://hometown.aol.com/euterpel66/myhomepage/poetry.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at panix.com Sun Sep 25 01:54:37 2005 From: checker at panix.com (Premise Checker) Date: Sat, 24 Sep 2005 21:54:37 -0400 (EDT) Subject: [Paleopsych] NYT: More Horrible Than Truth: News Reports Message-ID: More Horrible Than Truth: News Reports http://www.nytimes.com/2005/09/19/business/media/19carr.html DISASTER has a way of bringing out the best and the worst instincts in the news media. It is a grand thing that during the most terrible days of Hurricane Katrina, many reporters found their gag reflex and stopped swallowing pat excuses from public officials. But the media's willingness to report thinly attributed rumors may also have contributed to a kind of cultural wreckage that will not clean up easily. First, anyone with any knowledge of the events in New Orleans knows that terrible things with non-natural causes occurred: there were assaults, shots fired at a rescue helicopter and, given the state of the city's police department, many other crimes that probably went unreported. But many instances in the lurid libretto of widespread murder, carjacking, rape, and assaults that filled the airwaves and newspapers have yet to be established or proved, as far as anyone can determine. And many of the urban legends that sprang up - the systematic rape of children, the slitting of a 7-year-old's throat - so far seem to be just that. The fact that some of these rumors were repeated by overwhelmed local officials does not completely get the news media off the hook. A survey of news reports in the LexisNexis database shows that on Sept. 1, the news media's narrative of the hurricane shifted. The Fox News anchor, John Gibson, helped set the scene: "All kinds of reports of looting, fires and violence. Thugs shooting at rescue crews. Thousands of police and National Guard troops are on the scene trying to get the situation under control. Thousands more on the way. So heads up, looters." A reporter, David Lee Miller, responded: "Hi, John. As you so rightly point out, there are so many murders taking place. 
There are rapes, other violent crimes taking place in New Orleans." After the interview, Mr. Gibson did acknowledge that "we have yet to confirm a lot of that." Later that night on MSNBC, Tucker Carlson grabbed the flaming baton and ran with it. "People are being raped," he said in a conversation with the Rev. Al Sharpton. "People are being murdered. People are being shot. Police officers being shot." Some journalists did find sources. About 10 p.m. that same evening, Greta Van Susteren of Fox interviewed Dr. Charles Burnell, an emergency room physician who was providing medical care in the Superdome. "Well, we had several murders. We had three murders last night. We had a total of six rapes last night. We had the day before I think there were three or four murders. There were half a dozen rapes that night," he told Ms. Van Susteren. (Dr. Burnell did not return several calls asking for comment.) On the same day, The New York Times referred to two rapes at the Superdome, quoting a woman by name who said she was a witness. It is a fact that many died at the convention center and Superdome (7 and 10 respectively, according to the most recent reports from the coroner), but according to a Sept. 15 report in The Chicago Tribune, it was mostly from neglect rather than overt violence. According to the Tribune article, which quoted Capt. Jeffery Winn, the head of the city's SWAT team, one person at the convention center died from multiple stab wounds and one National Guardsman was shot in the leg. On Sept. 8, Lt. Dave Banelli, head of the sex crimes unit, told a CNN correspondent, Drew Griffin, that his division had reports of two attempted rapes at the Superdome. The caveat here is that rape is a notoriously underreported crime, perhaps more so under the chaotic circumstances. The journalists who dwelled on some of the more improbable stories out of New Orleans might be held to account, except that they eventually received confirmation from both the mayor and the police chief. Appearing on "Oprah" on Sept. 6, Chief Eddie Compass said of the Superdome: "We had little babies in there, some of the little babies getting raped." Mayor C. Ray Nagin concurred: "They have people standing out there, have been in that frickin' Superdome for five days watching dead bodies, watching hooligans killing people, raping people." But the night before, Chief Compass had told The Guardian, "We don't have any substantiated rapes. We will investigate if they come forward." Many of the more toxic rumors seem to have come from evacuees, half-crazed with fear sitting through night after night in the dark. Victims, officials and reporters all took one of the most horrific events in American history and made it worse than it actually was. Although I was not in New Orleans, I was at the World Trade Center towers site the afternoon of Sept. 11, 2001. People had seen unimaginable things, but a small percentage, many still covered in ash, told me tales that were worse than what actually happened. Mothers throwing babies out of the towers, men getting in fights on the ledges, human heads getting blown out of the buildings, all of which took place so high up in the air that it was hard to distinguish the falling humans from the falling wreckage. "There is a timeless primordial appeal of the story of a city in chaos and people running loose," said Carl Smith, a professor of English and American studies at Northwestern University and the author of "Urban Disorder and the Shape of Belief." 
He says that urban chaos narratives offered "the fulfillment of some timely ideas and prejudices about the current social order." In New Orleans, the misinformation extracted a terrible toll in another way. An international press eager to jump on American pathology played the unfounded reports for all they were worth, with hundreds of news outlets regurgitating tales of lawlessness. "They're Going to Kill or Rape Us, Get Us Out" read the headline in The Daily Star, a British tabloid. "Tourist Tells of Murder and Rape," was one headline in The Australian. "Snipers Shoot at Hospitals. Evacuees Raped, Beaten," The Ottawa Citizen reported. "I think that citizens of New Orleans have been stigmatized in a way that is going to make it difficult to be accepted wherever they go," said Jonathan Simon, who teaches criminal law at the University of California, Berkeley. Howard Witt, the Southwest bureau chief of The Chicago Tribune, wrote early on that much of what he had been told, even by public officials, did not check out. And he found himself inundated by rumors. "The Web and talk radio fueled these rumors in the days following the storm, and the evacuees themselves contributed to the misinformation because they were so scared," he said by telephone from Baton Rouge, La. With the grid down and accurate information at a premium, a game of toxic telephone supplanted logic. "I talked to a friend and, after the flood, they heard on the radio that a gang of 400 armed black looters were coming over the bridge to Hanrahan, where he lived," said Ken Bode, a professor of journalism at DePauw University and a former correspondent for NBC. "He and his neighbors were sitting in the street with guns and they decided to load up all they could and caravan out. He said the looters never got there because the National Guard turned them back." There was no band of looters coming their way, but other things that sound too horrible to be true did happen. The widely reported and seemingly fantastical story about a man shooting at a rescue helicopter was confirmed. And the police in Gretna, La., did in fact turn back hundreds of fleeing refugees on the Crescent City Connector. On Sept. 15, The Chicago Tribune had an extensive report detailing how thugs took some measure of control over people and supplies at the convention center. The [3]Washington Post published a vivid article on the same day detailing how grave the situation in the convention center became, but again, the issue of whether people were murdered was left open. And yes, true story, a Louisiana congressman under investigation by the Federal Bureau of Investigation hitched a ride on a National Guard truck to his flood-damaged home to pick up, among other things, a box of documents. A rescue helicopter was diverted from picking up survivors after the truck became stuck. Even now, the real, actual events in New Orleans in the past three weeks surpass the imagination. Who needs urban myths when the reality was so brutal? 
From checker at panix.com Sun Sep 25 01:54:44 2005 From: checker at panix.com (Premise Checker) Date: Sat, 24 Sep 2005 21:54:44 -0400 (EDT) Subject: [Paleopsych] CHE: Disaster Could Have Been Far Worse, Says Sociologist Who Thinks New Orleans 'Lucked Out' Message-ID: Disaster Could Have Been Far Worse, Says Sociologist Who Thinks New Orleans 'Lucked Out' News bulletin from the Chronicle of Higher Education, 5.9.19 http://chronicle.com/free/2005/09/2005091904n.htm [2]By JENNIFER HOWARD The devastation caused by Hurricane Katrina came as a shock but it wasn't a surprise, at least not to Lee Clarke, an associate professor of sociology at Rutgers University at New Brunswick. In Worst Cases: Terror and Catastrophe in the Popular Imagination (University of Chicago Press, forthcoming in November), he lays out what could happen if New Orleans were hit by a major hurricane. He argues that instead of weighing the probabilities -- playing the odds -- policy makers need to take a hard look at worst-case possibilities: What if the Category 5 hurricane does hit a major and highly vulnerable population center? "As a colleague of mine puts it," he says, "things that have never happened before happen all the time." In an interview, he explains why he thinks the city "lucked out" this time around. Q. Explain the difference between what you call possibilistic and probabilistic worst-case thinking. A. Probabilism says, in normal language, What are the chances X is going to happen? If the chance is really low, you don't really need to worry about that very much. That's probabilistic thinking. Possibilistic thinking says, even if the chance is low, what are the consequences if that chance plays out? ... If you're 30,000 feet in the sky and your plane gets into a lot of trouble, it's not the probabilities that matter, it's the possibilities. Q. In the book you say that, contrary to popular opinion, "disasters aren't special. They are as normal as love, joy, triumph, and misery." Why is it useful to be able to imagine the worst as an ordinary part of life? A. It increases the chances that you'll be able to control things. It's a dangerous world. And there's no reason we can't be more prepared than we are. That doesn't mean we have to live on the knife edge of terror and anxiety. If you can afford it, you buy life insurance when you have kids. ... That's not dwelling on the worst case; it's looking at the possibilities square in the face. Q. Is it fair to say you weren't surprised by what happened in New Orleans? A. No, I was not surprised. This was an easy one. ... What's really frightening about this one ... is that it was easy to see this one coming, and many people did. Q. How could we have been better prepared, knowing what the possibilities were? A. The body count, whatever it ends up being, didn't need to be as high. Evacuation could have happened sooner. I'm putting aside the effort that could have been taken to strengthen those levees. ... But this is exactly the problem. Possibilistic thinking says, Let's empty New Orleans on Thursday, 'cause look at this thing -- it's a monster. ... Soon after it crossed over Florida it turned into the storm from hell. But the risk, from a decision-making point of view, is, What if we empty New Orleans and [the storm] takes a right turn and goes back into Tampa? Taking action on the basis of a possibilistic approach is going to cost us. Q. Could it have been even worse? A. Worst cases could always be worse. ... They lucked out. 
I mean, the Mississippi pretty much stayed put. ... I sound macabre, don't I? But I think it's a book of hope. Q. You talk in the book about "disorganizing for risk." How can we better prepare ourselves, as individuals and as a society, to deal with worst-case scenarios? A. It means thinking about preparedness at the organization level, in our social networks, in our communities. ... We need our organizations to be more prepared, but we also need to expand our conception about what critical infrastructure is. We usually think of it in engineering terms: We need to protect the power grid, the water supply. And all of that is true. But we're a highly interdependent society. ... In central New Jersey, where I live, some of us worry about chemical plants. ... If something should happen, most likely the safest thing for most of us to do is shelter in place. That means the first-grade teacher, she becomes the first responder. Q. What role should government play in disaster preparation and response? A. We need government to be prepared, as much as it can be, especially when events like Katrina come along. You have to have large organizations involved in the response, just because the tasks are so overwhelming. ... But bureaucracies are inherently normal. They're just not organized to deal with extreme events. ... Command and control doesn't work in disasters. Assuming that big organizations or the military ... are always going to ride to the rescue is a dangerous, dangerous assumption. Q. In New Orleans, did social networks fail along with bureaucracy? A. Lots of things failed. Poverty, that's just so important. The larger principle there is -- and again it flows from seeing disaster as normal -- that we die as we live, in patterns. So 75 percent of the victims at the World Trade Center collapse are middle class ... white men. You wouldn't call that racial or institutional discrimination, but it was certainly a pattern. ... We have very strong evidence in disaster research that class is very important. ... It's not everything, but it's not nothing. Q. If we're always imagining the worst, won't we live paralyzed by fear? A. I'm not suggesting that we throw out probabilism, only that we have a balance. ... People can handle a lot more scary things than we give them credit for. Q. What keeps you up nights? A. Worrying if I'm going to be interesting in my "Introduction to Sociology" class tomorrow. [Laughs.] I worry a little bit about near-earth objects because I think there are very few institutional interests that will push to pay attention to that. I worry about bird flu. It's only a matter of time. And I worry about our trains -- the most prosaic of technologies -- we're utterly dependent on them, and they're just a disaster waiting to happen. Or a target waiting to happen. A single chlorine car near Los Angeles puts four million people at risk. References 2. 
mailto:jennifer.howard at chronicle.com From checker at panix.com Sun Sep 25 01:54:49 2005 From: checker at panix.com (Premise Checker) Date: Sat, 24 Sep 2005 21:54:49 -0400 (EDT) Subject: [Paleopsych] CHE: Hurricanes Have Grown More Intense Since 1970, Researchers Say, and Global Warming Is a Prime Suspect Message-ID: Hurricanes Have Grown More Intense Since 1970, Researchers Say, and Global Warming Is a Prime Suspect News bulletin from the Chronicle of Higher Education, 5.9.19 http://chronicle.com/free/2005/09/2005091605n.htm [2]By RICHARD MONASTERSKY The number of intense hurricanes developing around the globe has climbed markedly in the past 35 years, according to researchers who mined storm data from the tropical ocean basins and are publishing their findings in a paper in today's issue of Science. While the total number of tropical cyclones has waxed and waned over the decades with no overall change, the proportion that reach the status of Category 4 and 5 hurricanes -- the strongest storms -- has climbed from 20 percent to 35 percent since 1970, according to Peter J. Webster, a professor of earth and atmospheric sciences at the Georgia Institute of Technology, and his colleagues. The researchers say they cannot tell whether the increase is part of a natural cycle or an indication that global warming has altered the behavior of storms. But they note that the temperature of the sea surface has climbed in all tropical ocean regions over the same period and that such changes, in theory, could spawn stronger storms. Judith A. Curry, chairwoman of Georgia Tech's School of Earth and Atmospheric Sciences, says that when researchers see sea-surface temperatures and hurricane intensity "relentlessly rising," it gives them confidence that "these two things are connected and that there is probably a substantial contribution from greenhouse warming and it's not just natural variability." "This could be a very long-period cycle, and it could go down in the next 30 years," says another author of the paper, Greg J. Holland, director of the Mesoscale and Microscale Meteorology Division at the National Center for Atmospheric Research. "Unfortunately I don't think that." The new findings are consistent with a study, published last month in Nature by Kerry A. Emanuel, an atmospheric scientist at the Massachusetts Institute of Technology ([3]The Chronicle, September 8). Mr. Emanuel reported that the total power released by tropical storms in the North Atlantic and North Pacific Oceans had increased substantially in the past 30 years, a trend that dovetails with the changes expected to occur because of global warming. Forecasts using computer climate models have suggested that warming of the tropical oceans could spur the growth of more strong hurricanes, such as Katrina, which was a Category 4 storm when it made landfall. More Katrinas Mr. Webster and his colleagues say the random nature of hurricanes makes it impossible to discern whether the long-term change they have identified has influenced any one storm. But the trend they detected suggests there are more Katrina-type storms now than in the past. "Who knows?" he asks. "Had this trend not been there, then Katrina may have been a Category 2 or a Category 3 and done a lot less damage." Some scientists have criticized Mr. Emanuel's report on hurricane power, and the new paper is also likely to generate a scientific storm. Christopher W. 
Landsea, science-and-operations officer at the National Hurricane Center, in Miami, says the new study relies on incomplete storm data that were collected using different techniques over the years. He questions the veracity of the reported increase in intense storms. "My conclusion is that it's an artifact of the database," he says. Mr. Webster and his colleagues analyzed the number, intensity, and duration of tropical storms. They chose to limit their study to the period of satellite storm observations, which began in 1970. For each storm, meteorologists used the satellite images to estimate wind speeds by examining the cloud patterns in the storm and characteristics of the cyclone's eye. Mr. Landsea says that the techniques and the satellites have changed with time, making it difficult to identify any trend over the decades in storm wind speeds. "They didn't take into account the changes in methodology," he says of Mr. Webster and his colleagues. Unlike other ocean basins, however, the Atlantic has good wind measurements taken by aircraft that flew through the center of the hurricanes. Mr. Landsea notes that the Atlantic data show the smallest change over time. The most intense storms went from being 20 percent of the total in the late 1970s and 1980s to being 25 percent of the total from 1990 through 2004. That change, he says, is consistent with a cyclical pattern in hurricane activity seen in the Atlantic since the late 1920s. But Mr. Holland responds that his team took into account the changes in the data. He also notes that storms have grown more intense in all ocean basins, including the well-studied Atlantic and the eastern North Pacific, where meteorologists have some aircraft measurements to check against the satellite data. "We've done the best we can," he says. "If it is a factor of the database, it is a very, very remote chance that is the case." References 2. mailto:rich.monastersky at chronicle.com 3. http://chronicle.com/daily/2005/09/2005090803n.htm From checker at panix.com Sun Sep 25 01:54:59 2005 From: checker at panix.com (Premise Checker) Date: Sat, 24 Sep 2005 21:54:59 -0400 (EDT) Subject: [Paleopsych] The Times: Psychopaths Make The Best Market Traders Message-ID: Psychopaths Make The Best Market Traders http://www.timesonline.co.uk/article/0,,542-1786905,00.html September 19, 2005 Industrial psychology With the right diagnosis the job is assured The jokey maxim is as tired as it is common: "You don't have to be mad to work here, but it helps." Months of exhaustive research by three US universities have now proved the aphorism true. Psychopaths, they found, make the best market traders. Normally we associate the condition with a conveniently deranged murderer who is a television staple. Psychopaths are people unable to empathise with human emotion, those who feel no qualms of guilt or flashes of sympathy. Who better, therefore, to invest our money or trade our shares? No caution, hesitancy, fear of risk or reckless enthusiasm. The American researchers found that the suppression of emotion, usually the result of a brain lesion, also creates splendid lawyers and company chief executives. Shareholders will applaud the company boss who looks on redundancies with equanimity or is sufficiently conscience-free to market the latest execrable product while claiming, straight-faced, that it is a boon to mankind. Psychiatrists have a glowing future as career guidance officers. No longer need patients worry about a diagnosis of neurosis:
let them find jobs as clergy, tourist guides or anything where a little extra concern would help. Pyromaniac firemen would enjoy job satisfaction, agoraphobics would make fine town planners, and anyone with an addictive personality is already far down the track to becoming a supermodel. Luckiest of all are those with Munchausen's syndrome: a future awaits as university researchers. And in the meantime, give your investment advisers psychometric tests: if they pass, sack them. From checker at panix.com Sun Sep 25 01:55:04 2005 From: checker at panix.com (Premise Checker) Date: Sat, 24 Sep 2005 21:55:04 -0400 (EDT) Subject: [Paleopsych] New Statesman: John Pilger: The Rise of the Democratic Police State Message-ID: John Pilger: The Rise of the Democratic Police State http://www.thetruthseeker.co.uk/www.newstatesman.co.uk August 18, 2005 [Thanks to Laird for this.] Thomas Friedman is a famous columnist on the New York Times. He has been described as "a guard dog of US foreign policy". Whatever America's warlords have in mind for the rest of humanity, Friedman will bark it. He boasts that "the hidden hand of the market will never work without a hidden fist". He promotes bombing countries and says world war three has begun. Friedman's latest bark is about free speech, which his country's constitution is said to safeguard. He wants the State Department to draw up a blacklist of those who make "wrong" political statements. He is referring not only to those who advocate violence, but those who believe American actions are the root cause of the current terrorism. The latter group, which he describes as "just one notch less despicable than the terrorists", includes most Americans and Britons, according to the latest polls. Friedman wants a "War of Ideas report" which names those who try to understand and explain, for example, why London was bombed. These are "excuse makers" who "deserve to be exposed". He borrows the term "excuse makers" from James Rubin, who was Madeleine Albright's chief apologist at the State Department. Albright, who rose to secretary of state under President Clinton, said that the death of half a million Iraqi infants as a result of an American-driven blockade was a "price" that was "worth it". Of all the interviews I have filmed in official Washington, Rubin's defence of this mass killing is unforgettable. Farce is never far away in these matters. The "excuse makers" would also include the CIA, which has warned that "Iraq [since the invasion] has replaced Afghanistan as the training ground for the next generation of 'professionalised' terrorists." On to the Friedman/Rubin blacklist go the spooks! Like so much else during the Blair era, this McCarthyite rubbish has floated across the Atlantic and is now being recycled by the prime minister as proposed police-state legislation, little different from the fascist yearnings of Friedman and other extremists. For Friedman's blacklist, read Tony Blair's proposed database of proscribed opinions, bookshops, websites. The British human rights lawyer Linda Christian asks: "Are those who feel a huge sense of injustice about the same causes as the terrorists - Iraq, Afghanistan, the war on terrorism, Guantanamo Bay, Abu Ghraib - to be stopped from speaking forthrightly about their anger? Because terrorism is now defined in our law as actions abroad, will those who support liberation movements in, for example, Kashmir or Chechnya be denied freedom of expression?"
Any definition of terrorism, she points out, should "encompass the actions of terrorist states engaged in unlawful wars." Of course, Blair is silent on western state terrorism in the Middle East and elsewhere; and for him to moralise about "our values" insults the fact of his blood-crime in Iraq. His budding police state will, he hopes, have the totalitarian powers he has longed for since 2001 when he suspended habeas corpus and introduced unlimited house arrest without trial. The Law Lords, Britain's highest judiciary, have tried to stop this. Last December, Lord Hoffmann said that Blair's attacks on human rights were a greater threat to freedom than terrorism. On 26 July, Blair emoted that the entire British nation was under threat and abused the judiciary in terms, as Simon Jenkins noted, "that would do credit to his friend Vladimir Putin". What we are seeing in Britain is the rise of the democratic police state. Should you be tempted to dismiss all this as esoteric or merely mad, travel to any Muslim community in Britain, especially in the north west, and sense the state of siege and fear. On 15 July, Blair's Britain of the future was glimpsed when the police raided the Iqra Learning Centre and book store near Leeds. The Iqra Trust is a well-known charity that promotes Islam worldwide as "a peaceful religion which covers every walk of life." The police smashed down the door, wrecked the shop and took away anti-war literature which they described as "anti-western". Among this was, reportedly, a DVD of the Respect Party MP George Galloway addressing the US Senate and a New Statesman article of mine illustrated by a much-published photograph of a Palestinian man in Gaza attempting to shield his son from Israeli bullets before the boy was shot to death. The photograph was said to be "working people up", meaning Muslim people. Clearly, David Gibbons, this journal's esteemed art director, who chose this illustration, will be called before the Blair Incitement Tribunal. One of my books, The New Rulers of the World, was also apparently confiscated. It is not known whether the police have yet read the chapter that documents how the Americans, with help from MI6 and the SAS, created, armed and bankrolled the terrorists of the Islamic Mujahideen, not least Osama Bin Laden. The raid was deliberately theatrical, with the media tipped off. Two of the alleged 7 July bombers had been volunteers in the shop almost four years ago. "When they became hardliners," said a community youth worker, "they left and have never been back and they've had nothing to do with the shop." The raid was watched by horrified local people, who are now scared, angry and bitter. I spoke to Muserat Sujawal, who has lived in the area for 31 years and is respected widely for her management of the nearby Hamara Community Centre. She told me, "There was no justification for the raid. The whole point of the shop is to teach how Islam is a community-based religion. My family has used the shop for years, buying, for example, the Arabic equivalent of Sesame Street. They did it to put fear in our hearts." James Dean, a Bradford secondary school teacher, said, "I am teaching myself Urdu because I have multi-ethnic classes, and the shop has been very helpful with tapes." The police have the right to pursue every lead in their hunt for bombers, but scaremongering is not their right.
Sir Ian Blair, the Metropolitan Police Commissioner who understands how the media can be used and spends a lot of time in television studios, has yet to explain why he announced that the killing in the London Underground of the Brazilian Jean Charles de Menezes was "directly linked" to terrorism, when he must have known the truth. Muslim people all over Britain report the presence of police "video vans" cruising their streets, filming everyone. "We have become like ghettoes under siege," said one man too frightened to be named. "Do they know what this is doing to our young people?" The other day Blair said, "We are not having any of this nonsense about [the bombings having anything] to do with what the British are doing in Iraq or Afghanistan, or support for Israel, or support for America, or any of the rest of it. It is nonsense and we have to confront it as that." This "raving", as the American writer Mike Whitney observed, "is part of a broader strategy to dismiss the obvious facts about terror and blame the victims of American-British aggression. It's a tactic that was minted in Tel Aviv and perfected over 37 years of occupation. It is predicated on the assumption that terrorism emerges from an amorphous, religious-based ideology that transforms its adherents into ruthless butchers." Professor Robert Pape of the University of Chicago has examined every act of suicide terrorism over the past 25 years. He refutes the assumption that suicide bombers are mainly driven by "an evil ideology independent of other circumstances." He said, "The facts are that since 1980, half the attacks have been secular. Few of the terrorists fit the standard stereotype... Half of them are not religious fanatics at all. In fact, over 95 per cent of suicide attacks around the world [are not about] religion, but a specific strategic purpose - to compel the United States and other western countries to abandon military commitments on the Arabian Peninsula and in countries they view as their homeland or prize greatly... The link between anger over American, British and western military [action] and al-Qaeda's ability to recruit suicide terrorists to kill us could not be tighter." So we have been warned, yet again. Terrorism is the logical consequence of American and British "foreign policy" whose infinitely greater terrorism we need to recognise, and debate, as a matter of urgency. First published in the _New Statesman_ Last updated 23/08/2005 From checker at panix.com Sun Sep 25 01:55:09 2005 From: checker at panix.com (Premise Checker) Date: Sat, 24 Sep 2005 21:55:09 -0400 (EDT) Subject: [Paleopsych] NYT: Intelligence in the Internet age Message-ID: Intelligence in the Internet age http://www.nytimes.com/cnet/CNET_2100-11395_3-5869719.html Stefanie Olsen, Staff Writer, CNET News.com It's a question older than the Parthenon: Do innovations and new technologies make us more intelligent? A few thousand years ago, a Greek philosopher, as he snacked on dates on a bench in downtown Athens, may have wondered if the written language folks were starting to use was allowing them to avoid thinking for themselves. Today, terabytes of easily accessed data, always-on Internet connectivity, and lightning-fast search engines are profoundly changing the way people gather information. But the age-old question remains: Is technology making us smarter? Or are we lazily reliant on computers, and, well, dumber than we used to be?
"Our environment, because of technology, is changing, and therefore the abilities we need in order to navigate these highly information-laden environments and succeed are changing," said Susana Urbina, a professor of psychology at the University of North Florida who has studied the roots of intelligence. If there is a good answer to the question, it probably starts with a contradiction: What makes us intelligent--the ability to reason and learn--is staying the same and will never fundamentally change because of technology. On the other hand, technology, from pocket calculators to the Internet, is radically changing the notion of the intelligence necessary to function in the modern world. Take Diego Valderrama, an economist with the Federal Reserve Bank in San Francisco. If he were an economist 40 years ago, he may have used a paper, pencil and slide rule to figure out and chart by hand how the local economy might change with a 1 percent boost in taxes. But because he's a thoroughly modern guy, he uses knowledge of the C++ programming language to create mathematical algorithms to compute answers and produce elaborate projections on the impact of macroeconomic changes to work forces or consumer consumption. Does that mean he's not as bright as an economist from the 1950s? Is he smarter? The answer is probably "no" on both counts. He traded one skill for another. Computer skills make him far more efficient and allow him to present more accurate--more intelligent--information. And without them, he'd have a tough time doing his job. But drop him into the Federal Reserve 40 years ago, and a lack of skill with the slide rule could put an equal crimp on his career. Intelligence, as it impacts the economist Valderrama, is our capacity to adapt and thrive in our own environment. In a Darwinian sense, it's as true now as it was millions of years ago, when man's aptitude for hearing the way branches broke or smelling a spore affected his power to avoid predators, eat and survive. But what makes someone smart can vary in different cultures and situations. A successful Wall Street banker who has dropped into the Australian Outback likely couldn't pull off a great Crocodile Dundee impression. A mathematical genius like Isaac Newton could be--in fact, he was--socially inept and a borderline hermit. A master painter? Probably not so good at balancing a checkbook. What's undeniable is the Internet's democratization of information. It's providing instant access to information and, in a sense, improving the practical application of intelligence for everyone. Nearly a century ago, Henry Ford didn't have the Internet, but he did have a bunch of smart guys. The auto industry pioneer, as a parlor trick, liked to claim he could answer any question in 30 minutes. In fact, he had organized a research staff he could call at any time to get him the answer. Today, you don't have to be an auto baron to feign that kind of knowledge. You just have to be able to type G-O-O-G-L-E. People can in a matter of minutes find sources of information like court documents, scientific papers or corporate securities filings. "The notion that the world's knowledge is literally at your fingertips is very compelling and is very beguiling," said Vint Cerf, who co-created the underlying architecture of the Internet and who is widely considered one of its "fathers." 
What's exciting "is the Internet's ability to absorb such a large amount of information and for it to be accessible to other people, even if they don't know it exists or don't know who you are." Indeed, Doug Engelbart, one of the pioneers of personal computing technology in the 1960s, envisioned in the early '60s that the PC would augment human intelligence. He believes that society's ability to gain insight from information has evolved with the help of computers. "The key thing about all the world's big problems is that they have to be dealt with collectively," Engelbart said. "If we don't get collectively smarter, we're doomed." The virtual memory According to at least one definition, intelligence is the "ability to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn." Yet intelligence is not just about book learning or test scores; it also reflects a deeper understanding of the world. On average, people with high IQs are thought to live longer, earn more money, process information faster and have larger working memories. Yet could all this information provided by the Internet and gadgets dampen our motivation to remember anything? Working with the Treo handheld computing device he helped create, Jeff Hawkins can easily recount exactly what he did three years ago on Sept. 8, factor 9,982 and Pi, or describe a weather system over the Pacific Ocean. But without his "smart" phone, he can't recall his daughter's telephone number offhand. It's a familiar circumstance for people living in the hyper-connected Internet age, when it has become easier to program a cell phone or computer--instead of your brain--to recall facts or other essential information. In some sense, our digital devices do the thinking for us now, helping us with everything from calendar scheduling and local directions to in-depth research and "Jeopardy"-like trivia. "It's true we don't remember anything anymore, but we don't need to," said Hawkins, the co-founder of Palm Computing and author of a book called "On Intelligence." "We might one day sit around and reminisce about having to remember phone numbers, but it's not a bad thing. It frees us up to think about other things. The brain has a limited capacity, if you give it high-level tools, it will work on high-level problems," he said. Only 600 years ago, people relied on memory as a primary means of communication and tradition. Before the printed word, memory was essential to lawyers, doctors, priests and poets, and those with particular talents for memory were revered. Seneca, a famous teacher of rhetoric around A.D. 37, was said to be able to repeat long passages of speeches he had heard years before. "Memory," said Greek playwright Aeschylus, "is the mother of all wisdom." People feared the invention of the printing press because it would cause people to rely on books for their memory. Today, memory is more irrelevant than ever, argue some academics. "What's important is your ability to use what you know well. There are people who are walking encyclopedias, but they make a mess of their lives. Getting a 100 percent on a written driving test doesn't mean you can drive," said Robert Sternberg, dean of Arts and Sciences at Tufts University and a professor of psychology. Tomorrow: A look at what makes us smart in the Internet age. And what happens when the lights go out? 
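[To make the slide-rule contrast above concrete: the kind of projection the article describes can be sketched in a few lines of the C++ it mentions. This is a hypothetical toy model, not Valderrama's or the Federal Reserve's actual method, and every number in it is invented for illustration.]

// Toy projection: effect of a 1 percent tax increase on a consumption index.
// Hypothetical sketch only; all inputs are assumed values, not real data.
#include <iostream>
#include <vector>

int main() {
    const double baseline_income = 100.0;  // income index in year 0 (assumed)
    const double tax_increase    = 0.01;   // the 1 percent tax boost discussed above
    const double mpc             = 0.9;    // assumed marginal propensity to consume
    const double income_growth   = 0.02;   // assumed 2 percent annual income growth
    const int    years           = 5;      // assumed projection horizon

    // Simple rule: households spend a fixed share (mpc) of disposable income,
    // which starts lower because of the tax increase and then grows each year.
    double disposable = baseline_income * (1.0 - tax_increase);
    std::vector<double> projection;
    for (int year = 0; year < years; ++year) {
        projection.push_back(mpc * disposable);
        disposable *= (1.0 + income_growth);
    }

    for (int year = 0; year < years; ++year) {
        std::cout << "year " << year << ": projected consumption index "
                  << projection[year] << "\n";
    }
    return 0;
}

[The point is not the model, which is deliberately crude, but that a scripted projection like this can be rerun instantly for any tax rate or growth assumption, where the 1950s economist would have recomputed each scenario by hand.]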
From checker at panix.com Sun Sep 25 01:55:16 2005 From: checker at panix.com (Premise Checker) Date: Sat, 24 Sep 2005 21:55:16 -0400 (EDT) Subject: [Paleopsych] Global Gulag: Steven Hassan: Political Propaganda is Cult Brainwashing Message-ID: Steven Hassan: Political Propaganda is Cult Brainwashing [Thanks to Laird for this. There are hundreds of theories about persuasion. Apparently this is just one man's theory. I don't know how his theories compare with other theories, how compatible they are with each other, and so on.] A successful induction by a destructive cult displaces a person's former identity and replaces it with a new one. That new identity may not be one that the person would have freely chosen under her own volition. -Steven Hassan http://www.chapman.edu/comm/comm/faculty/thobbs/com401/socialinfluence/cult.html http://www.batr.org/gulag/ Nothing is unusual about office holders, political parties and agency bureaucracies taking liberties with facts, when they frame their case and sell their policies and programs. But how many people look upon this process as one designed not to persuade you, but to indoctrinate you into accepting causes that are not in your own self-interest? The techniques of agenda shaping and peer pressure guiding are sophisticated and Sub-Rosa in intent. According to Margaret Thaler Singer, "_Thought Reform_ (http://www.factnet.org/Thought_Reform_Exists.htm?FACTNet) is not a mysterious process. It is the systematic application of psychological and social influence techniques in an organized programmatic way within a constructed and managed environment. The goal is to produce specific attitudinal and behavioral changes. The changes occur incrementally without its being patently visible to those undergoing the process that their attitudes and behavior are being changed a step at a time according to the plan of those directing the program." The _Six Conditions_ (http://www.trinitypleasanton.org/healing/abuse_MargaretThalerSinger_article2.htm) for Thought Reform are:
1) Keep the person unaware of what is going on and how she or he is being changed a step at a time.
2) Control the person's social and/or physical environment; especially control the person's time.
3) Systematically create a sense of powerlessness in the person.
4) Manipulate a system of rewards, punishments and experiences in such a way as to inhibit behavior that reflects the person's former social identity.
5) Manipulate a system of rewards, punishments, and experiences in order to promote learning the group's ideology or belief system and group-approved behaviors.
6) Put forth a closed system of logic and an authoritarian structure that permits no feedback and refuses to be modified except by leadership approval or executive order.
Since _Singer_ (http://www.caic.org.au/general/singer.htm) claims that techniques to achieve power through mind control are readily comprehended, the key inquiry is to understand why the general public is so unaware of the assault upon their thought process. The factual historic record leads one to conclude that the limited and select group of participants that seek and attain positions of influence share one crucial trait: namely, a desire to preach their adherence to the "Will of the People"; while they exhort democratic rule, their actions conform to governance by the privileged few.
Citing the significance of 1984 we find the answer: "Orwell reasoned that if a government could control all media and communication, meanwhile forcing citizens to speak in a politically-controlled jargon, this would blunt independent thinking. If thought could be controlled, then rebellious actions against a regime could be prevented." Because of the nature of lust for power, moral means to achieve political objectives become irrelevant; thus, the motive to conceal real intentions from the citizens. The Sub-Rosa character of the elites is not who they are, but the actual hidden goals they achieve with their policies and control of the State. Singer concludes: "Orwell's genius centered on seeing how language, not physical force, would be used to manipulate minds. In fact the growing evidence in the behavioral sciences is that a smiling Big Brother has greater power to influence thought and decision-making than a visibly threatening person." Now review the following chart, which identifies the progression from individual education to mass control:
CHART (columns: Education, Advertising, Propaganda, Indoctrination, Thought Reform)
Focus of body of knowledge
* Education: Many bodies of knowledge, based on scientific findings in various fields.
* Advertising: Body of knowledge concerns product, competitors; how to sell and influence via legal persuasion.
* Propaganda: Body of knowledge centers on political persuasion of masses of people.
* Indoctrination: Body of knowledge is explicitly designed to inculcate organizational values.
Direction & degree of exchange
* Education: Two-way pupil-teacher exchange encouraged.
* Advertising: Exchange can occur but communication generally one-sided.
* Propaganda: Some exchange occurs but communication generally one-sided.
* Indoctrination: Limited exchange occurs, communication is one-sided.
* Thought Reform: No exchange occurs, communication is one-sided.
Ability to change
* Education: Change occurs as science advances; as students and other scholars offer criticisms; as students & citizens evaluate programs.
* Advertising: Change made by those who pay for it, based upon the success of ad programs, by consumer law, & in response to consumer complaints.
* Propaganda: Change based on changing tides in world politics and on political need to promote the group, nation, or international organization.
* Indoctrination: Change made through formal channels, via written suggestions to higher-ups.
* Thought Reform: Change occurs rarely; organization remains fairly rigid; change occurs primarily to improve thought-reform effectiveness.
Structure of persuasion
* Education: Uses teacher-pupil structure; logical thinking encouraged.
* Advertising: Uses an instructional mode to persuade consumer/buyer.
* Propaganda: Takes authoritarian stance to persuade masses.
* Indoctrination: Takes authoritarian & hierarchical stance.
* Thought Reform: Takes authoritarian & hierarchical stance; no full awareness on part of learner.
Type of relationship
* Education: Instruction is time-limited: consensual.
* Advertising: Consumer/buyer can accept or ignore communication.
* Propaganda: Learner support & engrossment expected.
* Indoctrination: Instruction is contractual: consensual.
* Thought Reform: Group attempts to retain people forever.
Deceptiveness
* Education: Is not deceptive.
* Advertising: Can be deceptive, selecting only positive views.
* Propaganda: Can be deceptive, often exaggerated.
* Indoctrination: Is not deceptive.
* Thought Reform: Is deceptive.
Breadth of learning
* Education: Focuses on learning to learn & learning about reality; broad goal is rounded knowledge for development of the individual.
* Advertising: Has a narrow goal of swaying opinion to promote and sell an idea, object, or program; another goal is to enhance seller & possibly buyer.
* Propaganda: Targets large political masses to make them believe a specific view or circumstance is good.
* Indoctrination: Stresses narrow learning for a specific goal; to become something or to train for performance of duties.
* Thought Reform: Individualized target; hidden agenda (you will be changed one step at a time to become deployable to serve leaders).
Tolerance
* Education: Respects differences.
* Advertising: Puts down competition.
* Propaganda: Wants to lessen opposition.
* Indoctrination: Aware of differences.
* Thought Reform: No respect for differences.
Methods
* Education: Instructional techniques.
* Advertising: Mild to heavy persuasion.
* Propaganda: Overt persuasion sometimes unethical.
* Indoctrination: Disciplinary techniques.
* Thought Reform: Improper and unethical techniques.
Propaganda leading to Indoctrination has the ultimate objective of Thought Reform. While many citizens would contest overt indoctrination, most accept that propaganda is a fact of life. But how many are even aware of the final stage - _Thought Reform_ (http://www.xjw.com/whatcult.html)? And even more important, how many are willing to admit to themselves that this conclusive level effectively eliminates all individuality - subjugating each person to endless servitude to the cult regime run by approved elites? Yes, the State is a Cult. The World Book Encyclopedia explains that "traditionally, the term cult referred to any form of worship or ritual observance." We have been conditioned to propaganda - under ability to change - to promote the group nation. Then we have been subjected to indoctrination - under focus of body of knowledge - explicitly designed to inculcate organizational values. And finally we have been manipulated into Thought Reform - under type of relationship - to accept group attempts to retain control over people forever. These techniques are all designed to create and maintain worship of a strong authority figure. We are told to accept that the State is the absolute embodiment of authority. The "Supreme Being" has been replaced with the coercive totalitarian temporal cult that requires homage, tribute and obedience. The ritual of prearranged elections orchestrates an opera of musical chairs that has the fat lady singing before the first candidate enters the race. The franchise has become a fallacy, the group mission protects the government sect and the existence of independent conduct turns into an illusion. That is the purpose of Thought Reform. "There are three forces, the only three forces capable of conquering and enslaving forever the conscience of these weak rebels in the interests of their own happiness. They are: the miracle, the mystery and authority." - F. Dostoyevsky, The Brothers Karamazov
Cults want wealth and power for the leadership, to be supplied by members. Government conforms to this model. Both employ the following:
* Miracle - ideology imputing miraculous power to leaders and/or activities.
* Mystery - secrecy obscuring actual beliefs and practices.
* Authority - claims on members' time, talents, bodies, or property to meet group needs.
_Robert Lifton and Mikhael Heller_ (http://www.rickross.com/reference/brainwashing/brainwashing14.html) conducted studies on Thought Reform. Today we see those applications in our society. The miracle becomes the utopia that politicians promise from their policies administered by an all-persuasive government. The mystery incorporates an obsession with secrecy, limited access, forged accounts, and significant news going unreported. And authority becomes centralized, with state power supreme and the destruction of competing political ideologies. From checker at panix.com Sun Sep 25 20:01:33 2005 From: checker at panix.com (Premise Checker) Date: Sun, 25 Sep 2005 16:01:33 -0400 (EDT) Subject: [Paleopsych] Wikipedia: Fascism Message-ID: Today's theme is fascism, some six articles.
After you've digested them, let's discuss whether friendly fascism correctly describes America today. Frank Fascism - Wikipedia, the free encyclopedia http://en.wikipedia.org/wiki/Fascism [Links omitted.] Fascism (in Italian, fascismo), capitalized, was the authoritarian political movement which ruled Italy from 1922 to 1943 under the leadership of Benito Mussolini. Similar political movements spread across Europe between World War One and World War Two and took several forms such as Nazism and Clerical fascism. Neofascism is generally used to describe post-WWII movements seen to have fascist attributes. Fascism was typified by attempts to impose state control over all aspects of life. The definitional debates and arguments by academics over the nature of fascism, however, fill entire bookshelves. There are clearly elements of both left and right ideology in the development of Fascism. Modern colloquial usage of the word has extended the definition of the terms fascism and neofascism to refer to any totalitarian worldview regardless of its political ideology, although scholars frown on this. Sometimes the word "fascist" is used as a hyperbolic political epithet. The word "fascism" comes from fascio (plural: fasci), which may mean "bundle," as in a political or militant group or a nation, but also from the fasces (rods bundled around an axe), which were an ancient Roman symbol of the authority of magistrates. The Italian 'Fascisti' were also known as Black Shirts for their style of uniform incorporating a black shirt (See Also: political colour). Italian Fascism is often considered to be a proper noun and thus denoted by a capital letter "F", whereas generic fascism is conventionally represented with the lower-case character "f". Italian Fascism is considered a model for other forms of fascism, yet there is disagreement over which aspects of structure, tactics, culture, and ideology represent a "fascist minimum" or core.
Contents * 1 Definition * 2 The origin and ideology of Fascism * 3 Italian Fascism * 4 Nazism and Fascism * 5 Mussolini's influences * 6 Fascism and the political spectrum * 7 Fascism and other totalitarian regimes * 8 Anti-Communism * 9 Fascism and the Catholic Church * 10 Fascism and the Protestant churches * 11 Practice of fascism * 12 Fascism as an international phenomenon * 13 Fascism in the United States? * 14 Neo-Fascism * 15 Fascist mottos and sayings * 16 Related topics * 17 References * 18 General bibliography + 18.1 Bibliography on Fascist ideology + 18.2 Bibliography on international fascism * 19 Further reading * 20 External links Definition The term fascism has come to mean any system of government resembling Mussolini's, that in various combinations: * exalts the nation and party above the individual, with the state apparatus being supreme. * stresses loyalty to a single leader, and submission to a single culture. * engages in economic totalitarianism through the creation of a Corporatist State, where the divergent economic and social interests of different races and classes are combined with the interests of the State. As a political and economic system in Italy, fascism combined elements of corporatism, totalitarianism, nationalism, and anti-Communism. In an article in the 1932 Enciclopedia Italiana, written by Giovanni Gentile and attributed to Benito Mussolini, fascism is described as a system in which "The State not only is authority which governs and molds individual wills with laws and values of spiritual life, but it is also power which makes its will prevail abroad... For the Fascist, everything is within the State and... neither individuals nor groups are outside the State... For Fascism, the State is an absolute, before which individuals or groups are only relative..." Mussolini, in a speech delivered on October 28, 1925, stated the following maxim that encapsulates the fascist philosophy: "Tutto nello Stato, niente al di fuori dello Stato, nulla contro lo Stato." ("Everything in the State, nothing outside the State, nothing against the State".) Therefore, he reasoned, all individuals' business is the state's business, and the state's existence is the sole duty of the individual. Another key distinguishing feature of fascism is that it uses a mass movement to attack or absorb the organizations of the working class: parties of the left and trade unions. Peter Fritzsche and others have described fascism as a militant form of right-wing populism, although in terms of the political compass Fascism is more of a third positionist doctrine of the 'extreme center'. This mobilization strategy involves the creation of a Corporative State [1], in brief, this is a form of state action designed to both minimize the power of key business leaders and labor unions. Mussolini, for example, capitalized on fear of a Communist revolution [2], finding ways to unite Labor and Capital to prevent class war. In 1926 he created the National Council of Corporations, divided into guilds of employers and employees, tasked with managing 22 sectors of the economy. The guilds subsumed both labor unions and management, and were represented in a chamber of corporations through a triad comprising of a representative from management, from labour and from the party. Together they would plan aspects of the economy for mutual advantage. The movement was supported by small capitalists, low-level bureaucrats, and the middle classes, who had all felt threatened by the rise in power of the Socialists. 
Fascism also met with great success in rural areas, especially among farmers, peasants, and in the city, the lumpenproletariat. This working class support arose from the greater allegiance that many had to their State over class. Unlike the pre-World War II period, when many groups openly and proudly proclaimed themselves fascist, since World War II the term has taken on an extremely pejorative meaning, largely in reaction to the crimes against humanity committed by the National Socialist Nazis, who were allied with Mussolini during the war. Today, very few groups proclaim themselves as openly fascist, and the term is almost universally used for groups for whom the speaker has little regard, often with minimal understanding of what the term actually means. The term "fascist" is often ascribed to individuals or groups who are perceived to behave in an authoritarian or racialistic manner; by silencing opposition, judging personal behavior, or otherwise attempting to concentrate power. More particularly, "fascist" is sometimes used by members of the Left to characterize some group or persons of the Right. This usage receded following the 1970s, but has interestingly enjoyed a strong resurgence in connection with anti-globalization activism, even though most genuine Fascist movements are rabidly anti-Globalist. In addition to being anti-Globalist, fascism may be understood as being anti-liberalism, anti-socialist, anti-Communist, anti-democratic, anti-egalitarian, anti-rationalist etc., and in some of its forms anti-religion and anti-monarchy. The origin and ideology of Fascism Etymologically, the use of the word Fascism in modern Italian political history stretches back to the 1890s in the form of fasci, which were radical left-wing political factions that proliferated in the decades before World War I. (See Fascio for more on this movement and its evolution.) One of the first of these groups were the Fasci Siciliani who were part of the first movement that consisted of the Italian working-class peasants that made real progress. The Fasci Siciliani dei lavoratori, were revolutionary socialists that were led by Giuseppe De Felice Giuffrida. Italian Fascism The Doctrine of Fascism was written by Giovanni Gentile, a neo-Hegelian philosopher who served as the official philosopher of fascism. Mussolini signed the article and it was officially attributed to him. In it, French socialists Georges Sorel, Charles Peguy, and Hubert Lagardelle were invoked as the sources of fascism. Sorel's ideas concerning syndicalism and violence are much in evidence in this document. It also quotes from Ernest Renan who it says had "pre-fascist intuitions". Both Sorel and Peguy were influenced by the Frenchman Henri Bergson. Bergson rejected the scientism, mechanical evolution and materialism of Marxist ideology. Also, Bergson promoted an elan vital as an evolutionary process. Both of these elements of Bergson appear in fascism. Mussolini states that fascism negates the doctrine of scientific and Marxian socialism and the doctrine of historic materialism. Hubert Lagardelle, an authoritative syndicalist writer, was influenced by Pierre-Joseph Proudhon who, in turn, inspired anarchosyndicalism. There were several strains of tradition influencing Mussolini. Sergio Panunzio, a major theoretician of fascism in the 1920s, had a syndicalist background, but his influence waned as the movement shed its old left wing elements. 
The fascist concept of corporatism and particularly its theories of class collaboration and economic and social relations have similarities to the model laid out by Pope Leo XIII's 1892 encyclical Rerum Novarum[3]. This encyclical addressed politics as it had been transformed by the Industrial Revolution, and other changes in society that had occurred during the nineteenth century. The document criticized capitalism, complaining of the exploitation of the masses in industry. However, it also sharply criticized the socialist concept of class struggle, and the proposed socialist solution to exploitation (the elimination, or at least the limitation, of private property). Rerum Novarum called for strong governments to undertake a mission to protect their people from exploitation, while continuing to uphold private property and reject socialism. It also asked Catholics to apply principles of social justice in their own lives. Seeking to find some principle to compete with and replace the Marxist doctrine of class struggle, Rerum Novarum urged social solidarity between the upper and lower classes. Its analogy of the state as being like a body working together as "one mind" had some cultural influence on the early Fascists of Catholic nations. It also indicated the state had a right to suppress "firebrands" and striking workers. Further Rerum Novarum proposed a kind of corporatism that resembled medieval guilds for an industrial age. This relates far more directly to Brazilian Integralism form of Fascism than anything in Italy. There are also disputable claims that it influenced The New Deal. The encyclical intended to counteract the "subversive nature" of both Marxism and liberalism. Themes and ideas developed in Rerum Novarum can also be found in the ideology of fascism as developed by Mussolini. Although it also contains ideas like "the members of the working classes are citizens by nature and by the same right as the rich" or "the State has for its office to protect natural rights, not to destroy them; and, if it forbid its citizens to form associations, it contradicts the very principle of its own existence," that never fit easily with Italian Fascism. Fascism also borrowed from Gabriele D'Annunzio's Constitution of Fiume for his ephemeral "regency" in the city of Fiume. Syndicalism had an influence on fascism as well, particularly as some syndicalists intersected with D'Annunzio's ideas. Before the First World War, syndicalism had stood for a militant doctrine of working-class revolution. It distinguished itself from Marxism because it insisted that the best route for the working class to liberate itself was the trade union rather than the party. The Italian Socialist Party ejected the syndicalists in 1908. The syndicalist movement split between anarcho-syndicalists and a more moderate tendency. Some moderates began to advocate "mixed syndicates" of workers and employers. In this practice, they absorbed the teachings of Catholic theorists and expanded them to accommodate greater power of the state, and diverted them by the influence of D'Annunzio to nationalist ends. When Henri De Man's Italian translation of Au-dela du marxisme emerged, Mussolini was excited and wrote to the author that his criticism "destroyed any scientific element left in Marxism". 
Mussolini was appreciative of the idea that a corporative organization and a new relationship between labour and capital would eliminate "the clash of economic interests" and thereby neutralize "the germ of class warfare.'" Renegade socialist thinkers, Robert Michels, Sergio Panunzio, Ottavio Dinale, Agostino Lanzillo, Angelo Oliviero Olivetti, Michele Bianchi, and Edmondo Rossoni, turning against their former left-wing ideas, played a part in this attempt to find a "third way" that rejected both capitalism and socialism. Many historians claim that the March 23, 1919 meeting at the Piazza San Sepolcro was the historic "birthplace" of the fascist movement. However, this would imply that the Italian Fascists "came from nowhere" which is simply not true. Mussolini revived his former group, Fasci d'Azione rivoluzionaria, in order to take part in the 1919 elections in response to an increase in Communist activity occurring in Milan. The Fasci di Combattimenti were the result of this continuation (not creation) of the Fascist party. The result of the meeting was that Fascism became an organized political movement. Among the founding members were the revolutionary syndicalist leaders Agostino Lanzillo and Michele Bianchi. In 1921, the fascists developed a program that called for: * a democratic republic, * separation of church and state, * a national army, * progressive taxation for inherited wealth, and * development of co-operatives or guilds to replace labor unions. As the movement evolved, several of these initial ideas were abandoned and rejected. Mussolini's fascist state was established nearly a decade before Hitler's rise to power (1922 and the March on Rome). Both a movement and a historical phenomenon, Italian Fascism was, in many respects, an adverse reaction to both the apparent failure of laissez-faire economics and fear of the Left. Fascism was, to an extent, a product of a general feeling of anxiety and fear among the middle class of postwar Italy. This fear arose from a convergence of interrelated economic, political, and cultural pressures. Under the banner of this authoritarian and nationalistic ideology, Mussolini was able to exploit fears regarding the survival of capitalism in an era in which postwar depression, the rise of a more militant left, and a feeling of national shame and humiliation stemming from Italy's 'mutilated victory' at the hands of the World War I postwar peace treaties seemed to converge. Such unfulfilled nationalistic aspirations tainted the reputation of liberalism and constitutionalism among many sectors of the Italian population. In addition, such democratic institutions had never grown to become firmly rooted in the young nation-state. This same postwar depression heightened the allure of Marxism among an urban proletariat who were even more disenfranchised than their continental counterparts. But fear of the growing strength of trade unionism, Communism, and socialism proliferated among the elite and the middle class. In a way, Benito Mussolini filled a political vacuum. Fascism emerged as a "third way" -- as Italy's last hope to avoid imminent collapse of the 'weak' Italian liberalism, and Communist revolution. While failing to outline a coherent program, fascism evolved into a new political and economic system that combined corporatism, totalitarianism, nationalism, and anti-Communism in a state designed to bind all classes together under a capitalist system. 
This was a new capitalist system, however, one in which the state seized control of the organization of vital industries. Under the banners of nationalism and state power, Fascism seemed to synthesize the glorious Roman past with a futuristic utopia. Despite the themes of social and economic reform in the initial Fascist manifesto of June 1919, the movement came to be supported by sections of the middle class fearful of socialism and communism. Industrialists and landowners supported the movement as a defense against labour militancy. Under threat of a fascist March on Rome, in October 1922, Mussolini assumed the premiership of a right-wing coalition Cabinet initially including members of the pro-church Partito Popolare (People's Party). The regime's most lasting political achievement was perhaps the Lateran Treaty of February 1929 between the Italian state and the Holy See. Under this treaty, the Papacy was granted temporal sovereignty over the Vatican City and guaranteed the free exercise of Catholicism as the sole state religion throughout Italy in return for its acceptance of Italian sovereignty over the Pope's former dominions. In the 1930s, Italy recovered from the Great Depression, and achieved economic growth in part by developing domestic substitutes for imports (Autarchia). The draining of the malaria-infested Pontine Marshes south of Rome was one of the regime's proudest boasts. But growth was undermined by international sanctions following Italy's October 1935 invasion of Ethiopia (the Abyssinia crisis), and by the government's costly military support for Franco's Nationalists in Spain. International isolation and their common involvement in Spain brought about increasing diplomatic collaboration between Italy and Nazi Germany. This was reflected also in the Fascist regime's domestic policies as the first anti-semitic laws were passed in 1938. Italy's intervention (June 10th 1940) as Germany's ally in World War II brought military disaster, and resulted in the loss of her north and east African colonies and the American-British-Canadian invasion of Sicily in July 1943 and southern Italy in September 1943. Mussolini was dismissed as prime minister by King Victor Emmanuel III on July 25th 1943, and subsequently arrested. He was freed in September by German paratroopers under command of Otto Skorzeny and installed as head of a puppet "Italian Social Republic" at Salo in German-occupied northern Italy. His association with the German occupation regime eroded much of what little support remained to him. His summary execution on April 28th 1945 during the war's violent closing stages by the northern partisans was widely seen as a fitting end to his regime. After the war, the remnants of Italian fascism largely regrouped under the banner of the neo-Fascist "Italian Social Movement" (MSI). The MSI merged in 1994 with conservative former Christian Democrats to form the "National Alliance" (AN), which proclaims its commitment to constitutionalism, parliamentary government and political pluralism. Nazism and Fascism [Photo caption: Benito Mussolini and Adolf Hitler] The extent and nature of the affinity between Fascism and Nazism has been the subject of much academic debate. Although the modern consensus sees Nazism as a type or offshoot of fascism, there are many experts who argue that Nazism was not fascist at all, either on the grounds that the differences are too great, or because they deny that fascism is generic.
Differences Nazism differed from Fascism proper in the emphasis on the state's purpose in serving a racial rather than a national ideal, specifically the social engineering of culture to the ends of the greatest possible prosperity for the so-called "Master race" at the expense of all else and all others. In contrast, Mussolini's Fascism held that cultural factors existed to serve the state, and that it wasn't necessarily in the state's interest to serve or engineer any of these particulars within its sphere. The only purpose of government under fascism proper was to uphold the state as supreme above all else, and for these reasons it can be said to have been a governmental statolatry. Where Nazism spoke of "Volk", Fascism talked of "State". While Nazism was a metapolitical ideology, seeing both party and government as a means to achieve an ideal condition for certain chosen people, fascism was a squarely anti-socialist form of statism that existed as an end in and of itself. The Nazi movement, at least in its overt ideology, spoke of class-based society as the enemy, and wanted to unify the racial element above established classes. The Fascist movement, on the other hand, sought to preserve the class system and uphold it as the foundation of established and desirable culture, although this is not to say that Fascists rejected the concept of social mobility. Indeed a central tenet of the Corporate State was meritocracy. This underlying theorem made the Fascists and National Socialists in the period between the two world wars sometimes see themselves and their respective political labels as at best partially exclusive of one another, and at worst diametrically opposed to one another. Affinities Nevertheless, despite these differences, Kevin Passmore (2002 p.62) observes: There are sufficient similarities between Fascism and Nazism to make it worthwhile applying the concept of fascism to both. In Italy and Germany a movement came to power that sought to create national unity through the repression of national enemies and the incorporation of all classes and both genders into a permanently mobilized nation. Hitler and Mussolini themselves recognised commonalities in their politics. The second part of Hitler's Mein Kampf, "The National Socialistic Movement", first published in 1926, contains this passage: I conceived the profoundest admiration for the great man south of the Alps, who, full of ardent love for his people, made no pacts with the enemies of Italy, but strove for their annihilation by all ways and means. What will rank Mussolini among the great men of this earth is his determination not to share Italy with the Marxists, but to destroy internationalism and save the fatherland from it. (p. 622) Mussolini's influences Fascism did not spring forth full-grown, and the writings of Fascist theoreticians cannot be taken as a full description of Mussolini's ideology, let alone how specific situations inevitably resulted in deviations from ideology. Mussolini's policies drew on both the history of the Italian nation and the philosophical ideas of the 19th century. What resulted was neither logical nor well defined, to the extent that Mussolini defined it as "action and mood, not doctrine". Nonetheless, certain ideas are clearly visible. The most obvious is nationalism. The last time Italy had been a great nation was under the banner of the Roman Empire and Italian nationalists always saw this as a period of glory. 
Given that even other European nations with imperial ambitions had often invoked ancient Rome in their architecture and vocabulary, it was perhaps inevitable that Mussolini would do the same. Following the fall of the Western Roman Empire, Italy had not again been united until its final unification in 1870. Mussolini desired to affirm an Italian national identity and therefore saw the unification as the first step towards returning Italy to greatness and often exploited the unification and the achievements of leading figures such as Garibaldi to induce a sense of Italian national pride. The Fascist cult of national rebirth through a strong leader has roots in the romantic movement of the 19th century, as does the glorification of war. For example, the loss of the war with Abyssinia had been a great humiliation to Italians and consequently it was the first place targeted for Italian expansion under Mussolini. Not all ideas of fascism originated from the 19th century; for example, the use of systematic propaganda to pass on simple slogans such as "believe, obey, fight" and Mussolini's use of the radio were both techniques developed in the 20th century. Similarly, Mussolini's corporate state was a distinctly 20th-century creation. Fascism and the political spectrum Early fascists demonstrated a willingness to do whatever was necessary to achieve their ends, and easily shifted from left-wing to right-wing positions as suited their purposes. This inconsistency makes it difficult to strictly categorize fascism on the traditional political spectrum. Some scholars argue that Italian Fascism, unlike some other contemporary movements, did not grow out of a strict theoretical basis. Layton describes Fascism as "not even a rational system of thought", and as "unique but not original". Fascism tends to be associated with the political right, but the appropriateness of this association is often contested. In one sense, fascism can be considered to be a new ideological development that transcends the right/left framework. At the same time, it does contain ideological elements usually associated with the right. These two facets can be seen in the following quote from Mussolini himself, writing in The Doctrine of Fascism: "Granted that the XIXth century was the century of socialism, liberalism, democracy, this does not mean that the XXth century must also be the century of socialism, liberalism, democracy. Political doctrines pass; nations remain. We are free to believe that this is the century of authority, a century tending to the 'right', a Fascist century." Griffin, Eatwell, Laqueur, and Weber are among the top scholars of fascism, and they are reluctant to call fascism simply a right-wing ideology. Yet in their lengthy discussions they observe that generally fascism and neofascism ally themselves with right-wing or conservative forces on the basis of racial nationalism, hatred of the political left, or simple expediency. Laqueur: "But historical fascism was always a coalition between radical, populist ('fascist') elements and others gravitating toward the extreme Right" p. 223. Eatwell talks about the need of fascism for "syncretic legitimation" which sometimes led it to forge alliances with "existing mainstream elites, who often sought to turn fascism to their own more conservative purposes." Eatwell also observes that "in most countries it tended to gather force in countries where the right was weak" p. 39.
Griffin also does not include right ideology in his "fascist minimum," but he has described Fascism as "Revolution from the Right" pp. 185-201. Weber: "...their most common allies lay on the right, particularly on the radical authoritarian right, and Italian Fascism as a semi-coherent entity was partly defined by its merger with one of the most radical of all right authoritarian movements in Europe, the Italian Nationalist Association (ANI)." p. 8. Thus according to these scholars, there are both left and right influences on fascism, and right-wing ideology should not be considered part of the "fascist minimum". However, they also show that in actual practice, there is a gravitation of fascism toward the political right. The adoption of the name by the Italian Fascist Party reflected the previous involvement of a number of ideologues who intersected with radical left politics. While opposing communism and social democracy, fascism was influenced by the theories of Gabriele D'Annunzio (a former anarchist), Alceste de Ambris (influenced by anarcho-syndicalism), and former socialist Benito Mussolini. Fascists themselves often rejected categorization as left or right-wing, claiming to be a "third force" (see international third position and political spectrum for more information). Analysts on the left counter that Fascism rejects Marxism and the concept of class struggle in favor of corporatism. Contrary to the practice of socialist states, fascist Italy did not nationalize any industries or capitalist entities. Rather, the left insists, it established a corporatist structure influenced by the model for class relations put forward by the Catholic Church. (For more on the influence of Catholicism on fascism see links between the clergy and fascist parties.) Fascism and other totalitarian regimes Some historians and theorists regard fascism and "Soviet Communism" (or more specifically, Stalinism) as being similar, lumping them together under the term "totalitarianism". Friedrich Hayek argues that the differences between fascism and totalitarian forms of socialism (see Stalinism) are rhetorical rather than actual. Others see them as being so dissimilar as to be utterly incomparable. According to the libertarian Nolan chart, "fascism" occupies a place on the political spectrum as the capitalist equivalent of communism, wherein a system that supports "economic liberty" is constrained by its social controls such that it becomes totalitarian. Hannah Arendt and other theorists of totalitarian rule argue that there are similarities between nations under Fascist and Stalinist rule. They condemn both groups as dictatorships and totalitarian police states. For example, both Hitler and Stalin committed the mass murder of millions of their country's civilians who did not fit in with their plans. In 1947, Austrian economist Ludwig von Mises published a short book entitled "Planned Chaos". He asserted that fascism and Nazism were socialist dictatorships and that both had been committed to the Soviet principle of dictatorship and violent oppression of dissenters. He argued that Mussolini's major heresy from Marxist orthodoxy had been his strong endorsement of Italian entry into World War I on the Allied side. (Mussolini aimed to "liberate" Italian-speaking areas under Austrian control in the Alps.) This view contradicts the statements of Mussolini himself (not to mention his socialist opponents), and is generally viewed with skepticism by historians. 
Critics of von Mises often argue that he was attacking a straw man; in other words, that he changed the definition of "socialism" in his book, for the precise purpose of accommodating fascism and Nazism into it. Critics of this view point out that Mussolini imprisoned Antonio Gramsci from 1926 until 1934, after Gramsci, a leader of the Italian Communist Party and leading Marxist intellectual, tried to create a common front among the political left and the workers, in order to resist and overthrow fascism. Other Italian Communist leaders like Palmiro Togliatti went into exile and fought for the Republic in Spain. The Marxist concept of dictatorship of the proletariat alluded to by Von Mises is not the same as the dictatorship concept employed by fascists, argue proponents of communism. Dictatorship of the proletariat is supposed to mean workers' democracy, or dictatorship by the working class, rather than dictatorship by the capitalist class. They claim that this concept had been distorted under Stalin to mean dictatorship by the General Secretary over the party and the working class. In this, Stalin deviated from Marx, and therefore it cannot be said that the Stalinist form of government is Marxist. Opponents of Communism, however, argue that the Soviet Union was dictatorial already under Lenin. The fascist economic model of corporatism promoted class collaboration by attempting to bring classes together under the unity of the state, a concept that is anathema to classic socialism. The fascist states from the period between the two world wars were police states, as were the ostensibly socialist USSR and the post-WWII Soviet bloc states. Conversely, there have been multi-party socialist states that have not been police states, and non-socialist states that have been police states. Examples of police states in modern times, outside of the Communist world, include: * Afghanistan under the Taliban; * Brazil under Getulio Vargas (fascism-like state) and also during the military dictatorship from 1964 to 1986; * Burma (Myanmar) under the current military dictatorship; * Chile under General Augusto Pinochet; * the Republic of China under Chiang Kai-shek's Kuomintang; * Iran under the Mohammad Ali Shah, as well as under the last Shah, Mohammad Reza Pahlavi, and later on under the Islamic Republic; * Iraq and Syria under Ba'athist dictatorships; * South Vietnam, South Korea, Singapore, etc. during certain periods of their recent history; Anti-Communism Fascism and Communism are political systems that rose to prominence after World War I. Historians of the period between World War I and World War II such as E.H. Carr and Eric Hobsbawm point out that liberalism was under serious stress in this period and seemed to be a doomed philosophy. The success of the Russian Revolution of 1917 resulted in a revolutionary wave across Europe. The socialist movement worldwide split into separate social democratic and Leninist wings. The subsequent formation of the Third International prompted serious debates within social democratic parties, resulting in supporters of the Russian Revolution splitting to form Communist Parties in most industrialized (and many non-industrialized) countries. At the end of World War I, there were attempted socialist uprisings or threats of socialist uprisings throughout Europe, most notably in Germany, where the Spartacist uprising, led by Rosa Luxemburg and Karl Liebknecht in January 1919, was eventually crushed. 
In Bavaria, Communists successfully overthrew the government and established the Munich Soviet Republic that lasted from 1918 to 1919. A short-lived Hungarian Soviet Republic was also established under Béla Kun in 1919. The Russian Revolution also inspired attempted revolutionary movements in Italy with a wave of factory occupations. Most historians view fascism as a response to these developments, as a movement that both tried to appeal to the working class and divert them from Marxism. It also appealed to capitalists as a bulwark against Bolshevism. Italian fascism took power with the blessing of Italy's king after years of leftist-led unrest led many conservatives to fear that a communist revolution was inevitable. Throughout Europe, numerous aristocrats, conservative intellectuals, capitalists and industrialists lent their support to fascist movements in their countries that emulated Italian fascism. In Germany, numerous right-wing nationalist groups arose, particularly out of the post-war Freikorps, which were used to crush both the Spartacist uprising and the Munich Soviet. With the worldwide Great Depression of the 1930s, it seemed that liberalism and the liberal form of capitalism were doomed, and Communist and fascist movements swelled. These movements were bitterly opposed to each other and fought frequently, the most notable example of this conflict being the Spanish Civil War. This war became a proxy war between the fascist countries and their international supporters -- who backed Francisco Franco -- and the worldwide Communist movement allied uneasily with anarchists and Trotskyists -- who backed the Popular Front -- and were aided chiefly by the Soviet Union. Initially, the Soviet Union supported a coalition with the western powers against Nazi Germany and popular fronts in various countries against domestic fascism. This policy was largely unsuccessful due to the distrust shown by the western powers (especially Britain) towards the Soviet Union. The Munich Agreement between Germany, France and Britain heightened Soviet fears that the western powers were endeavoring to force them to bear the brunt of a war against Nazism. The lack of eagerness on the part of the British during diplomatic negotiations with the Soviets served to make the situation even worse. The Soviets changed their policy and negotiated a non-aggression pact known as the Molotov-Ribbentrop Pact in 1939. Vyacheslav Molotov claims in his memoirs that the Soviets believed this was necessary to buy them time to prepare for an expected war with Germany. Stalin expected the Germans not to attack until 1942, but the pact ended in 1941 when Nazi Germany invaded the Soviet Union in Operation Barbarossa. Fascism and communism reverted to being lethal enemies. The war, in the eyes of both sides, was a war between ideologies. * See also: Anti-Communism Fascism and the Catholic Church Another controversial topic is the relationship between fascist movements and the Catholic Church. As mentioned above, Pope Leo XIII's 1891 encyclical Rerum Novarum included doctrines that fascists used or admired. Forty years later, the corporatist tendencies of Rerum Novarum were underscored by Pope Pius XI's May 25, 1931 encyclical Quadragesimo Anno [4], which restated the hostility of Rerum Novarum to both unbridled competition and class struggle. The criticism of both socialism and capitalism in these encyclicals was not fascist per se, but by weakening support for either alternative such writings arguably opened the door to fascism.
In the early 1920s, the Catholic party in Italy (Partito Popolare) was in the process of forming a coalition with the Reform Party that could have stabilized Italian politics and thwarted Mussolini's projected coup. On October 2, 1922, Pope Pius XI circulated a letter ordering clergy not to identify themselves with the Partito Popolare, but to remain neutral, an act that undercut the party and its alliance against Mussolini. Following Mussolini's rise to power, the Vatican's Secretary of State met Il Duce in early 1923 and agreed to dissolve the Partito Popolare, which Mussolini saw as an obstacle to fascist rule. In exchange, the fascists made guarantees regarding Catholic education and institutions. In 1924, following the murder of the leader of the Socialist Party by fascists, the Partito Popolare joined with the Socialist Party in demanding that the King dismiss Mussolini as Prime Minister, and stated their willingness to form a coalition government. Pius XI responded by warning against any coalition between Catholics and socialists. The Vatican ordered all priests to resign from the Partito Popolare and from any positions they held in it. This led to the party's disintegration in rural areas where it relied on clerical assistance. The Vatican subsequently established Catholic Action as a non-political lay organization under the direct control of bishops. The organization was forbidden by the Vatican to participate in politics, and thus was not permitted to oppose the fascist regime. Pius XI ordered all Catholics to join Catholic Action. This resulted in hundreds of thousands of Catholics withdrawing from the Partito Popolare, and joining the apolitical Catholic Action. This caused the Catholic Party's final collapse. [5] When Mussolini ordered the closure of Catholic Action in May 1931, Pius XI issued an encyclical, Non abbiamo bisogno. This document stated the Catholic Church's opposition to the dissolution, and argued that the order "unmasked the 'pagan' intentions of the Fascist state". Under international pressure, Mussolini decided to compromise, and Catholic Action was saved. For Catholics, the encyclical's disapproval of any system that puts the nation above God or humanity remains doctrine. Aside from certain ideological similarities, the relationship between the Church and fascist movements in various countries has often been deemed close. An early example is Austria, which developed a quasi-fascist authoritarian Catholic regime some call the "Austro-fascist" Ständestaat between 1934 and 1938. There is little debate over Slovakia, where the fascist dictator was a Catholic monsignor; and Croatia, where the fascist Ustashe identified itself as a Catholic movement. The Iron Guard in Romania identified itself as an Eastern Orthodox movement (with no connection to Roman Catholicism), and had particularly strong leanings toward clerical fascism. (See also Involvement of Croatian Catholic clergy with the Ustasa regime.) The Vichy regime in France was also deeply influenced by the reactionary Catholic-influenced ideology of the Action Française. This group had actually been led by an agnostic and condemned by the Catholic Church in 1926. Many of its members were reactionary Catholics, so this condemnation damaged the group, but then in 1938 the condemnation was lifted. Conversely, many Catholic priests were persecuted under the Nazi regime, and many Catholic laypeople and clergy played notable roles in sheltering Jews during the Holocaust.
For a further exploration of the relationship between Catholicism and Fascism, see the article on Clerical Fascism. Fascism and the Protestant churches Hitler, in his manifesto, Mein Kampf, listed Martin Luther as one of Germany's great historic reformers. In Luther's 1543 book On the Jews and Their Lies, Luther advocated the burning of synagogues and schools, the deportation of Jews, and many other measures that resemble the actions later taken by the Nazis. Protestant churches made no comment on the Nazis' growing anti-Jewish activities. Many Protestants opposed the governments of the Weimar Republic in the 1920s, which they saw as coalitions between the Socialists and the Catholic Centre party. In 1932, many German Protestants joined together to form the German Christian Movement, which enthusiastically supported Nazi propaganda and sought to join Church and State. 3,000 of the 17,000 Protestant pastors in Germany were to join the movement. Hitler wished to unite a Protestant church of 28 different federations into one nationalist body. Pastor Ludwig Müller, the leader of the German Christian Movement, was soon appointed Hitler's advisor on religious affairs. He was elected Reich Bishop in charge of the German Protestant churches in 1933. An "Aryan Paragraph" was introduced to the constitution which stated that no one of non-Aryan background, or married to anyone of non-Aryan background, could serve as either a pastor or church official. Pastors and officials who had married a non-Aryan were to be dismissed. Much of the Lutheran and Methodist establishment in Germany had fallen in behind Hitler in his promise to oppose Bolshevism and instability. The new measures began to raise some opposition to the German Christians from a minority of Lutherans and Evangelicals who disliked state interference in church affairs. A small group of Protestant clergy under Martin Niemoeller separated from the main churches to form the Confessing Church. Neither the official church nor the Confessing Church, however, openly opposed the Nazis' anti-Jewish policies. Practice of fascism Examples of fascist systems include: * Mussolini's Italy * Nazi Germany * Clerical fascism Fascism in practice embodied both political and economic policies, and invites different comparisons. As noted elsewhere in this article, some writers who focus on the politically repressive policies of fascism identify it as one form of totalitarianism, a description they use to characterize not only Fascist Italy and Nazi Germany, but also countries such as the Soviet Union, the People's Republic of China or North Korea. It should be noted that "totalitarianism" is a catch-all category which includes many different ideologies that are sworn enemies. However, some analysts point out that certain fascist governments were arguably more authoritarian than totalitarian. There is almost universal agreement that Nazi Germany was totalitarian. However, many would argue that the governments of Franco's Spain and Salazar's Portugal, while fascistic, were more authoritarian than totalitarian. Spain under the Falange Española y de las Juntas de Ofensiva Nacional Sindicalista (FET y de las JONS) Party of Francisco Franco was a coalition that included fascists. Those who focus on economic policies and state intervention in the economy identify fascism as corporatism. In this corporatist model of private management, the various functions of the state were controlling and regulating trade, while maintaining de jure private ownership.
This contrasts with state socialism, in which the state controls industry through outright nationalization. Private activity is controlled by the state, so that the state may subsidize or suspend the activities of any entity in accordance with their usefulness and direction. Corporatism was a political outgrowth of Catholic social doctrine from the 1890s. Some contested examples of fascism are Franklin D. Roosevelt's New Deal in the United States and Juan Perón's populism in Argentina. Prominent proponents of fascism in pre-WWII America included the publisher Seward Collins, whose periodical The American Review (1933-1937) featured essays by Collins and others that praised Mussolini and Hitler. The America First anti-war movement fought to keep the US neutral after Britain entered the war in 1939, but was not supportive of fascism. Father Charles E. Coughlin's Depression-era radio broadcasts extolled the virtues of fascism. Henry Wallace wrote in 1944, during his term as vice president of the United States, "American fascism will not be really dangerous until there is a purposeful coalition among the cartelists, the deliberate poisoners of public information, and those who stand for the K.K.K. type of demagoguery." [Wallace, 1944] Fascism as an international phenomenon It is often a matter of dispute whether a certain government is to be characterized as fascist, authoritarian, totalitarian, or just a plain police state. Regimes that are alleged to have been either fascist or sympathetic to fascism include: Austria (1933-1938) - Austro-fascism: Dollfuß dissolved parliament and established a clerical-fascist dictatorship which lasted until Austria was incorporated into Germany through the Anschluss. Dollfuß's idea of a "Ständestaat" was borrowed from Mussolini. Italy (1922-1943) - The first fascist country, it was ruled by Benito Mussolini (Il Duce) until he was dismissed and arrested on 25 July 1943. Mussolini was then rescued from prison by German troops, and set up a short-lived puppet state named "Repubblica di Salò" in northern Italy under the protection of the German army. Germany (1933-1945) - Ruled by the Nazi movement of Adolf Hitler (der Führer). In the terminology of the Allies, Nazi Germany, their chief enemy, was the mightiest and best-known fascist state. See above for a discussion on the differences and similarities between Nazism and fascism. Spain (1936-1975) - After the 1936 arrest and execution of its founder José Antonio Primo de Rivera during the Spanish Civil War, the fascist Falange Española Party was allied to and ultimately came to be dominated by Generalissimo Francisco Franco, who became known as El Caudillo, the undisputed leader of the Nationalist side in the war, and, after victory, head of state until his death over 35 years later. However, it was best described as an autocracy based on Falangist fascist principles in its early years. By the mid-50s, the Spanish Miracle and the rise of the Opus Dei in the Franco regime led to Falangist fascism being discarded and fascists minimized in importance. Portugal (1932-1974) - Although less restrictive than the Italian, German and Spanish regimes, the Estado Novo regime of António de Oliveira Salazar was quasi-fascist. However, it was closer to the Spanish example of paternal authoritarianism than the Italian fascist or German Nazi model. Greece - Joannis Metaxas' 1936 to 1941 dictatorship was not particularly ideological in nature, and might hence be characterized as authoritarian rather than fascist.
The same can be argued regarding Colonel George Papadopoulos' 1967 to 1974 military dictatorship, which was supported by the United States. Brazil (1937-1945) - Many historians have argued that Brazil's Estado Novo under Getúlio Vargas was a Brazilian variant of the continental fascist regimes. For a period of time, Vargas' regime was aligned with Plínio Salgado's Integralist Party, Brazil's fascist movement. However, it also showed great affinity with organized labour and leftist ideas, leaving its classification open to interpretation. Belgium (1940-1945) - The violent Rexist movement and the Vlaamsch-Nationaal Verbond party achieved some electoral success in the 1930s. Many of its members assisted the Nazi occupation during World War II. The Verdinaso movement, too, can be considered fascist. Its leader, Joris Van Severen, was killed before the Nazi occupation. Some of its adepts collaborated, but others joined the resistance. These collaborationist movements are generally classified as belonging to the National Socialist model or the German fascist model because of their brand of racial nationalism and their close relation with the occupying authorities. Slovakia (1939-1944) - The Slovak People's Party was a quasi-fascist nationalist movement associated with the Catholic Church. It was founded by Father Andrej Hlinka, whose successor, Monsignor Jozef Tiso, became the Nazis' quisling in a nominally independent Slovakia. The clerical element lends comparison with Austrofascism or the clerical fascism of Croatia, though not to the excesses of either model. The market system was run on principles agreeing with the standard Italian fascist model of industrial regulation. France (1940-1944) - The Vichy regime of Philippe Pétain, established following France's defeat by Germany, collaborated with the Nazis, including in the death of 65,000 French Jews. However, the minimal importance of fascists in the government until its direct occupation by Germany makes it seem more similar to the regime of Franco or Salazar than to the model fascist powers. While it has been argued that anti-Semitic massacres performed by the Vichy regime were more in the interests of pleasing Germany than in service of ideology, anti-semitism was strong in France before World War II. As early as October 1940 the Vichy regime introduced the infamous statut des Juifs, which produced a new legal definition of Jewishness and barred Jews from certain public offices. Worse still, in May 1941 the Parisian police force had collaborated in the internment of foreign Jews. As a means of identifying Jews, the German authorities required all Jews in the occupied zone to wear the Star of David on their clothing. On the 11th June, they demanded that 100,000 Jews be handed over for deportation. The most infamous of these mass arrests was the so-called grande rafle du Vél' d'Hiv' which took place in Paris on the 16th and 17th July 1942. The Vélodrome d'Hiver was a large indoor sports arena situated on the rue Nélaton near the Quai de Grenelle in the 15th arrondissement of Paris. In a vast operation codenamed vent printanier, the French police rounded up 12,884 Jews from Paris and its surrounding suburbs. These were mostly adult men and women but there were around 4,000 children amongst them. The rounding up was made easier by the large number of files on Jews compiled and held by Vichy authorities since 1940. The French police, headed by René Bousquet, were entirely responsible for this operation and not one German soldier assisted.
Romania (1940-1944) - The Iron Guard turned more and more into a pro-Nazi and pro-German movement, and took power in September 1940 when Ion Antonescu forced King Carol II to abdicate. However, the cohabitation between the Iron Guard and Ion Antonescu was short-lived. The Antonescu regime that followed hardly qualifies as fascist, as it did not have a clear political program or party. It was rather a military dictatorship. The regime was characterized by nationalism, anti-semitism, and anti-communism, but had no social program. Despite the Iasi pogrom and a near-liquidation of the Jews of many parts of Moldavia, the regime ultimately refused to send the Romanian Jews to German death camps. The regime was overturned on the 23rd of August 1944 by a coup led by King Mihai of Romania. Independent State of Croatia (1941-1945) - Poglavnik Ante Pavelic, leader of the infamous Ustase movement, came to power in 1941 as the Croatian puppet leader under the control of Nazi Germany. Under the indirect control of Germany, the Ustase regime was based heavily upon both clerical fascism and the Italian model of fascism, with elements of racial integrity and organic nationalism drawn from Nazism. Norway (1943-1945) - Vidkun Quisling had staged a coup d'état during the German invasion on April 9th, 1940. This first government was replaced by a Nazi puppet government under his leadership from February 1st, 1943. His party had never had any substantial support in Norway, undermining his attempts to emulate the Italian fascist state. Hungary (1932-1945) - By 1932, support for right-wing ideology, embodied by Gyula Gömbös, had reached the point where Hungarian Regent Miklós Horthy could not postpone appointing a fascist prime minister. Horthy also showed signs of admiring the efficiency and conservative leanings of the Italian fascist state under Mussolini and was not too reluctant to appoint a fascist government (with terms for the extent of Horthy's power). Horthy would keep control over the mainstream fascist movement in Hungary until near the end of the Second World War. Ferenc Szálasi headed the extremist Arrow Cross party, which had been banned until German pressure lifted the ban. In 1944, with German support, he replaced Admiral Miklós Horthy as Head of State, following Horthy's attempt to have Hungary change sides. The regime changed to a system more in line with Nazism and would remain this way until the capture of Budapest by Soviet troops. Starting in 1938, several racial laws were passed by the regime, and over 400,000 Jews were sent by Hungary to German death camps from 1941 to 1944. Argentina (1946-1955 and 1973-1974) - Juan Perón admired Mussolini and established his own pseudo-fascist regime. After he died, his third wife and vice-president Isabel Perón was deposed by a military junta. Similarities are best drawn, though, with the Vargas regime of Brazil. South Africa (1948-1994) - Many scholars have labelled the apartheid system built by Malan and Verwoerd as a type of fascism. Whether it was a fascist regime or an example of a socially conservative administration with excessive powers is hotly debated. Racial and nationalist ideas were implanted inside the South African regime; however, the economic structure of the country was not as regulated as that of a typically fascist state. Guatemala (1953-1980s) - Mario Sandoval Alarcón, a self-declared fascist, headed the National Liberation Movement after a coup d'état overthrew the democratic government of Col. Jacobo Arbenz.
Sandoval became known as the "godfather of the death squads" during the Guatemalan military's 30-year counter-insurgency campaign and at one point served as Guatemala's vice president. Rhodesia (1965-1978) - The racial segregation system established under Ian Smith is similarly considered by some to be a form of fascism. See the comments on South Africa above. Lebanon (1982-1988) - The right-wing Christian Phalangist Party, backed by its own private army and inspired by the Spanish Falangists, was nominally in power in the country during the 1980s but had limited authority over the highly factionalised state, two-thirds of which was occupied by Israeli and Syrian troops. Phalangists, trained and supported by Israel, are alleged to have carried out the Sabra and Shatila Massacre in 1982. Iran (1950-1953) - Under the Iranian National Front, during the regime of Mohammad Mossadegh, attacks on the political left were led by right-wing groups with fascistic elements including the Iranian Nation Party, led by Dariush Forouhar; the Sumka (The National Socialist Iranian Workers Party) led by Dr. Davud Monshizadeh; and Kabud (Iranian Nazi Party) founded by Habibollah Nobakht. Fascism in the United States? This idea was first brought up in the cautionary novel It Can't Happen Here by Sinclair Lewis. Cases have been made both for and against this allegation on all sides of the political spectrum. For example, there are those on the right who claim that the US has been Fascist since the time of Franklin D. Roosevelt. Some on the political left see fascism in authoritarian policies of various Republican administrations or the Christian Right. Few scholars take these claims seriously (see Neo-Fascism). In 1934, retired General Smedley Butler testified to the McCormack-Dickstein Committee that he had been approached by a group of wealthy business interests, led by the Du Pont and J. P. Morgan industrial empires, to orchestrate a fascist coup against Roosevelt. The alleged coup attempt has come to be known as the Business Plot. Neo-Fascism Contemporary neo-fascism and allegations of neofascism are covered in a number of other articles rather than on this page: * See: Neo-Fascism; Neo-Nazism; Neofascism and religion; Christian Identity; Creativity Movement; National Alliance; Nouvelle Droite; American Nazi Party; Alain de Benoist; William Luther Pierce; George Lincoln Rockwell. Fascist mottos and sayings * Me ne frego, literally "I don't care," closer in meaning to "I don't give a damn": the Italian Fascist motto. * Libro e moschetto - fascista perfetto, "The book and the musket - the perfect Fascist." * Viva la Morte, "Long live death (sacrifice)." * The above-mentioned Tutto nello Stato, niente al di fuori dello Stato, nulla contro lo Stato, "Everything in the State, nothing outside the State, nothing against the State." * Credere, Obbedire, Combattere ("Believe, Obey, Fight") Related topics * Fascio (usage 1890s to World War I) * The Manifesto of the Fascist Struggle * George Seldes, early reporter of US fascism. * Horst Wessel Lied, a German song that encapsulates much of Fascist ideology. * Fascist symbolism * Japanese nationalism, Japanese radical right-nationalist ideology from World War II to the present day. References * Hitler, Adolf. Mein Kampf (1992). London: Pimlico. ISBN 071265254X * "Labor Charter" (1927-1934) * Mussolini, Benito. Doctrine of Fascism, which was published as part of the entry for fascismo in the Enciclopedia Italiana, 1932. * Sorel, Georges. Reflections on Violence.
* Wallace, Henry. "The Dangers of American Fascism". The New York Times, Sunday, 9 April 1944. General bibliography * Hughes, H. Stuart. 1953. The United States and Italy. Cambridge, MA: Harvard University Press. * Payne, Stanley G. 1995. A History of Fascism, 1914-45. Madison, Wisc.: University of Wisconsin Press. * Eatwell, Roger. 1996. Fascism: A History. New York: Allen Lane. Bibliography on Fascist ideology * Laqueur, Walter. 1996. Fascism: Past, Present, Future. New York: Oxford University Press. * Griffin, Roger. 2000. "Revolution from the Right: Fascism," chapter in David Parker (ed.), Revolutions and the Revolutionary Tradition in the West 1560-1991. London: Routledge. * Schapiro, J. Salwyn. 1949. Liberalism and The Challenge of Fascism: Social Forces in England and France (1815-1870). New York: McGraw-Hill. * Laclau, Ernesto. 1977. Politics and Ideology in Marxist Theory: Capitalism, Fascism, Populism. London: NLB/Atlantic Highlands Humanities Press. * Sternhell, Zeev, with Mario Sznajder and Maia Asheri. [1989] 1994. The Birth of Fascist Ideology: From Cultural Rebellion to Political Revolution. Trans. David Maisel. Princeton, NJ: Princeton University Press. * Fritzsche, Peter. 1990. Rehearsals for Fascism: Populism and Political Mobilization in Weimar Germany. New York: Oxford University Press. ISBN 0195057805 * Gentile, Emilio. 2002. Fascismo. Storia ed interpretazione. Roma-Bari: Giuseppe Laterza & Figli. Bibliography on international fascism * Coogan, Kevin. 1999. Dreamer of the Day: Francis Parker Yockey and the Postwar Fascist International. Brooklyn, N.Y.: Autonomedia. * Griffin, Roger. 1991. The Nature of Fascism. New York: St. Martin's Press. * Paxton, Robert O. 2004. The Anatomy of Fascism. New York: Alfred A. Knopf. * Weber, Eugen. [1964] 1982. Varieties of Fascism: Doctrines of Revolution in the Twentieth Century. New York: Van Nostrand Reinhold Company. (Contains chapters on fascist movements in different countries.) Further reading * Seldes, George. 1935. Sawdust Caesar: The Untold History of Mussolini and Fascism. New York and London: Harper and Brothers. * Reich, Wilhelm. 1970. The Mass Psychology of Fascism. New York: Farrar, Straus & Giroux. * Mises, Ludwig von. 1944. Omnipotent Government: The Rise of the Total State and Total War. Grove City: Libertarian Press. External links Wikiquote has a collection of quotations by or about: Fascism * The Doctrine of Fascism by Benito Mussolini (complete text) * Fascism and Zionism - From The Hagshama Department - World Zionist Organization * Fascism Part I - Understanding Fascism and Anti-Semitism * Eternal Fascism: Fourteen Ways of Looking at a Blackshirt - Umberto Eco's list of 14 characteristics of Fascism, originally published 1995. * Site of an Italian fascist party (in Italian and German) * Site dedicated to the period of fascism in Greece (1936-1941) * Text of the papal encyclical Quadragesimo Anno. * The Economics of Fascism, Supporters Summit 2005, October 7-8, 2005, Mises Institute, Auburn, Alabama. From checker at panix.com Sun Sep 25 20:01:42 2005 From: checker at panix.com (Premise Checker) Date: Sun, 25 Sep 2005 16:01:42 -0400 (EDT) Subject: [Paleopsych] Wikipedia: Corporatism Message-ID: Corporatism http://en.wikipedia.org/wiki/Corporatism Historically, corporatism or corporativism (Italian corporativismo) is a political system in which legislative power is given to corporations that represent economic, industrial and professional groups.
Unlike pluralism, in which many groups must compete for control of the state, in corporatism, certain unelected bodies take a critical role in the decision-making process. A corporatist state ... does not simply license the existence of organised interest groups but incorporates them into its own centralised hierarchical system of regulation. In doing so, the state simultaneously recognises its dependence upon these associations and seeks to use them as an instrument in the pursuit and legitimation of its policies.[1] The word "corporatism" is derived from the Latin word for body, corpus. This original meaning was not connected with the specific notion of a business corporation, but rather a general reference to any incorporated body. Its usage reflects medieval European concepts of a whole society in which the various components each play a part in the life of the society, just as the various parts of the body serve specific roles in the life of a body. According to various theorists, corporatism was an attempt to create a "modern" version of feudalism by merging the "corporate" interests with those of the state. (Also see neofeudalism.) Political scientists may also use the term corporatism to describe a practice whereby an authoritarian state, through the process of licensing and regulating officially-incorporated social, religious, economic, or popular organizations, effectively co-opts their leadership or circumscribes their ability to challenge state authority by establishing the state as the source of their legitimacy. This usage is particularly common in the area of East Asia studies, and is sometimes also referred to as state corporatism. Modern popular usage of the term is more pejorative, emphasizing the role of business corporations in government decision-making at the expense of the public. The power of business to affect government legislation through lobbying and other avenues of influence in order to promote their interests is usually seen as detrimental to those of the public. In this respect, corporatism may be characterized as an extreme form of regulatory capture, and is also termed corporatocracy. If there is substantial military-corporate collaboration, it is often called militarism or the military-industrial complex. Some modern political scientists and sociologists use the term neo-corporatism to describe a process of bargaining between labor, capital, and government identified as occurring in some small, open economies (particularly in Europe) as a means of distinguishing their observations from popular pejorative usage and to highlight ties to classical theories. Contents * 1 Classical theoretical origins * 2 Neo-corporatism * 3 State corporatism * 4 Criticism + 4.1 Anti-Corporate Criticism + 4.2 Free Market criticisms * 5 Related Topics * 6 External links * 7 Sources Classical theoretical origins Corporatism is a form of class collaboration put forward as an alternative to class conflict, and was first proposed in Pope Leo XIII's 1891 encyclical Rerum Novarum, which influenced Catholic trade unions that organised in the early twentieth century to counter the influence of trade unions founded on a socialist ideology. Theoretical underpinning came from the medieval traditions of guilds and craft-based economics. Gabriele D'Annunzio and anarcho-syndicalist Alceste de Ambris incorporated principles of corporative philosophy in their Constitution of Fiume.
One early and important theorist of corporatism was Adam Müller, an advisor to Prince Metternich in what is now eastern Germany and Austria. Müller propounded his views as an antidote to the twin "dangers" of the egalitarianism of the French Revolution and the laissez-faire economics of Adam Smith. In Germany and elsewhere there was a distinct aversion among rulers to allow unrestricted capitalism, owing to the feudalist and aristocratic tradition of giving state privileges to the wealthy and powerful. Under Fascism in Italy, business owners, employees, trades-people, professionals, and other economic classes were organized into 22 guilds, or associations, known as "corporations" according to their industries, and these groups were given representation in a legislative body known as the Camera dei Fasci e delle Corporazioni. Similar ideas were also ventilated in other European countries at the time. For instance, Austria under the Dollfuß dictatorship had a constitution modelled on that of Italy; but there were also conservative philosophers and/or economists advocating the corporate state, for example Othmar Spann. In Portugal, a similar ideal, but based on bottom-up individual moral renewal, inspired Salazar to work towards corporatism. He wrote the Portuguese Constitution of 1933, which is credited as the first corporatist constitution in the world. Neo-corporatism In the recent literature of political science and sociology, corporatism (or neo-corporatism) lacks negative connotation. In the writings of Philippe Schmitter, Gerhard Lehmbruch and their followers, "neo-corporatism" refers to social arrangements dominated by tri-partite bargaining between unions, the private sector (capital), and government. Such bargaining is oriented toward (a) dividing the productivity gains created in the economy "fairly" among the social partners and (b) gaining wage restraint in recessionary or inflationary periods. Most political economists believe that such neo-corporatist arrangements are only possible in societies in which labor is highly organized and various labor unions are hierarchically organized in a single labor federation. Such "encompassing" unions bargain on behalf of all workers, and have a strong incentive to balance the employment cost of high wages against the real income consequences of small wage gains. Many of the small, open European economies, such as Sweden, Austria, Norway, Ireland, and the Netherlands fit this classification. In the work of some scholars, such as Peter Katzenstein, neo-corporatist arrangements enable small open economies to effectively manage their relationship with the global economy. The adjustment to trade shocks occurs through a bargaining process in which the costs of adjustment are distributed evenly ("fairly") among the social partners. Examples of modern neocorporatism include the ILO Conference, the Economic and Social Committee of the European Union, the collective agreement arrangements of the Scandinavian countries, the Dutch Poldermodel system of consensus, and the Republic of Ireland's system of Social Partnership. In Australia, the Labor Party governments of 1983-96 fostered a set of policies known as The Accord, under which the Australian Council of Trade Unions agreed to hold back demands for pay increases, the compensation being increased expenditure on the "social wage", Prime Minister Paul Keating's name for broad-based welfare programs.
In Italy, the Carlo Azeglio Ciampi administration inaugurated on July 23, 1993 a concertation (Italian: concertazione) policy of peaceful agreement on salary rates between government, the three main trade unions and the Confindustria employers' federation. Before that, salary increases had always been won through strike action. In 2001 the Silvio Berlusconi administration put an end to concertation. Most theorists agree that neo-corporatism is undergoing a crisis. In many classically corporatist countries, traditional bargaining is on the retreat. This crisis is often attributed to globalization, but this claim is not undisputed. State corporatism While classical corporatism and its intellectual successor, neo-corporatism (and their critics) emphasize the role of corporate bodies in influencing government decision-making, corporatism used in the context of the study of autocratic states, particularly within East Asian studies, usually refers instead to a process by which the state uses officially-recognized organizations as a tool for restricting public participation in the political process and limiting the power of civil society. Under such a system, as described by Jonathan Unger and Anita Chan in their essay China, Corporatism, and the East Asian Model[2], at the national level the state recognizes one and only one organization (say, a national labour union, a business association, a farmers' association) as the sole representative of the sectoral interests of the individuals, enterprises or institutions that comprise that organization's assigned constituency. The state determines which organizations will be recognized as legitimate and forms an unequal partnership of sorts with such organizations. The associations sometimes even get channelled into the policy-making processes and often help implement state policy on the government's behalf. By establishing itself as the arbitrator of legitimacy and assigning responsibility for a particular constituency with one sole organization, the state limits the number of players with which it must negotiate its policies and co-opts their leadership into policing their own members. This arrangement is not limited to economic organizations such as business groups or trade unions; examples can also include social or religious groups. Examples abound, but one such would be the People's Republic of China's Islamic Association of China, in which the state actively intervenes in the appointment of imams and controls the educational contents of their seminaries, which must be approved by the government to operate and which feature courses on "patriotic reeducation".[3] Another example is the phenomenon known as "Japan, Inc.", in which major industrial conglomerates and their dependent workforces were consciously manipulated by the Japanese MITI to maximize post-war economic growth. Criticism Anti-Corporate Criticism Corporatism or neo-corporatism is often used popularly as a pejorative term in reference to perceived tendencies in politics for legislators and administrations to be influenced or dominated by the interests of business enterprises. The influence of other types of corporations, such as labor unions, is perceived to be relatively minor. In this view, government decisions are seen as being influenced strongly by which sorts of policies will lead to greater profits for favored companies. Corporatism is also used to describe a condition of corporate-dominated globalization.
Points enumerated by users of the term in this sense include the prevalence of very large, multinational corporations that freely move operations around the world in response to corporate, rather than public, needs; the push by the corporate world to introduce legislation and treaties which would restrict the abilities of individual nations to restrict corporate activity; and similar measures to allow corporations to sue nations over "restrictive" policies, such as a nation's environmental regulations that would restrict corporate activities. Critics of capitalism often argue that any form of capitalism would eventually devolve into corporatism, due to the concentration of wealth in fewer and fewer hands. A permutation of this term is corporate globalism. John Ralston Saul argues that most Western societies are best described as corporatist states, run by a small elite of professional and interest groups, that exclude political participation from the citizenry. In the United States, corporations representing many different sectors are involved in attempts to influence legislation through lobbying. This is also true of many non-business groups, unions, membership organizations, and non-profits. While these groups have no official membership in any legislative body, they can often wield considerable power over law-makers. In recent times, the profusion of lobby groups and the increase in campaign contributions have led to widespread controversy and the McCain-Feingold Act. American corporatism may be evidenced in the close ties between members of the Bush Administration and many large corporations, such as Halliburton. Some [4] claim that Franklin D. Roosevelt's New Deal programs were an unprecedented jump towards a corporate state. However, this ignores the long history of narrow economic interests controlling the decision-making process in America. Some political activists argue that the political economy of the United States is heading toward fascism, which critics say confuses the historic and contemporary uses of the term corporatism. They often cite a quote on corporatism widely attributed to Mussolini. In an article written by Mussolini, and reportedly found in the 1932 Enciclopedia Italiana, Mussolini allegedly says: "Fascism should more properly be called corporatism, since it is the merger of state and corporate power." However, the quote does not appear in that book, and is arguably inconsistent with, or contradictory to, Mussolini's writings on Corporatism. [5] Free Market criticisms Free Market theorists like Ludwig von Mises would describe corporatism as anathema to their vision of capitalism. In the kind of capitalism such theorists advocate, what has been called the "night-watchman" state, the government's role in the economy is restricted to safeguarding the autonomous operation of the free market. Other critics argue that corporatist arrangements exclude some groups, notably the unemployed, and are thus responsible for high unemployment. This argument goes back to the famous "Logic of Collective Action" by Harvard economist Mancur Olson. However, many critics of free market theories, such as George Orwell, have argued that corporatism (in the sense of an economic system dominated by massive corporations) is the natural result of free market capitalism.
Related Topics * Anti-globalization * Antitrust * Collectivism * Corporate nationalism * Corporate police state * Crony capitalism * Globalization * Plutocracy * Quango * New Deal External links * publiceye.org discusses this article, and provides copious references on the subject * Constitution of Fiume * Rerum Novarum: encyclical of pope Leo XIII on capital and labor * Quadragesimo Anno: encyclical of pope Pius XI on reconstruction of the social order Sources On Neo-Corporatism * Katzenstein, Peter: Small States in World Markets, Ithaca, 1985. * Olson, Mancur: Logic of Collective Action: Public Goods and the Theory of Groups, (Harvard Economic Studies), Cambridge, 1965. * Schmitter, P. C. and Lehmbruch, G. (eds.), Trends toward Corporatist Intermediation, London, 1979. From checker at panix.com Sun Sep 25 20:01:55 2005 From: checker at panix.com (Premise Checker) Date: Sun, 25 Sep 2005 16:01:55 -0400 (EDT) Subject: [Paleopsych] Fascism and Homeland Security Message-ID: Fascism and Homeland Security http://www.oilempire.us/fascism.html Friendly Fascism or Full Strength? Democrats vs. Republicans, neo-liberals vs. neo-conservatives "The illusion of freedom will continue as long as it's profitable to continue the illusion. At the point where the illusion becomes too expensive to maintain, they will just take down the scenery, pull back the curtains, and you will see the brick wall at the back of the theater." -- Frank Zappa Justice Sandra Day O'Connor, a few hours before the Supreme Court's 2001-2002 session, said that Americans are "likely to experience more restrictions on our personal freedom than has ever been the case in our country." "I did not like fascists when I fought them as a diplomat for 23 years and I don't like them now in my own country." retired American diplomat Joe Wilson (smeared by the Bush gang) see interview at [1]http://www.dailykos.com/story/2005/2/9/94615/61143 [10]9/11 Cheney's crime, not a failure - the American Reichstag Fire [11]peak oil the real connection between Iraq and 9/11 [12]World War IV "the war that will not end in our lifetime" [13]fascism and Homeland Security, the war on freedom [14]media manipulates minds, left gatekeepers, psyops [15]stolen elections ballot machines, lone gunmen, plane crashes [16]Bush - oil we are saying, give impeachment a chance [17]permatopia: permaculture solutions to Peak Oil and climate change - toward a sustainable civilization personal, local, global responses to Peak Oil and climate change _________________________________________________________________ restoring civil liberties would require exposing 9/11: the pretext for the war on freedom [18]Bill of Rights - R.I.P. [19]Homeland inSecurity and Total(itarian) Information Awareness Michael [20]Chertoff: the new Homeland Security Secretary (Chertoff means "devil" in Russian) John [21]Negroponte: Death Squad supporter now in charge of US spy agencies who sent [22]anthrax letters to the media and to the Democrats in Congress to "enable" passage of the [23]U.S.A.P.A.T.R.I.O.T. Act? Uncle Sam's Army Packages of Anthrax Terrorized Representatives Into Obliterating Tolerance [24]Papers, Please!
national ID cards and the surveillance society [25]Geoslavery: how the global positioning system will track everyone, all the time [26]Red Alert: FEMA camps, martial law and indefinite detention without trial Friendly [27]Fascism or Full Strength Fascism? Democrats vs. Republicans, neo-liberals vs. neo-conservatives [28]Brazil (the movie) Monty Python meets the Department of Homeland Security Orwell's "[29]1984" [30]torture, "rendition" and "non lethal" weapons The War on [31]Dissent and the abolition of First Amendment rights Radio Frequency ID ([32]RFID) chips: a technological advancement over Nazi tattooing of prisoners (coming soon to your drivers license, your grocery store, your cash currency and possibly under your very skin) a core part of the global surveillance society [33]Biometrics the [34]Police State is not on the horizon, it is already here Slaying The Beast Before The Election Is Canceled, February 3, 2004 From the Streets of Little Beirut By Glen Yeadon [35]http://portland.indymedia.org/en/2004/02/279760.shtml There are three unmistakable characteristics of a police state. The federal or state police serve the interest of the government rather than the interest of the people. The police enforce the policies of the government instead of responding primarily to criminal behavior. Finally, the police spy on private citizens. All these conditions exist today in the United States. [36]United Nations Universal Declaration of Human Rights http://www.maebrussell.com/Garrison/Garrison%20Playboy%20Intvw%202.html also archived at http://www.oilempire.us/reichstag-fire.html Jim Garrison, New Orleans District Attorney (subject of Oliver Stone's film JFK): "fascism will come to America in the name of national security." "Fascism should more properly be called corporatism, since it is the merger of state and corporate power." - Benito Mussolini _________________________________________________________________ Peak Oil and fascism This explanation of how fascist fringe parties come to power is accurate. The article's mention of the neo-cons misses the fact that their program is definitely related to their understanding of the implications of Peak Oil (as described masterfully in "[38]Crossing the Rubicon"). [39]http://newerainvestor.blogspot.com/2005/04/politics-of-peak-oil-and-fascism.html Tuesday, April 26, 2005 The Politics of Peak Oil and Fascism _________________________________________________________________ [40]http://peakenergy.blogspot.com/2005/05/voyage-of-beagle.html list of recent articles detailing America's slide into fascism _________________________________________________________________ [41]http://urbansurvival.com/week.htm July 5, 2005 Meantime, back at the Police State With the new National Security Service signed into law: We can expect a large number of side shows to be developed in coming months because of the long wave economic train wreck forming up. Specifically, as the economy softens, it will become more and more difficult for the Powers That Be (PTB) to hold onto the reins of power. As a result, we read with interest this morning that the Pentagon's planners are considering beefing up domestic security.
This is almost amusing to the well-read student of history: Clearly the agenda has nothing to do with terrorism and is more likely to be the formation of military strength internally to attempt maintenance of order after the markets crash and lynch mobs take to the street looking for banksters and crooked politicians who are squandering the public's trust as we speak. It will also be used as the forerunner of the draft, which we consider a near certainty by mid-2006. Look for news out of Afghanistan and Iraq to be main drivers, but the not-so-well hidden agenda is building up a protection unit for the PTB to keep "order."
_________________________________________________________________

[42]http://www.theregister.co.uk/2005/04/12/bank_regs_boost_data_mining/
Bush Admin demands more banking data
By Thomas C Greene in Washington
Published Tuesday 12th April 2005 16:02 GMT

The Bush Administration plans to extend its mighty neural networks to international banking in hopes of discovering terrorist activity, the New York Times reported in its Sunday edition. The scheme would allow the US Treasury Department to maintain databases of international money transfers to and from the USA, creating an additional regulatory burden on banks struggling to comply with myriad regulations already imposed by the so-called "Patriot" Act. The result of this additional data mining, the article suggests, will be a flood of largely irrelevant data to federal agencies already awash in irrelevant data. But the Administration's overall approach has been to get all the data it can now, and figure out how it might be used to catch terrorists later.

Since the Administration's grand schemes for monitoring the public's every move, such as the MATRIX and Total Information Awareness (TIA), fell into disrepute, it appears to be taking a piecemeal approach, building its surveillance society one step at a time. We now have, or have in the works: biometric, RFID drivers' licenses following a federal standard with state motor vehicle databases linked electronically, as required by the Real-ID Act; biometric, RFID passports; bulk demands by the federal government for airline passenger records; the CAPPS - now called "Secure Flight" - airline data mining scheme; and banking and other financial data in federal hands, enriched with personal data from retail privacy invaders like ChoicePoint.

All of these components can, and certainly will, be correlated, although the government hopes that the public will perceive them as discrete elements in the so-called war on terror, and not contemplate the eventual, cumulative effect of all this activity. The public has rejected big gestures like TIA and MATRIX, but seems tolerant of incremental developments, like the new banking regulations now being worked out. With enough such steps, the TIA dream can be realized without public resistance. So far, it appears to be on track.
_________________________________________________________________

Remarks by Senator Byrd
Delivered on March 1, 2005
Stopping a Strike at the Heart of the Senate

witness how men with motives and a majority can manipulate law to cruel and unjust ends. Historian Alan Bullock writes that Hitler's dictatorship rested on the constitutional foundation of a single law, the Enabling Law. Hitler needed a two-thirds vote to pass that law, and he cajoled his opposition in the Reichstag to support it.
Bullock writes that "Hitler was prepared to promise anything to get his bill through, with the appearances of legality preserved intact." And he succeeded. Hitler's originality lay in his realization that effective revolutions, in modern conditions, are carried out with, and not against, the power of the State: the correct order of events was first to secure access to that power and then begin his revolution. Hitler never abandoned the cloak of legality; he recognized the enormous psychological value of having the law on his side. Instead, he turned the law inside out and made illegality legal. _________________________________________________________________ March 7, 2005 Columbus Free Press / Ohio [43]http://www.freepress.org/columns/display/7/2005/1084 Senator Byrd is Correct to Equate Bush With Hitler by Harvey Wasserman The U.S. Senate's senior Constitutional scholar has correctly equated Bush with Hitler, and the usual attack dogs are howling. But they are wrong, and Americans must now face the harsh realities of an increasingly fascist and totalitarian GOP. Octogenarian Senator Robert Byrd of West Virginia made the equation in the context of Bush's attack on Senate procedures which might slow or halt his on-going attempt to pack the courts with extreme right-wing fanatics. Byrd said Bush's moves to destroy time-honored Senate rules parallel Hitler's ramming fascist legislation through his gutted Reichstag. .... While Bush advocates for "democracy" overseas, the GOP is crushing it at home. These judicial nominees mean to further solidify Republican control of the court system, which they have added to their grip on the Executive, both houses of Congress and the media. The GOP is also gutting safeguards within the FBI and CIA, turning them into a [44]personal police force that could parallel Hitler's Gestapo. Because the regime wraps itself in the rhetoric of our democratic roots, it's emotionally difficult for Americans to equate Bush with Hitler. He is not, after all, running [45]death camps like the ones Hitler used to exterminate millions of Jews, Gypsies, gays, unionists, Jehovah Witnesses, the elderly and infirm, birth defected and handicapped. But the distinction may be lost on the tens of thousands of Iraqis who have died in the wholesale slaughter there, and whose land has been carpeted with radioactive [46]depleted uranium which will kill for centuries. Bush is now operating a classic concentration camp in [47]Guantanamo. This infamous holding center operates entirely outside the rule of law, with prisoners held without charge, without evidence, without access to attorneys, family or the outside world. .... Both Mussolini's Fascist's and Hitler's Nazis used acts of terror and alleged terror to grab absolute power. Ranting at Bolshevism as the GOP now does against Islam, the Nazis used the burning of the [48]Reichstag much as the GOP has capitalized on the terror attacks of [49]September 11. .... Like Hitler, Bush believes he talks to and for God. He has said at least twice in public that he does not oppose dictatorship as long as he can be the dictator. His family has long, well-documented financial and political ties to the Nazi regime, as well as to [50]Osama bin Laden and a long list of [51]oil-rich Islamic fundamentalists. Senator Byrd's invocation of the Nazis to describe the Bush regime may be considered impolitic. But it's folly to avoid the important parallels. 
By all accounts American democracy is hanging by a thin thread which Bush/[52]Rove is laboring mightily to cut. Sen. Robert Byrd is a conservative, uniquely learned man. When he equates Bush with Hitler, he speaks with great sadness and scholarship -- and must be heeded.

Woody Guthrie

[53]http://keyholepublishing.com/index.html
The Unveiling of the National Security State (7/14/04)
by Richard M. Dolan
copyright © 2004 by Richard M. Dolan. All rights reserved.

All things change, including our time-honored system of government. We have entered into a new era, marked by the existence of an omnipresent state, controlled by the very few, bound by no law but its own. Welcome to the New World Order. A new American order is in place. Better get used to it. Or else.

Five centuries ago, Niccolo Machiavelli explained how to undertake a revolution from above without most people even noticing. In his Discourses on Livy, he wrote that one "must at least retain the semblance of the old forms; so that it may seem to the people that there has been no change in the institutions, even though in fact they are entirely different from the old ones." That is, keep the old government structures, even while you make profound changes to the actual system, because the appearances are all that most people will notice. So today, instead of seeing the corpse of a republic in which we live, we see merely the dead man's clothing. Those clothes look the same as ever, albeit increasingly worn. We have had a quiet revolution that has not eliminated our Congressional representatives - it's simply made them largely irrelevant.

It's been a long journey to our current state of affairs. Not surprisingly, wars have been a major catalyst. Most wars fought by the United States have added power to the executive branch, while whittling power away from the legislature. This includes wars fought for high-minded purposes such as the Civil War and World War Two, mindless bloodbaths like World War One, and the dozens of undeclared wars over the past half-century. I would select World War Two - and its immediate aftermath - as the real turning point when the American Dream went awry. This is ironic, since it was at that moment when America first sat atop the world at the pinnacle of power. And therein lies the problem. For this was when the American republic began its transformation into a national security state. Or, to put it another way, into an Empire. ...
_________________________________________________________________

Creeping Fascism

It is becoming increasingly difficult to avoid the conclusion that conservatives, subtly but unmistakably, are fomenting violence against liberals for the 2004 election. And if they succeed in doing so, America will be facing what has always been considered unthinkable here: a serious manifestation of fascism. ... I concluded previously that it seemed likely that any manifestation of fascism was some ways off, perhaps as long as a generation, if these trends were left unchecked. Now it appears that the timetable is moving much faster than that -- and countervailing forces are so far slow in coalescing, in no small part because of the utter, Stalinist ruthlessness of their opponents.
[54]http://dneiwert.blogspot.com/2003_11_09_dneiwert_archive.html#106873808557482314
Rush, Newspeak and Fascism: An exegesis
by David Neiwert
POSTED AUGUST 30, 2003 -- [55]http://www.cursor.org/stories/fascismintroduction.php
_________________________________________________________________

http://www.prisonplanet.com/122203fascistamerica.html
PRISON PLANET.com
Juan Bosch: His Prophecy of a Fascist America
William Hughes

A long time ago, I remember reading a disturbing magazine article that was written by Juan Bosch. He was the former president of the Dominican Republic ... Bosch, like any true son of a nation, stood with his people against the grasping Oligarchs. He favored modest land reform in his poverty-stricken country. That proved a little too much for the fat cat sugar plantation owners and the ambitious army generals. He was ousted after only seven months in office, in 1963, in a CIA-assisted military coup. In 1965, President Lyndon B. Johnson sent in 20,000 Marines, on the pretext that the Dominican Republic might go communist. He feared it was another Cuba in the making. Sure, LBJ! Bosch died in 2001.

Anyway, getting back to Bosch's commentary. He had predicted that America would one day look like a U.S.-occupied Dominican Republic! His point was, and I'm sorry I can't remember his exact words, that America would be corrupted by its underhanded role in replacing his and other governments around the globe. He saw a time when the U.S. would have its soldiers patrolling its own cities, airports, railroad stations, ports, public buildings and its capital. And, also, that its citizens' liberties would be unduly restricted by the federal government that had lost its moral compass. Sadly, that time is coming fast!

Roman "Fasces" on the wall behind the Speaker's Podium, US House of Representatives chamber, United States Capitol. The "Fasces" was a symbol of imperial power in ancient Rome. A bundle of sticks bound together, it represented the "many bound together as one." "Fasces" is the root from which the term "fascism" comes.

"Of all the enemies to public liberty war is, perhaps, the most to be dreaded because it comprises and develops the germ of every other. War is the parent of armies; from these proceed debts and taxes. And armies, and debts, and taxes are the known instruments for bringing the many under the domination of the few. In war, too, the discretionary power of the Executive is extended. Its influence in dealing out offices, honors, and emoluments is multiplied; and all the means of seducing the minds, are added to those of subduing the force of the people. The same malignant aspect in republicanism may be traced in the inequality of fortunes, and the opportunities of fraud, growing out of a state of war...and in the degeneracy of manners and morals, engendered by both. No nation could preserve its freedom in the midst of continual warfare." -- James Madison, April 20, 1795
_________________________________________________________________

"A clique of U.S. industrialists is hell-bent to bring a fascist state to supplant our democratic government and is working closely with the fascist regime in Germany and Italy. I have had plenty of opportunity in my post in Berlin to witness how close some of our [56]American ruling families are to the Nazi regime. They extended aid to help Fascism occupy the seat of power, and they are helping to keep it there." -- William E. Dodd, U.S.
Ambassador to Germany, 1937
_________________________________________________________________

[57]http://www.politicalstrategy.org/2003_09_23_weblog_archive.php

No one knows the Bush extremists better than Ray McGovern, a former senior CIA officer and personal friend of George Bush senior, the President's father. In Breaking The Silence, he tells me: "They were referred to in the circles in which I moved when I was briefing at the top policy levels as 'the crazies'."

"Who referred to them as 'the crazies'?" I asked.

"All of us... in policy circles as well as intelligence circles... There is plenty of documented evidence that they have been planning these attacks for a long time and that 9/11 accelerated their plan. (The weapons of mass destruction issue) was all contrived, so was the connection of Iraq with al Qaeda. It was all PR... Josef Goebbels had this dictum: If you say something often enough, the people will believe it."

He added: "I think we ought to be all worried about fascism (in the United States)."
_________________________________________________________________

The terrorist threat is either hype or a neo-Con false flag project. There is no al-Qaeda. You are being played by a government jealous of your desire to focus on something other than the government itself. All of these warnings are intended to keep your mind focused on what Bush wants your mind focused on - Bush himself. This is classically typical of a fascist regime. It may sound outlandish, but it is true. Nothing about fascism is normal. Don't expect the normal, expect the twisted.
[58]http://www.stop-fascism.org/dec__26_-_jan__1.htm
_________________________________________________________________

[59]http://www.secularhumanism.org/library/fi/britt_23_2.htm
The 14 Defining Characteristics Of Fascism
By Dr. Lawrence Britt
Free Inquiry Magazine / Spring 2003

Fascism Anyone?
Laurence W. Britt

The following article is from Free Inquiry magazine, Volume 23, Number 2. Free Inquiry readers may pause to read the "Affirmations of Humanism: A Statement of Principles" on the inside cover of the magazine. To a secular humanist, these principles seem so logical, so right, so crucial. Yet, there is one archetypal political philosophy that is anathema to almost all of these principles. It is fascism. And fascism's principles are wafting in the air today, surreptitiously masquerading as something else, challenging everything we stand for.

The cliché that people and nations learn from history is not only overused, but also overestimated; often we fail to learn from history, or draw the wrong conclusions. Sadly, historical amnesia is the norm. We are two-and-a-half generations removed from the horrors of Nazi Germany, although constant reminders jog the consciousness. German and Italian fascism form the historical models that define this twisted political worldview. Although they no longer exist, this worldview and the characteristics of these models have been imitated by protofascist [1] regimes at various times in the twentieth century. Both the original German and Italian models and the later protofascist regimes show remarkably similar characteristics. Although many scholars question any direct connection among these regimes, few can dispute their visual similarities. Beyond the visual, even a cursory study of these fascist and protofascist regimes reveals the absolutely striking convergence of their modus operandi.
This, of course, is not a revelation to the informed political observer, but it is sometimes useful in the interests of perspective to restate obvious facts and in so doing shed needed light on current circumstances. For the purpose of this perspective, I will consider the following regimes: Nazi Germany, Fascist Italy, Franco's Spain, Salazar's Portugal, Papadopoulos's Greece, Pinochet's Chile, and Suharto's Indonesia. To be sure, they constitute a mixed bag of national identities, cultures, developmental levels, and history. But they all followed the fascist or protofascist model in obtaining, expanding, and maintaining power. Further, all these regimes have been overthrown, so a more or less complete picture of their basic characteristics and abuses is possible. Analysis of these seven regimes reveals fourteen common threads that link them in recognizable patterns of national behavior and abuse of power. These basic characteristics are more prevalent and intense in some regimes than in others, but they all share at least some level of similarity. 1. Powerful and continuing expressions of nationalism. From the prominent displays of flags and bunting to the ubiquitous lapel pins, the fervor to show patriotic nationalism, both on the part of the regime itself and of citizens caught up in its frenzy, was always obvious. Catchy slogans, pride in the military, and demands for unity were common themes in expressing this nationalism. It was usually coupled with a suspicion of things foreign that often bordered on xenophobia. 2. Disdain for the importance of human rights. The regimes themselves viewed human rights as of little value and a hindrance to realizing the objectives of the ruling elite. Through clever use of propaganda, the population was brought to accept these human rights abuses by marginalizing, even demonizing, those being targeted. When abuse was egregious, the tactic was to use secrecy, denial, and disinformation. 3. Identification of enemies/scapegoats as a unifying cause. The most significant common thread among these regimes was the use of scapegoating as a means to divert the people's attention from other problems, to shift blame for failures, and to channel frustration in controlled directions. The methods of choice--relentless propaganda and disinformation--were usually effective. Often the regimes would incite "spontaneous" acts against the target scapegoats, usually communists, socialists, liberals, Jews, ethnic and racial minorities, traditional national enemies, members of other religions, secularists, homosexuals, and "terrorists." Active opponents of these regimes were inevitably labeled as terrorists and dealt with accordingly. 4. The supremacy of the military/avid militarism. Ruling elites always identified closely with the military and the industrial infrastructure that supported it. A disproportionate share of national resources was allocated to the military, even when domestic needs were acute. The military was seen as an expression of nationalism, and was used whenever possible to assert national goals, intimidate other nations, and increase the power and prestige of the ruling elite. 5. Rampant sexism. Beyond the simple fact that the political elite and the national culture were male-dominated, these regimes inevitably viewed women as second-class citizens. They were adamantly anti-abortion and also homophobic. 
These attitudes were usually codified in Draconian laws that enjoyed strong support by the orthodox religion of the country, thus lending the regime cover for its abuses. 6. A controlled mass media. Under some of the regimes, the mass media were under strict direct control and could be relied upon never to stray from the party line. Other regimes exercised more subtle power to ensure media orthodoxy. Methods included the control of licensing and access to resources, economic pressure, appeals to patriotism, and implied threats. The leaders of the mass media were often politically compatible with the power elite. The result was usually success in keeping the general public unaware of the regimes' excesses. 7. Obsession with national security. Inevitably, a national security apparatus was under direct control of the ruling elite. It was usually an instrument of oppression, operating in secret and beyond any constraints. Its actions were justified under the rubric of protecting "national security," and questioning its activities was portrayed as unpatriotic or even treasonous. 8. Religion and ruling elite tied together. Unlike communist regimes, the fascist and protofascist regimes were never proclaimed as godless by their opponents. In fact, most of the regimes attached themselves to the predominant religion of the country and chose to portray themselves as militant defenders of that religion. The fact that the ruling elite's behavior was incompatible with the precepts of the religion was generally swept under the rug. Propaganda kept up the illusion that the ruling elites were defenders of the faith and opponents of the "godless." A perception was manufactured that opposing the power elite was tantamount to an attack on religion. 9. Power of corporations protected. Although the personal life of ordinary citizens was under strict control, the ability of large corporations to operate in relative freedom was not compromised. The ruling elite saw the corporate structure as a way to not only ensure military production (in developed states), but also as an additional means of social control. Members of the economic elite were often pampered by the political elite to ensure a continued mutuality of interests, especially in the repression of "have-not" citizens. 10. Power of labor suppressed or eliminated. Since organized labor was seen as the one power center that could challenge the political hegemony of the ruling elite and its corporate allies, it was inevitably crushed or made powerless. The poor formed an underclass, viewed with suspicion or outright contempt. Under some regimes, being poor was considered akin to a vice. 11. Disdain and suppression of intellectuals and the arts. Intellectuals and the inherent freedom of ideas and expression associated with them were anathema to these regimes. Intellectual and academic freedom were considered subversive to national security and the patriotic ideal. Universities were tightly controlled; politically unreliable faculty harassed or eliminated. Unorthodox ideas or expressions of dissent were strongly attacked, silenced, or crushed. To these regimes, art and literature should serve the national interest or they had no right to exist. 12. Obsession with crime and punishment. Most of these regimes maintained Draconian systems of criminal justice with huge prison populations. The police were often glorified and had almost unchecked power, leading to rampant abuse. 
"Normal" and political crime were often merged into trumped-up criminal charges and sometimes used against political opponents of the regime. Fear, and hatred, of criminals or "traitors" was often promoted among the population as an excuse for more police power. 13. Rampant cronyism and corruption. Those in business circles and close to the power elite often used their position to enrich themselves. This corruption worked both ways; the power elite would receive financial gifts and property from the economic elite, who in turn would gain the benefit of government favoritism. Members of the power elite were in a position to obtain vast wealth from other sources as well: for example, by stealing national resources. With the national security apparatus under control and the media muzzled, this corruption was largely unconstrained and not well understood by the general population. 14. Fraudulent elections. Elections in the form of plebiscites or public opinion polls were usually bogus. When actual elections with candidates were held, they would usually be perverted by the power elite to get the desired result. Common methods included maintaining control of the election machinery, intimidating and disenfranchising opposition voters, destroying or disallowing legal votes, and, as a last resort, turning to a judiciary beholden to the power elite. Does any of this ring alarm bells? Of course not. After all, this is America, officially a democracy with the rule of law, a constitution, a free press, honest elections, and a well-informed public constantly being put on guard against evils. Historical comparisons like these are just exercises in verbal gymnastics. Maybe, maybe not. Note 1. Defined as a "political movement or regime tending toward or imitating Fascism"--Webster's Unabridged Dictionary. References Andrews, Kevin. Greece in the Dark. Amsterdam: Hakkert, 1980. Chabod, Frederico. A History of Italian Fascism. London: Weidenfeld, 1963. Cooper, Marc. Pinochet and Me. New York: Verso, 2001. Cornwell, John. Hitler as Pope. New York: Viking, 1999. de Figuerio, Antonio. Portugal--Fifty Years of Dictatorship. New York: Holmes & Meier, 1976. Eatwell, Roger. Fascism, A History. New York: Penguin, 1995. Fest, Joachim C. The Face of the Third Reich. New York: Pantheon, 1970. Gallo, Max. Mussolini's Italy. New York: MacMillan, 1973. Kershaw, Ian. Hitler (two volumes). New York: Norton, 1999. Laqueur, Walter. Fascism, Past, Present, and Future. New York: Oxford, 1996. Papandreau, Andreas. Democracy at Gunpoint. New York: Penguin Books, 1971. Phillips, Peter. Censored 2001: 25 Years of Censored News. New York: Seven Stories. 2001. Sharp, M.E. Indonesia Beyond Suharto. Armonk, 1999. Verdugo, Patricia. Chile, Pinochet, and the Caravan of Death. Coral Gables, Florida: North-South Center Press, 2001. Yglesias, Jose. The Franco Years. Indianapolis: Bobbs-Merrill, 1977. Laurence Britt's novel, June, 2004, depicts a future America dominated by right-wing extremists. _________________________________________________________________ On the road to Fascism United States is taking on all the defining characteristics Durango Herald -October 5, 2003 [60]http://new.globalfreepress.com/article.pl?sid=03/10/10/1832217 We believe that the United States of America is drifting towards its own version of fascism. Fascism, whatever its particular national characteristics, is inherently a destruction of the "old order" of a country its laws, its culture, its internal politics and its international relations. 
Fascists govern within existing systems until parallel systems are in place. Once those systems are in position, the evolution from national populism to fascism is unstoppable. Fascist states are internally destructive. When they become externally destructive, they are destroyed from outside. Characteristics of fascist states include: Fascist countries project that they are in a permanent or long-term state of war. (Example: We are in an endless war on terrorism.) Fascist countries invade other countries without provocation. (Example: pre-emptive war against Iraq.) We tried the Germans at Nuremberg for exactly this offense. As Supreme Court Justice Robert L. Jackson, the chief U.S. prosecutor at the Nuremberg Tribunal, said on Aug. 12, 1945: "We must make clear to the Germans that the wrong for which their fallen leaders are on trial is not that they lost the war, but that they started it. ... Our position is that no grievances or policies will justify resort to aggressive war. It is utterly renounced and condemned as an instrument of policy." Fascist countries violate their own treaties and international law. (Example: violation of the United Nations Charter by waging war against Iraq without UN approval.) Fascist countries lie to the general population, instilling fear and hysteria against mythological enemies, so they can go to war at will. (Examples: The 9-11 tragedy was used to generate hysteria through a massive government propaganda campaign based upon lies about Iraq; intimidation by color-coded terror warnings and provisions of the USA Patriot Act.) Fascism is characterized by single-party rule, the destruction or transformation of the two-party system. (Examples: Colorado Gov. Bill Owens abolished the Colorado 2004 primary election; illegal attempts at redistricting driven by the White House; monetary corruption of the system resulting in voter apathy.) Fascist governments demand unquestioning support otherwise you are a traitor. (Example: President Bush's statement, "You are either with us or against us.") Fascist governments project an ideology that they are "right." (Example: President Bush, "I am right and I know I am right and history will prove me right.") Fascist countries consolidate media control for propaganda purposes. (Example: Federal Communications Commission and corporate attempts at consolidation of the media.) Fascism is characterized by legal parallelism. Fascist states create shadow agencies, shadow courts, separate prisons, thus destroying guaranteed constitutional rights. (Examples: Destruction of the guarantee of right to trial by jury; holding U.S. citizens without charge, without access to legal counsel and without the right to court appearance; intimidation of the judiciary by threats of blacklisting; intimidation of lawyers; degrading attorney-client privileges.) Fascism is characterized by using torture, concentration camps and having major prison populations. (Examples: Guantanamo concentration camp; the FBI's description of how it "breaks" suspects with heat, cold, sound and sleep deprivation. The United States has the highest percentage of citizenry in prisons of any country in the world.) Fascism is characterized by parallelism between the state and corporations. (Examples: Government and corporate overlap in certain industries oil, energy, military contractors and the media; massive corporate donations to both parties to assure connivance.) Fascism, U.S. 
version, is characterized by the privatization of public services and the sell-off of public entities and resources for the benefit of the party faithful rather than the public at large. (Examples: Private profiteering on public services such as prisons, water, sewer, forest use, oil and gas; current order for appraisal of all post office buildings for contemplated sale.)

Fascism incorporates racism and attacks on the nondominant religion. (Examples: Imprisoning disproportionately one race for using a drug of choice other than that used by the dominant majority; Muslim profiling and harassment; denial of franchise to blacks under false pretenses in the Florida election.)

Fascism promotes conservative views of arts, literature, family culture, family planning and morals. (Examples: attacks on and decreased funding for National Public Radio, the Public Broadcasting System, the National Endowment for the Arts, any institution promoting family planning; school vouchers as the beginning of class-based private education and the destruction of the public education system; passing financial responsibility for Head Start to states that are near bankruptcy.)

Fascism takes religious symbolism and transfers the emotional and moral appeal to state symbols. (Examples: Aggressive and ostentatious God Bless America signs; the attempt to make the Pledge of Allegiance mandatory in Colorado schools; ostentatious flag waving and display; destruction of constitutional separation of church and state.)

We believe that the American tradition is in great peril.

Christine Eleanor Anderson, of Vallecito, is a businesswoman and a former law professor. Ross A. Worley is retired from Fort Lewis College. He lives in Durango. This was also signed by Jennifer Gehrman and Mark Seis, of Bayfield, and Greg Rossell, Charles Swift and Mary Lou Swift, of Durango.
_________________________________________________________________

[61]http://erippy.home.mindspring.com/Guns%2C_Drugs%2C_and_Oil_9-11_and_US-Led_Global_Fascism.html
GUNS, DRUGS, AND OIL: 9-11 AND US-LED GLOBAL FASCISM
By Ed Rippy

Wake Up and Smell the Swastikas!
By Ed Rippy

[Note: most of the historical background is documented on my "9-11 and US-led Global Neofascism" (http://www.globalresearch.ca/articles/RIP308A.html) and "War Is Still A Racket" (http://www.globalresearch.ca/articles/RIP308B.html). To save space, I have omitted those footnotes here.]

According to the best information I have been able to find, we are in much more trouble than most of us realize. The biological, ideological, and corporate descendants of the people who ran Hitler & Mussolini are now running much of this country, much of South America, much of the Middle East, and parts of Central Asia. We are in serious danger of a neofascist police state that will make Nazi Germany look like a picnic, and we need to take this as seriously as if our lives and our children's lives depended on it - because they do. We are as expendable to these people as the Jews, the Poles, the Gypsies, and the homosexuals were to the Third Reich. Today we face unemployment, homelessness, the prison-industrial complex, a drug war which is really a war on us, and biological warfare attacks from our own government. The terrorist attacks of 9/11/01 serve the same function as the Reichstag fire which enabled the Nazis to take over Germany.

Prescott Bush, our President's grandfather, handled banking arrangements for a host of US industrialists who supplied Hitler with money, war materiel, and political cover.
Bush owned part of the Silesian-American Steel company, which used slave labor in Poland to make steel for the tanks and airplanes which killed and wounded many of our fathers, grandfathers, and uncles. He also had a piece of the Hamburg-Amerika Steamship line, which gave Nazi propagandists free passage to the US and had supervisors from the Nazi Labor Front on all its ships. Henry Ford, Standard Oil of New Jersey, and ITT built up Hitler's war machine, some of them racking up huge profits from slave labor, and often supplying both sides. The list goes on. Some of these fascists, seeing Roosevelt as a Communist, plotted a coup against him - he found out and it fizzled - but they were so powerful that he couldn't have a single one of them arrested, even though a Congressional report found that the plot was real. The report was hushed up. After the war, using the military and intelligence services, US fascists rescued many of the Nazi leaders and their industrial assets (of course other countries got some too), using the Morgan Bank among others. The Dulles brothers, who became head of the CIA and Secretary of State, managed much of this while the Justice Department stood by helpless. Hitler's entire eastern spy network went to work for the US after the war. Klaus Barbie, a brutal Gestapo chief, set up the School of the Americas in the Panama Canal Zone (now the Western Hemispheric Institute for Security Co-operation at Fort Benning, GA) as a co-ordinating center for the recycled Nazis of South America. In 1980 his troops, wearing Swastika armbands, carried out a bloody coup in Bolivia. Otto Skorzeny, "Hitler's favorite commando," went to the Middle East, where he built up a network of over 100 former SS officers. Dr. Kurt Blome, a Nazi biological warfare scientist who experimented on concentration camp prisoners, went to the US Army Chemical Corps. Walter Dornberger, a general who worked concentration camp slaves literally to death in the Reich's rocket program, became a senior vice-president of the Bell Aerospace Division of Textron Corporation. The South American Neonazi network has killed, tortured, and disappeared tens of thousands of people, and US clients around the globe are little different. Argentinean death squads tortured victims under pictures of Adolf Hitler. In the `60s and `70s military intelligence gave money, tear-gas bombs, Mace, and electronic surveillance equipment to thugs in Chicago for use against local anti-war groups. Sixteen states have passed laws allowing forced vaccination or quarantine in case of biological emergency. Eleven allow confiscation of buildings and property. Seventeen grant immunity from prosecution to state and private actors for these deeds. Remember, the anthrax in the mailings after 9-11 was processed in US Government labs, and a former biowarfare director said the attacks were a good thing since they got more money for the budget. Police in Oakland routinely beat up and even kill poor people, usually of color, with impunity. At last report, there is still no evidence that protesters at the Oakland docks on April 7 threw anything at police, but the police shot them with "less-than-lethal" weapons and ran into them with motorcycles. They targeted dockworkers as well as protesters, dragged an ILWU business agent from his car, roughed him up, and held him for eighteen hours. Witnesses report that police had covered or removed their badges. 
Port management and Stevedoring Services of America, which has contracts in the Persian Gulf, met with police three days before the protests. The "PATRIOT" Act --written long before 9-11 -- allows the government to lock non-citizens up forever with no hearing or evidence if the Secretary of State "suspects" them of terrorists links. A new law in the works will allow the government to strip people of their citizenship for joining or providing "material support" to a group which the Attorney General has designated "terrorist." The FBI, with the aid of local police departments, is collecting intelligence on antiwar groups as part of its "counterterrorism" program. Gen. Tommy Franks, retired "liberator" of Iraq, says that another terrorist attack on the US would probably lead to a military government. The Department of Homeland Security said in late November 2003 that it expects al-Qaeda to attack soon. At the FTAA protests in Miami, police shot, gassed, beat, and arrested protesters without provocation, injuring over 100 and sending at least 12 to the hospital. Although the police knew the charges wouldn't stick, they arrested over 250 in order to beat and torture them for daring to voice disagreement with government policy. One eyewitness relates, "[O]ver at the jail vigil a few blocks away the police declare an illegal assembly. They tell people to get on the sidewalk and they'll be safe. Then they surround the group on the sidewalk, beat people to the ground, kneel on their spines and arrest them.... [A] friend comes up and tells me that Abby and her friends have been badly beaten up, jumped by cops on their way home to their hotel, her sweet, lovely face pushed into the pavement. `We could kill you here,' the cops tell them." _________________________________________________________________ The Soldiers At My Front Door by John Dear Saturday, November 29, 2003 CommonDreams.org I live in a tiny, remote, impoverished, three block long town in the desert of northeastern New Mexico. Everyone in town--and the whole state--knows that I am against the occupation of Iraq, that I have called for the closing of Los Alamos, and that as a priest, I have been preaching, like the Pope, against the bombing of Baghdad. Last week, it was announced that the local National Guard unit for northeastern New Mexico, based in the nearby Armory, was being deployed to Iraq early next year. I was not surprised when yellow ribbons immediately sprang up after the press conference. But I was surprised the following morning to hear 75 soldiers singing, shouting and screaming as they jogged down Main Street, passed our St. Joseph's church, back and forth around town for an hour. It was 6 a.m., and they woke me up with their war slogans, chants like "Kill! Kill! Kill!" and "Swing your guns from left to right; we can kill those guys all night." Their chants were disturbing, but this is war. They have to psyche themselves up for the kill. They have to believe that flying off to some tiny, remote desert town in Iraq where they will march in front of someone's house and kill poor young Iraqis has some greater meaning besides cold-blooded murder. Most of these young reservists have never left our town, and they need our support for the "unpleasant" task before them. I have been to Iraq, and led a delegation of Nobel Peace Prize winners to Baghdad in 1999, and I know that the people there are no different than the people here. The screaming and chanting went on for one hour. 
They would march passed the church, down Main Street, back around the post office, and down Main Street again. It was clear they wanted to be seen and heard. In fact, it was quite scary because the desert is normally a place of perfect peace and silence. Suddenly, at 7 a.m., the shouting got dramatically louder. I looked out the front window of the house where I live, next door to the church, and there they were--all 75 of them, standing yards away from my front door, in the street right in front of my house and our church, shouting and screaming to the top of their lungs, "Kill! Kill! Kill!" Their commanders had planted them there and were egging them on. I was astonished and appalled. I suddenly realized that I do not need to go to Iraq; the war had come to my front door. Later, I heard that they had deliberately decided to do their exercises in front of my house and our church because of my outspoken opposition to the war. They wanted to put me in my place. This, I think, is a new tactic. Over the years, I have been arrested some 75 times in demonstrations, been imprisoned for a "Plowshares" disarmament action, been bugged, tapped, and harassed, searched at airports, and monitored by police. But this time, the soldiers who will soon march through Baghdad and attack desert homes in Iraq, practiced on me. They confronted me personally, just as the death squad militaries did in Guatemala and El Salvador in the 1980s, which I witnessed there on several occasions. I decided I had to do something. I put on my winter coat and walked out the front door right into the middle of the street. They stopped shouting and looked at me, so I said loudly, publicly for all to hear, "In the name of God, I order all of you to stop this nonsense, and not to go to Iraq. I want all of you to quit the military, disobey your orders to kill, and not to kill anyone. I do not want you to get killed. I want you to practice the love and nonviolence of Jesus. God does not bless war. God does not want you to kill so Bush and Cheney can get more oil. God does not support war. Stop all this and go home. God bless you." Their jaws dropped, their eyeballs popped and they stood in shock and silence, looking steadily at me. Then they burst out laughing. Finally, the commander dismissed them and they left. Later, military officials spread lies around town that I had disrupted their military exercises at the Armory, so they decided to come to my house and to the church in retaliation. Others appealed to the archbishop to have me kicked out of New Mexico for denouncing their warmaking. Then, a general called the mayor and asked him to mediate "negotiations" with me, saying he did not want the military "in confrontation" with the church. Really, the mayor told me, they fear that I will disrupt the gala send-off next month, just before Christmas, when the soldiers go to Iraq. This dramatic episode is only the latest in a series of confrontations since I came to the desert of New Mexico in the summer of 2002 to serve as pastor of several poor, desert churches. I have spoken out extensively against the U.S. war on Iraq, and been denounced by people, including church people, across the state. I have organized small Christian peace groups throughout the state. 
We planned a prayer vigil for nuclear disarmament at Los Alamos on the anniversary of Hiroshima this past August, but when the devout people of Los Alamos, most of them Catholic, heard about it, they appealed to the archbishop to have me expelled if I appeared publicly in their town. In the end, I did not attend the vigil, but the publicity gave me further opportunities to call for the closing of Los Alamos. I receive hate mail, negative phone calls and at least one death threat for daring to criticize our country. But New Mexico is the poorest state in the U.S. It is also number one in military spending and number one in nuclear weapons. It is the most militarized, the most in need of disarmament, the most in need of nonviolence. It is the first place the Pentagon goes to recruit poor youth into the empire's army. If we are to change the direction of our country, and turn people against Bush's occupation of Iraq, we are going to have to face the ire and persecution of our local communities. If peace people in every local community insisted that our troops be brought home immediately, that the U.N. be sent in to restore Iraq, that all U.S. military aid to the Middle East be cut, and that our arsenal of weapons of mass destruction be dismantled, then we might all find soldiers marching at our front doors, trying to intimidate us. If we can face our soldiers, call them to quit the military and urge them to disobey orders to kill, then perhaps some of them will refuse to fight, become conscientious objectors and take up the wisdom of nonviolence. If we can look them in the eye and engage them in personal Satyagraha as Gandhi demonstrated, then we know that the transformation has begun. In the end, the episode for me was an experience of hope. We must be making a difference if the soldiers have to march at our front doors. That they failed to convert me or intimidate me, that they had to listen to my side of the story, may haunt their consciences as they travel to Iraq. No matter what happens, they have heard loud and clear the good news that God does not want them to kill anyone. I hope we can all learn the lesson. John Dear is a Catholic priest, peace activist, lecturer, and former executive director of the Fellowship of Reconciliation. His latest books include "Mohandas Gandhi" (Orbis) and "Mary of Nazareth, Prophet of Peace" (Ave Maria Press). For info, see http://www.johndear.org _________________________________________________________________ [62]http://www.unknownnews.net/031121civics.html Civics class: 2003 Teaching kids to live in a police state Unknown News, Nov. 21, 2003 Cops plan mock commando games for school lockdown drills by Mike Conway, Modesto [CA] Bee Nov. 19, 2003 LIVINGSTON, Ca. -- Lockdown drills come three or four times a year at Livingston High School.The one set for Thursday will be more real than any other. Officials said someone will open fire and try to break into classrooms as police close in. The guns will be real -- and loaded with blanks. Thursday morning, students and faculty again will be reminded that the exercise is only a drill. But Police Chief Bill Eldridge said he has a few surprises that he will not disclose. School administrators and police said their intent is not to cause undue alarm. Officials mailed notices to all students' homes, and alerted nearby residents and other schools. "We do lockdown drills regularly," Principal Robert Wendel said. 
"We feel the need to take it one step further, so the police can see how things work on campus and how to move about." There's not enough hours in the school year for our children to learn about the relationship between their needs & desires and those of the authorities. I wonder who in the Senior Class will be voted "Most Likely to Be the 1st One Killed in Any Hostage Situation"? =John C.= Kathleen Luxon has a daughter, Julie, who is a senior at Livingston High. "I think it's a good thing," Luxon said of the drill. "I think the kids always need to be prepared on how to handle a situation if it happens." She said having the lockdown is no different than a fire drill or "duck and cover" exercises meant to show children how to protect themselves in earthquakes and nuclear attacks.Police "I feel that Livingston is a very safe school," Luxon said. "We would hate for something like this to happen and not be prepared for it." Chief Bill Eldridge said: "We're not doing this out of fear or paranoia. After Columbine, we can no longer say, it won't happen to us." In April 1999, at Columbine High in Littleton, Colo., two students killed 12 students and a teacher before killing themselves. While the killing went on, police waited outside the school for Special Weapons and Tactics officers. "We're changing our whole mode of thinking about how we would get on the school grounds," Eldridge said. "The old philosophy was to respond and contain, and call in the tactical units," he said. "We can no longer stand back and wait. We, as street officers, have to get on the school grounds very quickly to prevent other students and staff from being injured." In a real-life situation, a mutual aid call would be placed and sheriff's deputies and other available law enforcement personnel would be called to the scene. But Livingston police would still be the first responders. "I don't think we do enough hands-on training of these scenarios to make our personnel and the school personnel feel comfortable," he said. Eventually, Eldridge said, he wants to have a full-scale drill with firefighters, paramedics and neighboring law enforcement. "That one would include a controlled evacuation of the school," he said. Since 1999, the school has had a response plan and practices it regularly, Livingston High's Wendel said. A critique follows each drill. Thursday morning, students and faculty again will be reminded that the exercise is only a drill. But Eldridge said he has a few surprises that he will not disclose. "We're going to add to the realism as closely as we can," he said. "There's always that percentage of staff members and students who think these training scenarios are a joke. But they're not. We learned it's not a matter of if, it's when." Published by Modesto [CA] Bee _________________________________________________________________ Police raid middle school in Vermont Principal eyes 12-year-old who wears baggy pants by Peter Freyne, Seven Days [Burlington, VT] Nov. 19, 2003 Television brings countless frightening images into the living room, but none was more chilling than that of cops with police dogs and guns drawn, terrifying kids in the hallway of a South Carolina High School on November 5. Students that didn't line up against the wall fast enough were thrown to the floor and handcuffed. The principal had called the cops after getting reports of illegal drug activity, specifically marijuana. The massive police drug raid failed, however, to turn up any pot. Not a single seed. 
A few days later, when a caller to a Vermont talk show mentioned the Gestapo-style high school drug raid, the host was quick to point out it happened in South Carolina, not Vermont! But guess what, folks? It has happened here. Seven Days has learned that school officials and police conducted a drug raid at the Colchester Middle Schoolon November 6, the day after the controversial raid in South Carolina. Lockers were searched, as were individual students. And a German Shepherd sniffed the joint searching for a whiff of illegal marijuana. According to Principal John Barone, about two ounces of marijuana were found behind a tile in the ceiling of the boys' bathroom. Three boys were suspended. They were scheduled to appear in executive session before the Colchester School Board Monday night. After finding the pot, Barone said a sniffer dog named Kilo was brought in to sniff lockers. No additional pot was discovered, said Barone. The matter was brought to our attention last week by a hard-hat construction laborer on Hospital Hill. The 31-year-old single mom was upset because she believed school officials were picking on her 12-year-old son, a seventh-grader. We're withholding her identity to protect the privacy of her child, who's getting quite the education in what it means to have no constitutional rights. Let's just call her Rosie the Riveter. According to Rosie, her son was taken to the principal's office the Monday after the raid and suspended for alleged insubordination. In the process his clothing was thoroughly searched. Then the school nurse, Melissa Goldberg, examined the boy in a locked room, said Rosie. She shined a flashlight in his eyes, nose and mouth and asked if he had "shot, snorted or smoked anything." When he asked to call his mom, he was told by Assistant Principal Karen Gockley they just wanted "a quick check." The boy told Seven Days that he was "frightened" and "didn't know what was going on." He also told us the nurse asked if he could pass a urinalysis drug test. He told her he could. The assistant principal, he said, also told him he ought to stop wearing those baggy clothes. "Ms. Gockley always tells us how it's not a ghetto school and stuff. She told us, like, to change how we dress." In fact, when his mother later called Gockley, she was told she should change the way her son dressed because, "when drugs were found they were often on kids with the baggy-style clothes." When she asked what "reasonable suspicion" they had to search her son, Rosie said she was told the boy had been seen with another child who looked to be under the influence of drugs. It turned out, said Rosie, that kid was on prescription drugs for an ear problem. "I feel he was violated," said Rosie. "It's not right to do that to a 12-year-old, especially when he wants his mother. I'd be scared if I was 12." No evidence of drugs or drug use was found, but the boy's two-day suspension stood. Welcome to middle school in Vermont, folks -- the new front line in the totally failed War on Drugs. And you thought kids had constitutional rights? Civil liberties? Think again. One veteran criminal lawyer put it this way: "In America today there are three places where you have no rights. One is at the border. One is in prison. And the other one is in school." The U.S. Supreme Court ruled in 1985 that school officials, not police, may search students without a warrant if they have "reasonable grounds" to suspect the search will turn up evidence of a crime or violation of school rules. 
Principal Barone told Seven Days he received a "tip" that morning "that there was marijuana being distributed in school." Based on that intelligence, he searched student lockers and individual students. No pot was found in those searches, he reported. Mr. Barone then received another tip that led to the discovery of two bags of grass in the boys' bathroom ceiling. More students came forward, said Barone. They told him there were more drugs in the school but "they couldn't elaborate." At that point, the decision was made to bring in the Colchester Police and Kilo. Barone said that when Kilo "hit" on a possible drug location, Officer Jeff Fontaine would stand back and let Barone or Gockley conduct the search. No drugs, however, were found as a result of the K-9 unit's help. Colchester Middle School has 605 students. It is the largest middle school in Chittenden County. Both Barone and Gockley are in their first year at the school. Previously they worked in Essex Junction. "When we were hired," said Barone, "it was made clear to us that student management had been lax." He said the school board wanted us "to tighten things up." Sounds like the kids in Colchester are getting quite a lesson in citizenship, eh? Published by Seven Days [Burlington, VT] References Visible links 1. http://www.dailykos.com/story/2005/2/9/94615/61143 2. http://www.oilempire.us/oilempire.html 3. http://www.oilempire.us/blog.html 4. http://www.oilempire.us/allfiles.html 5. http://www.oilempire.us/chapters.html 6. http://www.oilempire.us/newsletters.html 7. http://www.oilempire.us/slideshow.html 8. http://www.oilempire.us/dictionary.html 9. LYNXIMGMAP:http://www.oilempire.us/fascism.html#Map 10. http://www.oilempire.us/911.html 11. http://www.oilempire.us/peakoil.html 12. http://www.oilempire.us/worldwar4.html 13. http://www.oilempire.us/fascism.html 14. http://www.oilempire.us/media.html 15. http://www.oilempire.us/stolenelection2004.html 16. http://www.oilempire.us/bush.html 17. http://www.permatopia.com/ 18. http://www.oilempire.us/unpatriotic.html 19. http://www.oilempire.us/homeland.html 20. http://www.oilempire.us/chertoff.html 21. http://www.oilempire.us/negroponte.html 22. http://www.oilempire.us/anthrax.html 23. http://www.oilempire.us/unpatriotic.html 24. http://www.oilempire.us/papersplease.html 25. http://www.oilempire.us/geoslavery.html 26. http://www.oilempire.us/redalert.html 27. http://www.oilempire.us/fascism.html 28. http://www.oilempire.us/brazil.html 29. http://www.oilempire.us/1984.html 30. http://www.oilempire.us/torture.html 31. http://www.oilempire.us/dissent.html 32. http://www.oilempire.us/rfid.html 33. http://www.oilempire.us/biometric.html 34. http://www.oilempire.us/policestate.html 35. http://portland.indymedia.org/en/2004/02/279760.shtml 36. http://www.oilempire.us/un.html 37. http://www.adbusters.org/ 38. http://www.fromthewilderness.com/ 39. http://newerainvestor.blogspot.com/2005/04/politics-of-peak-oil-and-fascism.html 40. http://peakenergy.blogspot.com/2005/05/voyage-of-beagle.html 41. http://urbansurvival.com/week.htm 42. http://www.theregister.co.uk/2005/04/12/bank_regs_boost_data_mining/ 43. http://www.freepress.org/columns/display/7/2005/1084 44. http://www.oilempire.us/homeland.html 45. http://www.oilempire.us/holocaust.html 46. http://www.oilempire.us/depleted-uranium.html 47. http://www.oilempire.us/redalert.html 48. http://www.oilempire.us/reichstag-fire.html 49. http://www.oilempire.us/911.html 50. http://www.oilempire.us/bushbinladen.html 51. 
From checker at panix.com Sun Sep 25 20:02:17 2005
From: checker at panix.com (Premise Checker)
Date: Sun, 25 Sep 2005 16:02:17 -0400 (EDT)
Subject: [Paleopsych] Bertram Gross: Friendly Fascism The New Face of Power in America
Message-ID:

Excerpts from the classic book.

Frank

Bertram Gross: Friendly Fascism: The New Face of Power in America
http://www.thirdworldtraveler.com/Fascism/Friendly_Fascism_BGross.html
excerpts from the book
South End Press, 1980, paper

[1]Introduction, Rise and Fall of Classic Fascism
[2]The Takeoff Toward a New Corporate Society
[3]The Mysterious Establishment
[4]The Specter of Friendly Fascism
[5]Subverting Democratic Machinery
[6]Impossibility: It Couldn't Happen
[7]Quotations

References
1. http://www.thirdworldtraveler.com/Fascism/RiseFall_Friend_Fascism_FF.html
2. http://www.thirdworldtraveler.com/Fascism/Takeoff_NewCorpSociety_FF.html
3. http://www.thirdworldtraveler.com/Fascism/Mysterious_Establish_FF.html
4. http://www.thirdworldtraveler.com/Fascism/Specter_FriendlyFascism_FF.html
5. http://www.thirdworldtraveler.com/Fascism/Subvert_Demo_Machine_FF.html
6. http://www.thirdworldtraveler.com/Fascism/It_Couldn%27t_Happen_FF.html
7. http://www.thirdworldtraveler.com/Fascism/Quotations_FF.html

______________________________________________________________________

Introduction, Rise and Fall of Classic Fascism
excerpted from the book Friendly Fascism: The New Face of Power in America by Bertram Gross
http://www.thirdworldtraveler.com/Fascism/RiseFall_Friend_Fascism_FF.html

Introduction

pxi Friendly fascism portrays two conflicting trends in the United States and other countries of the so-called "free world." The first is a slow and powerful drift toward greater concentration of power and wealth in a repressive Big Business-Big Government partnership. This drift leads down the road toward a new and subtly manipulative form of corporatist serfdom. The phrase "friendly fascism" helps distinguish this possible future from the patently vicious corporatism of classic fascism in the past of Germany, Italy and Japan. It also contrasts with the friendly present of the dependent fascisms propped up by the U.S. government in El Salvador, Haiti, Argentina, Chile, South Korea, the Philippines and elsewhere.

The other is a slower and less powerful tendency for individuals and groups to seek greater participation in decisions affecting themselves and others. This trend goes beyond mere reaction to authoritarianism. It transcends the activities of progressive groups or movements and their use of formal democratic machinery.
It is nourished by establishment promises-too often rendered false-of more human rights, civil rights and civil liberties. It is embodied in larger values of community, sharing, cooperation, service to others and basic morality as contrasted with crass materialism and dog-eat-dog competition. It affects power relations in the household, workplace, community, school, church, synagogue, and even the labyrinths of private and public bureaucracies. It could lead toward a truer democracy-and for this reason is bitterly fought...

These contradictory trends are woven fine into the fabric of highly industrialized capitalism. The unfolding logic of friendly fascist corporatism is rooted in "capitalist society's transnational growth and the groping responses to mounting crises in a dwindling capitalist world". Mind management and sophisticated repression become more attractive to would-be oligarchs when too many people try to convert democratic promises into reality. On the other hand, the alternative logic of true democracy is rooted in "humankind's long history of resistance to unjustified privilege" and in spontaneous or organized "reaction (other than fright or apathy) to concentrated power...and inequality, injustice or coercion".

A few years ago too many people closed their eyes to the indicators of the first tendency. But events soon began to change perceptions. The Ku Klux Klan and American Nazis crept out of the woodwork. An immoral minority of demagogues took to the airwaves. "Let me tell you something about the character of God," orated Jim Robison at a televised meeting personally endorsed by candidate Ronald Reagan. "If necessary, God would raise up a tyrant, a man who may not have the best ethics, to protect the freedom interests of the ethical and the godly." To protect Western oil companies, candidate Jimmy Carter proclaimed presidential willingness to send American troops into the Persian Gulf. Rosalynn Carter went further by telling an Iowa campaign audience: "Jimmy is not afraid to declare war." Carter then proved himself unafraid to expand unemployment, presumably as an inflation cure, thereby reneging on his party's past full employment declarations.

Reaching the White House with this assist from Carter (as well as from the Klan and the immoral minority of televangelicals), Reagan promptly served the immediate interests of the most powerful and the wealthiest. The Reaganites depressed real wages through the worst unemployment since the 1929-39 depression, promoted "give backs" by labor unions, cut social programs for lower and middle income people, expanded tax giveaways for the truly rich, boosted the military budget and warmed up the cold war. They launched savage assaults on organized labor, civil rights and civil liberties.

pxiii economist Robert Lekachman: "Ronald Reagan must be the nicest president who ever destroyed a union, tried to cut school lunch milk rations from six to four ounces, and compelled families in need of public help to first dispose of household goods in excess of $1,000... If there is an authoritarian regime in the American future, Ronald Reagan is tailored to the image of a friendly fascist."

pxiii The bad news is that evil now wears a friendlier face than ever before in American history.
"Like a good TV commercial, Reagan's image goes down easy," Mark Crispin Miller has written, "calming his audience with sweet inversions of the truth...He has learned to liven up his every televised appearance with frequent shifts in expression, constant movements of the head, lots of warm chuckles and ironic shrugs and sudden frowns of manly purpose. Reagan is unfailingly attractive-'a nice guy, 'pure and simple." But what is really there, he asks, behind the mask? The President's critics have many answers. Some call him "an amiable dunce." Some see him, reports Miller, as a devil "who takes from the poor to give to the rich, has supported infanticide abroad, ravages his own countryside and props up brutal dictatorships." Others regard him as a congenital falsifier who surrounds any half-truth with a "bodyguard of lies." Miller himself has still another answer: there is nothing behind the mask. "The best way to keep his real self hidden" he suggests, "is not to have one...Reagan's mask and face are as one." To this, one might add that the Reagan image is an artfully designed blend of charisma and machismo, a combination that Kusum Singh calls charismacho. "Princes," wrote Machiavelli many centuries ago, "should delegate the ugly jobs to other people, and reserve the attractive functions for themselves." In keeping with this maxim, Reagan's less visible entourage has surrounded the President with highly visible targets of disaffection: Volcker, Stockman, Haig, Weinberger, Kirkpatrick, and Watt. In comparison, Reagan looks truly wholesome. This makes it all the more difficult to focus attention on the currents and forces behind the people behind the President-or for that matter, other less visible leaders of the American Establishment. pxvii beyond "nice guy" imagery. They establish America's symbolic environment. The Reagan administration has triggered a great leap forward in the mobilization and deployment of corporatist myths. Many billions of tax-exempt funds from conservative foundations have gone into the funding of such think tanks as the Heritage Foundation and the American Enterprise Institute. According to the Wall Street Journal, nearly three hundred economists on the staffs of conservative think tanks are part of an informal information network organized by the American Heritage Foundation alone. (This contrasts with only about two dozen economists working for trade unions, most of whom are pinned down in researching contract negotiations.) pxvii Expanded government intervention into \ the lives of ordinary people is glorified under the slogan "getting the I government off our backs." Decriminalization of corporate bribery, fraud and the dumping of health-killing wastes is justified under the banner of "promoting free enterprise" and countering "environmental extremists." Private greed, gluttony and speculation are disguised in "free market" imagery. Business corruption is hidden behind smokescreens of exaggerated attacks on the public sector. Like Trojan horses, these ideas penetrate the defenses of those opposed to any new corporatism. They establish strongholds of false consciousness and treacherous terminology in the minds not only of old-fashioned conservatives but also of the most dedicated liberals and left-wingers. Hence on many issues the left seems bereft, the middle muddled and the right not always wrong. Other elements are thereby added to the new bill of frights. 
One is a frightening retreat by liberals and leftwingers on the key gut issues of domestic policy: full employment, inflation and crime. "Deep cynicism has been engendered in progressive circles by past experiences with 'full employment' legislation (as) the tail on the kite of an ever expanding military economy." A movement for full employment without militarism or inflation is seen as dangerous by old-time labor leaders, utopian by liberals and by some Marxists as impossible under capitalism. Inflation is seen as a conservative issue-or else one that requires the kind of price controls that necessitate more far-reaching social controls over capital. Middle-of-the-roaders try to deal with crime by fussing too much with the details of the police-courthouse-jail-parole complex and too little with the sources of low-income crime, racketeering, political corruption and crime in the executive suites. Thus the demagogues among the Reaganites and their frenetic fringes have been able to seize and keep initiatives on these issues.

pxxiii Samuel Johnson: "Power is always gradually stealing away from the many to the few, because the few are more vigilant and consistent."

*****

The Rise and Fall of Friendly Fascism

p1 Looking at the present, I see a more probable future: a new despotism creeping slowly across America. Faceless oligarchs sit at command posts of a corporate-government complex that has been slowly evolving over many decades. In efforts to enlarge their own powers and privileges, they are willing to have others suffer the intended or unintended consequences of their institutional or personal greed. For Americans, these consequences include chronic inflation, recurring recession, open and hidden unemployment, the poisoning of air, water, soil and bodies, and, more important, the subversion of our constitution. More broadly, consequences include widespread intervention in international politics through economic manipulation, covert action, or military invasion...

I see at present members of the Establishment or people on its fringes who, in the name of Americanism, betray the interests of most Americans by fomenting militarism, applauding rat-race individualism, protecting undeserved privilege, or stirring up nationalistic and ethnic hatreds. I see pretended patriots who desecrate the American flag by waving it while waiving the law. In this present, many highly intelligent people look with but one eye and see only one part of the emerging Leviathan. From the right, we are warned against the danger of state capitalism or state socialism, in which Big Business is dominated by Big Government. From the left, we hear that the future danger (or present reality) is monopoly capitalism, with finance capitalists dominating the state. I am prepared to offer a cheer and a half for each view; together, they make enough sense for a full three cheers. Big Business and Big Government have been learning how to live in bed together and, despite arguments between them, enjoy the cohabitation. Who may be on top at any particular moment is a minor matter-and in any case can be determined only by those with privileged access to a well-positioned keyhole.

I am uneasy with those who still adhere strictly to President Eisenhower's warning in his farewell address against the potential for the disastrous rise of power in the hands of the military-industrial complex. Nearly two decades later, it should be clear to the opponents of militarism that the military-industrial complex does not walk alone.
It has many partners: the nuclear-power complex, the technology-science complex, the energy-auto-highway complex, the banking-investment-housing complex, the city-planning-development-land-speculation complex, the agribusiness complex, the communications complex, and the enormous tangle of public bureaucracies and universities whose overt and secret services provide the foregoing with financial sustenance and a nurturing environment. Equally important, the emerging Big Business-Big Government partnership has a global reach. It is rooted in colossal transnational corporations and complexes that help knit together a "Free World" on which the sun never sets. These are elements of the new despotism. A few years ago a fine political scientist, Kenneth Dolbeare, conducted a series of in-depth interviews totaling twenty to twenty-five hours per person. He found that most respondents were deeply afraid of some future despotism. "The most striking thing about inquiring into expectations for the future," he reported, "is the rapidity with which the concept of fascism (with or without the label) enters the conversation." But not all knowledge serves the cause of freedom... the tendency is to suppress fears of the future, just as most people have learned to repress fears of a nuclear holocaust. It is easier to repress well-justified fears than to control the dangers giving rise to them. p3 In 1935 Sinclair Lewis wrote a popular novel in which a racist, anti-Semitic, flag-waving, army-backed demagogue wins the 1936 presidential election and proceeds to establish an Americanized version of Nazi Germany. The title, It Can't Happen Here, was a tongue-in-cheek warning that it might. But the "it" Lewis referred to is unlikely to happen again any place. Even in today's Germany, Italy or Japan, a modern-style corporate state or society would be far different from the old regimes of Hitler, Mussolini, and the Japanese oligarchs. Anyone looking for black shirts, mass parties, or men on horseback will miss the telltale clues of creeping fascism. In any First World country of advanced capitalism, the new fascism will be colored by national and cultural heritage, ethnic and religious composition, formal political structure, and geopolitical environment. The Japanese or German versions would be quite different from the Italian variety-and still more different from the British, French, Belgian, Dutch, Australian, Canadian, or Israeli versions. In America, it would be supermodern and multi-ethnic-as American as Madison Avenue, executive luncheons, credit cards, and apple pie. It would be fascism with a smile. As a warning against its cosmetic facade, subtle manipulation, and velvet gloves, I call it friendly fascism. What scares me most is its subtle appeal. I am worried by those who fail to remember-or have never learned -that Big Business-Big Government partnerships, backed up by other elements, were the central facts behind the power structures of old fascism in the days of Mussolini, Hitler, and the Japanese empire builders. I am worried by those who quibble about labels. Some of my friends seem transfixed by the idea that if it is fascism, it must appear in the classic, unfriendly form of their youth. "Why, oh why," they retrospectively moan, "didn't people see what was happening during the 1920s and the 1930s?" But in their own blindness they are willing to use the terms invented by the fascist ideologists, "corporate state" or "corporatism," but not fascism. 
I am upset with those who prefer to remain spectators until it may be too late. I am shocked by those who seem to believe in Anne Morrow Lindbergh's words of 1940-that "there is no fighting the wave of the future" and all you can do is "leap with it." I am appalled by those who stiffly maintain that nothing can be done until things get worse or the system has been changed. I am afraid of inaction. I am afraid of those who will heed no warnings and who wait for some revelation, research, or technology to offer a perfect solution. I am afraid of those who do not see that some of the best in America has been the product of promises and that the promises of the past are not enough for the future. I am dismayed by those who will not hope, who will not commit themselves to something larger than themselves, of those who are afraid of true democracy or even its pursuit. p5 I suspect that many people underestimate both the dangers that lie ahead and the potential strength of those who seem weak and powerless. Either underestimation stems, I think, from fear of bucking the Establishment ... a deep and well-hidden fear ... p5 ...the fanfare of elections and "participatory" democracy usually disguises business- government control. THE RISE AND FALL OF CLASSIC FASCISM p11 Between the two world wars fascist movements developed in many parts of the world. In the most industrially advanced capitalist countries-the United States, Britain, France, Holland and Belgium-they made waves but did not engulf the constitutional regimes. In the most backward capitalist countries-Albania, Austria, Greece, Hungary, Poland, Portugal, Rumania, Spain, and Yugoslavia-there came to power authoritarian or dictatorial regimes that boastfully called themselves "fascist" or, as the term soon came to be an all-purpose nasty word, were branded "fascist" by their opponents. The most genuine and vigorous fascist movements arose in three countries-Italy, Germany and Japan-which, while trailing behind the capitalist leaders in industrialization and empire, were well ahead of the laggards. ITALY, GERMANY, JAPAN In Milan on March 23, 1919, in a hall offered by a businessmen's club, former socialist Benito Mussolini transformed a collection of blackshirted roughnecks into the Italian Fascist party. His word "fascism" came from the Latin fasces for a bundle of rods with an axe, the symbol of State power carried ahead of the consuls in ancient Rome. Mussolini and his comrades censured old-fashioned conservatives for not being more militant in opposing the socialist and communist movements that arose, in response to the depression, after World War I. At the same time, they borrowed rhetorical slogans from their socialist and communist foes, and strengthened their support among workers and peasants. In their early days these groups had tough going. The more respectable elements in the Establishment tended to be shocked by their rowdy, untrustworthy nature. Campaign contributions from businessmen came in slowly and sporadically. When they entered electoral contests, the Fascists did badly. Thus, in their very first year of life the Italian Fascists suffered a staggering defeat by the Socialists. In 1920 the left-wing power seemed to grow. Hundreds of factories were seized by striking workers in Milan, Turin, and other industrial areas. Peasant unrest became stronger, and many large estates were seized. The Socialists campaigned under the slogan of "all power to the proletariat." 
For Mussolini, this situation was an opportunity to be exploited. He countered with a nationwide wave of terror that went far beyond ordinary strikebreaking. Mussolini directed his forces at destroying all sources of proletarian or peasant leadership. The Fascist squadristi raided the offices of Socialist or Communist mayors, trade unions, cooperatives and leftwing newspapers, beating up their occupants and burning down the buildings. They rounded up outspoken anti-Fascists, clubbed them, and forced them to drink large doses of castor oil. They enjoyed the passive acquiescence-and at times the direct support-of the police, the army, and the church. Above all, business groups supplied Mussolini with an increasing amount of funds. In turn, Mussolini responded by toning down the syndicalism and radical rhetoric of his followers, and, while still promising to "do something for the workers," began to extol the merits of private enterprise.

On October 26, 1922, as his Fascist columns started their so-called March on Rome, Mussolini met with a group of industrial leaders to assure them that "the aim of the impending Fascist movement was to reestablish discipline within the factories and that no outlandish experiments . . . would be carried out." 1 On October 28 and 29 he convinced the leaders of the Italian Association of Manufacturers "to use their influence to get him appointed premier." 2 In the evening of October 29 he received a telegram from the king inviting him to become premier. He took the sleeping train to Rome and by the end of the next day formed a coalition cabinet. In 1924, in an election characterized by open violence and intimidation, the Fascist-led coalition won a clear majority.

If Mussolini did not actually march on Rome in 1922, during the next seven years he did march into the hearts of important leaders in other countries. He won the friendship, support, or qualified approval of Richard Childs (the American ambassador), Cornelius Vanderbilt, Thomas Lamont, many newspaper and magazine publishers, the majority of business journals, and quite a sprinkling of liberals, including some associated with both The Nation and The New Republic. "Whatever the dangers of fascism," wrote Herbert Croly, in 1927, "it has at any rate substituted movement for stagnation, purposive behavior for drifting, and visions of great future for collective pettiness and discouragements."

In these same years, as paeans of praise for Mussolini arose throughout Western capitalism, Mussolini consolidated his rule, purging anti-Fascists from the government service, winning decree power from the legislature, and passing election laws favorable to himself and his conservative, liberal, and Catholic allies.

Only a few days after the march on Rome, a close associate of Hitler, Herman Esser, proclaimed in Munich among tumultuous applause: "What has been done in Italy by a handful of courageous men is not impossible here. In Bavaria too we have Italy's Mussolini. His name is Adolf Hitler...." F. L. CARSTEN

In January, 1919, in Munich, a small group of anti-Semitic crackpot extremists founded the German Workers Party. Later that year the German Army's district commander ordered one of his agents, a demobilized corporal, to investigate it. The Army's agent, Adolf Hitler, instead joined the party and became its most powerful orator against Slavs, Jews, Marxism, liberalism, and the Versailles treaty.
A few months later, under Hitler's leadership, the party changed its name to the National Socialist German Workers' Party and organized a bunch of dislocated war veterans into brown-shirted strong-arm squads or storm troopers (in German, S.A. for Sturmabteilung). The party's symbol, designed by Hitler himself, became a black swastika in a white circle on a flag with a red background.

On November 8, 1923, in the garden of a large Munich beer hall, Adolf Hitler and his storm troopers started what he thought would be a quick march to Berlin. With the support of General Erich Ludendorff, he tried to take over the Bavarian government. But neither the police nor the army supported the Putsch. Instead of winning power in Munich, Hitler was arrested, tried for treason, and sentenced to five years' imprisonment, but confined in luxurious quarters and paroled after only nine months, the gestational period needed to produce the first volume of Mein Kampf.

His release from prison coincided with an upward turn in the fortunes of the Weimar Republic, as the postwar inflation abated and an influx of British and American capital sparked a wave of prosperity from 1925 to 1929. "These, the relatively fat years of the Weimar Republic, were correspondingly lean years for the Nazis."

Weimar's "fat years" ended in 1929. If postwar disruption and class conflict brought the Fascists to power in Italy and nurtured similar movements in Germany, Japan, and other nations, the Great Depression opened the second stage in the rise of the fascist powers. In Germany, where all classes were demoralized by the crash, Hitler recruited jobless youth into the S.A., renewed his earlier promises to rebuild the German army, and expanded his attacks on Jews, Bolshevism, the Versailles treaty, liberalism, and constitutional government. In September 1930, to the surprise of most observers (and probably Hitler himself), the Nazis made an unprecedented electoral breakthrough, becoming the second largest party in the country.

A coalition of conservative parties, without the Nazis, then took over under General Kurt von Schleicher, guiding genius of the army. With aged Field Marshal von Hindenburg serving as figurehead president, three successive cabinets-headed by Heinrich Bruening, Franz von Papen, and then von Schleicher himself-cemented greater unity between big business and big government (both civilian and military), while stripping the Reichstag of considerable power. They nonetheless failed miserably in their efforts to liquidate the Depression. Meanwhile Adolf Hitler, the only right-wing nationalist with a mass following, was publicly promising full employment and prosperity. Privately meeting with the largest industrialists he warned, "Private enterprise cannot be maintained in a democracy."

On January 30, 1933, he was invited to serve as chancellor of a coalition cabinet. "We've hired Hitler!" a conservative leader reported to a business magnate. A few weeks later, using the S.A. to terrorize left-wing opposition and the Reichstag fire to conjure up the specter of conspiratorial bolshevism, Hitler won 44 percent of the total vote in a national election. With the support of the Conservative and Center parties, he then pushed through legislation that abolished the independent functioning of both the Reichstag and the German states, liquidated all parties other than the Nazis, and established concentrated power in his own hands. He also purged the S.A.
of its semi-socialist leadership and vastly expanded the size and power of his personal army of blackshirts. Through this rapid process of streamlining, Hitler was able to make immediate payments on his debts to big business by wiping out independent trade unions, abolishing overtime pay, decreeing compulsory cartelization (like similar regulations promulgated earlier in Japan and Italy), and giving fat contracts for public works and fatter contracts for arms production. By initiating an official pogrom against the Jews, he gave Nazi activists a chance to loot Jewish shops and family possessions, take over Jewish enterprises, or occupy jobs previously held by German Jews. Above all, he kept his promise to the unemployed; he put them back to work, while at the same time using price control to prevent a recurrence of inflation.

As Shirer demonstrates in his masterful The Rise and Fall of the Third Reich, Hitler also won considerable support among German workers, who did not seem desperately concerned with the loss of political freedom and even of their trade unions as long as they were employed full time. "In the past, for so many, for as many as six million men and their families, such rights of free men in Germany had been overshadowed as he [Hitler] said, by the freedom to starve. In taking away that last freedom," Shirer reports, "Hitler assured himself of the support of the working class, probably the most skillful and industrious and disciplined in the Western world."

Also in 1919, Kita Ikki, later known as "the ideological father of Japanese fascism," set up the "Society of Those Who Yet Remain." His General Outline of Measures for the Reconstruction of Japan, the Mein Kampf of this association, set forth a program for the construction of a revolutionized Japan, the coordination of reform movements, and the emancipation of the Asian peoples under Japanese leadership.

In Japan, where organized labor and proletarian movements had been smashed many years earlier and where an oligarchic structure was already firmly in control, the transition to full-fledged fascism was-paradoxically-both simpler than in Italy and Germany and stretched out over a longer period. In the mid-1920s hired bullies smashed labor unions and liberal newspapers as the government campaigned against "dangerous thoughts" and used a Peace Preservation Law to incarcerate anyone who joined any organization that tried to limit private property rights.

The worldwide depression struck hard in Japan, particularly at the small landholders whose sons had tried to escape rural poverty through military careers. The secret military societies expanded their activities to establish a Japanese "Monroe Doctrine for Asia." In 1931 they provoked an incident, quickly seized all of Manchuria, and early in 1932 established the Japanese puppet state of Manchukuo. At home, the Japanese premier was assassinated and replaced by an admiral, as the armed forces pressed forward for still more rapid expansion on the continent and support for armament industries.

As the frontiers of Manchukuo were extended, a split developed between two rival military factions. In February 1936, the Imperial Way faction attempted a fascist coup from below. Crushing the rebels, the Control faction of higher-ranking officers ushered in fascism from above. "The interests of business groups and the military drew nearer, and a 'close embrace' structure of Japanese fascism came to completion," writes Masao Maruyama.
"The fascist movement from below was completely absorbed into totalitarian transformation from above." Into this respectable embrace came both the bureaucracy and the established political parties, absorbed into the Imperial Rule Assistance Association. And although there was no charismatic dictator or party leader, the Emperor was the supercharismatic symbol of Japanese society as a nation of families. By 1937, with well-shaped support at home, the Japanese army c seized Nanking and started its long war with China. BREEDING GROUNDS OF FASCISM Before fascism, the establishments in Italy, Japan and Germany each consisted of a loose working alliance between big-business, the military, the older landed aristocracy, and various political leaders. The origin of these alliances could be traced to the consolidation of government and industry during World War I. "Manufacturing and finance," writes Roland Sarti about World War I in Italy (but in terms applicable to many other countries also), "drew even closer than they had been before the war to form the giant combines necessary to sustain the war effort. Industrialists and government officials sat side by side in the same planning agencies, where they learned to appreciate the advantages of economic planning and cooperation. Never before had the industrialists been so close to the center of political power, so deeply involved in the decision-making process " 0 United in the desire to renew the campaigns of conquest that had been dashed by the war and its aftermath, the establishments in these countries were nonetheless seriously divided by conflicting interests and divergent views on national policy. As Sarti points out, big-business leaders were confronted by "economically conservative and politically influential agricultural interests, aggressive labor unions, strong political parties ideologically committed to the liquidation of capitalism, and governments responsive to a variety of pressures." Despite the development of capitalist planning, coping with inflation and depression demanded more operations through the Nation-State than many banking and industrial leaders could easily- accept, more government planning than most governments were capable of undertaking, and more international cooperation among imperial interests than was conceivable in that period The establishment faced other grave difficulties in the form of widespread social discontent amidst the uncertain and eventually catastrophic economic conditions of the postwar world. One of the challenges came from the fascists, who seemed to attack every element in the existing regimes. They criticized businessmen for putting profits above patriotism and for lacking the dynamism needed for imperial expansion. They tore at those elements in the military forces who were reluctant to break with constitutional government. They vilified the aristocracy as snobbish remnants of a decadent past. They branded liberals as socialists, socialists as communists, communists as traitors to the country, and parliamentary operations in general as an outmoded system run by degenerate babblers. They criticized the bureaucrats for sloth and branded intellectuals as self-proclaimed "great minds" (in Hitler's phrasing) who knew nothing about the real world. They damned the Old Order as an oligarchy of tired old men, demanding a New Order of young people and new faces. In Japan, the young blood was represented mainly by junior officers in the armed forces. 
In Italy and Germany the hoped-for infusion of new dynamism was to come from the "little men," the "common people," the "lost generation," the "outsiders," and the "uprooted" or the "rootless." Although some of these were gangsters, thugs, and pimps, most were white-collar workers, lower-level civil servants, or declassed artisans and small-businessmen.

But the fascist challenge did not threaten the jugular vein. Unlike the communists, the fascists were not out to destroy the old power structure or to create an entirely new one. Rather, they were heretics seeking to revive the old faith by concentrating on the fundamentals of imperial expansion, militarism, repression, and racism. They had the courage of the old-time establishment's convictions. If they at times sounded like violent revolutionaries, the purpose was not merely to pick up popular support from among the discontented and alienated, but to mobilize and channel the violence-prone. If at the outset they tolerated anti-capitalist currents among their followers, the effect was to enlarge the following for policies that strengthened capitalism. Above all, the fascists "wanted in." In turn, at a time of crisis, leaders in the old establishment wanted them in as junior partners. These leaders operated on the principle that "If we want things to stay as they are, things will have to change."

Ultimately, the marriage of the fascist elements with the old order was one of convenience. In Italy and Japan, the fascists won substantial control of international and domestic politics, were the dominant ideological force, and controlled the police. The old upper-class structure remained in control of the armed forces and the economy. In Japan, the upper-class military was successfully converted to fascism, but there were difficulties in winning over Japan's family conglomerates, the zaibatsu. Thus, while much of the old order was done away with, the genuinely anti-capitalist and socialist elements that provided much of the strength in the fascist rise to power were suppressed. The existing social system in each country was actually preserved, although in a changed form.

THE AXIS

From the start fascism had been nationalist and militarist, exploiting the bitterness felt in Italy, Germany, and Japan over the postwar settlements. Italians, denied territories secretly promised them as enticement for entering the war, felt cheated of the fruits of victory. Japanese leaders chafed at the rise of American and British resistance to Japanese expansion in China, and resented the Allies' refusal to include a statement of racial equality in the Covenant of the League of Nations. Germans were outraged by the Versailles treaty; in addition to depriving Germany of 13 percent of its European territories and population, the treaty split wide open two of Germany's three major industrial areas and gave French and Polish industrialists 19 percent of Germany's coke, 17 percent of its blast furnaces, 60 percent of its zinc foundries, and 75 percent of its iron ore.

Furthermore, each of the fascist nations could ground their expansion in national tradition. As far back as 1898, Ito Hirobumi, one of the founders of the "new" Japan after the Meiji restoration of 1868, had gone into great detail on Japan's opportunities for exploiting China's vast resources. While the late-nineteenth-century Italians and Germans were pushing into Africa, the Japanese had seized Korea as a stepping-stone to China and started eyeing Manchuria for the same purpose.
Mussolini's imperial expansion in Africa was rooted, if not in the Roman empire, then in late nineteenth-century experience and, more specifically, in the "ignominy" of the 1896 Italian defeat by ill-armed Ethiopian forces in Aduwa. Hitler's expansionism harked back to an imperialist drive nearly a century old-at least.

Now, while Japan was seizing Manchuria, Mussolini responded to the crash by moving toward armaments and war. He used foreign aid to establish economic control over Albania, consolidating this position through naval action in 1934. In 1935 he launched a larger military thrust into Ethiopia and Eritrea. By that time, the Nazi-led establishment in Germany was ready to plunge into the European heartland itself. In 1935, Hitler took over the Saarland through a peaceful plebiscite, formally repudiating the Versailles treaty. In 1936 he occupied the Rhineland and announced the formation of a Berlin-Rome Axis and the signing of a German-Japanese Pact. Hitler and Mussolini then actively intervened in the Spanish Civil War, sending "volunteers" and equipment to support General Franco's rebellion against Spain's democratically-elected left-wing republic.

The timetable accelerated: in 1938, the occupation of Austria in March and of Czechoslovakia in September; in 1939, the swallowing up of more parts of Czechoslovakia and, after conclusion of the Nazi-Soviet Pact in August, the invasion of Poland. At this point, England and France declared war on Germany and World War II began. Japan joined Italy and Germany in a ten-year pact "for the creation of conditions which would promote the prosperity of their peoples." As a signal of its good intentions, Japan began to occupy Indochina as well as China. Germany did even better. By 1941 the Germans had conquered Poland, Denmark, Norway, the Netherlands, Belgium, and France. They had thrown the British army into the sea at Dunkirk and had invaded Rumania, Greece, and Yugoslavia. A new world order seemed to be in the making. For Japan, it was the "Greater East Asia Co-Prosperity Sphere," and for Italy a new Roman Empire to include "the Mediterranean for the Mediterraneans." And, for Germany, the new order was the "Thousand Year Reich" bestriding the Euro-Slavic-Asian land mass.

p21 FASCIST EXPLOITS

The essence of the new fascist order was an exploitative combination of imperial expansion, domestic repression, militarism, and racism. Each of these elements had a logic of its own and a clear relation to the others. Imperial expansion brought in the raw materials and markets needed for more profitable economic activity. By absorbing surplus energies as well as surplus capital, it diverted attention from domestic problems and brought in a flood of consumer goods that could-at least for a while-provide greater satisfactions for the masses.

Domestic repression in each of the three countries was essential to eliminate any serious opposition to imperialism, militarism, or racism. It was used to destroy the bargaining power of unions and the political power not only of communists, socialists, and liberals but of smaller enterprises. It helped hold down wages and social benefits and channel more money and power into the hands of big business and its political allies. Militarism, in turn, helped each of the Axis countries escape from the depression, while also providing the indispensable power needed for both imperial ventures and domestic pacification.
All of the other elements were invigorated by racism, which served as a substitute for class struggle and a justification of any and all brutalities committed by members of the Master Race (whether Japanese, German, or Italian) against "inferior" beings. This may not have been the most efficient of all possible formulae for exploitation, but it was theirs. No one of these elements, of course, was either new or unique. None of the "haves" among the capitalist powers, as the fascists pointed out again and again, had built their positions without imperialism, militarism, repression, and racism. The new leaders of the three "have nots," as the fascists pointed out, were merely expanding on the same methods. "Let these 'well-bred' gentry learn," proclaimed Hitler, "that we do with a clear conscience the things they secretly do with a guilty one." There was nothing particularly new in Mussolini's imperialism and militarism. His critics at the League of Nations in 1935, when a weak anti-Italian embargo was voted on, may have seemed shocked by his use of poison gas against Ethiopian troops, but he did nothing that French, British, English, and Dutch forces had not done earlier in many other countries. The Japanese and Germans, however, were a little more original. In China and other parts of Asia, the Japanese invaders used against Koreans, Chinese, Burmese, Malayans, and other Asians even harsher methods than those previously used by white invaders. Similarly, up to a certain point, the Nazi war crimes consisted largely of inflicting on white Europeans levels of brutality that had previously been reserved only for Asians, Africans, and the native populations of North, Central, and South America. In open violation of the so-called "laws of war," German, Japanese, and Italian officials-to the consternation of old-style officers from the upper class "gentry"-ordered the massacre of prisoners. All three regimes engaged in large-scale plunder and looting. Since German-occupied Europe was richer than any of the areas invaded by the Japanese or Italians, the Nazi record of exploitation is more impressive. "Whenever you come across anything that may be needed by the German people," ordered Reichsmarshall Goering, "you must be after it like a bloodhound...." The Nazi bloodhounds snatched all gold and foreign holdings from the national banks of seized countries, levied huge occupation costs, fines and forced loans, and snatched away tons of raw materials, finished goods, art treasures, machines, and factory installations. In addition to this unprecedented volume of looting, the Nazis revived the ancient practice of using conquered people as slaves. In doing so, they went far beyond most previous practices of imperial exploitation. By 1944, "some seven and a half million civilian foreigners were toiling for the Third Reich.... In addition, two million prisoners of war were added to the foreign labor force." Under these conditions German industrialists competed for their fair share of slaves. As key contributors to the "Hitler Fund," the Krupps did very well. "Besides obtaining thousands of slave laborers, both civilians and prisoners of war, for its factories in Germany, the Krupp firm also built a large fuse factory at the extermination camp at Auschwitz, where Jews were worked to exhaustion and then gassed to death." Domestic repression by the fascists was directed at both working-class movements and any other sources of potential opposition. 
In all three countries the fascists destroyed the very liberties which industrialization had brought into being; if more was destroyed in Germany than in Italy and more in Italy than in Japan, it was because there was more there to destroy. All three regimes succeeded in reducing real wages (except for the significant increments which the unemployed attained when put to work by the armaments boom), shifting resources from private consumption to private and public investment and from smaller enterprises to organized big business and channeling income from wages to profits. As these activities tended to "de-class" small entrepreneurs and small landowners, this added to the pool of uprooted people available for repressive activities, if not for the armed services directly. Moreover, each of the three regimes attained substantial control over education at all levels, cultural and scientific activities, and the media of communication.

In Germany, however, domestic repression probably exceeded that of any other dictatorial regime in world history. An interesting, although little known, example is provided by Aktion t 4. In this personally signed decree, Hitler ordered mercy killing for hospital patients judged incurable, insane or otherwise useless to the war effort, thereby freeing hospital beds for wounded soldiers. At first the patients were "herded into prisons and abandoned castles and allowed to die of starvation." Since this was too slow, the Nazis then used "a primitive gas chamber fed by exhaust fumes from internal combustion engines." Later they used larger gas chambers where "ducts shaped like shower nozzles fed coal gas through the ceiling . . . Afterward the gold teeth were torn out and the bodies cremated." Two years later, after about ten thousand Germans were killed in this manner, a Catholic bishop made a public protest and the extermination campaign was called off. By this time, however, Aktion t 4 had been replaced by Aktion f 14, "an adaptation of the same principles to the concentration camps, where the secret police kept their prisoners-socialists, communists, Jews and antistate elements."

By the time he declared war on the United States in December 1941, Hitler extended Aktion f 14 to all conquered territories in his "Night and Fog" (Nacht und Nebel) decree, through which millions of people were spirited away with no information given their families or friends. This was an expansion of the lettres de cachet system previously used by French monarchs and the tsar's police against important state prisoners. Under this method untold thousands vanished into the night and fog never to be heard of again.

Each of the three regimes, moreover, developed an extra-virulent form of racism to justify its aggressive drive for more and more "living space" (in German, the infamous Lebensraum). Italian racism was directed mainly against the Africans-although by the time Italy became a virtual satellite of Nazi Germany, Mussolini started a massive anti-Jewish campaign. Japanese racism was directed mainly against the Chinese, the Indochinese, and in fact, all other Asiatic people and served to justify, in Japanese eyes, the arrogance and brutality of the Japanese troops. The largest target of Nazi racism was the Slavs, who inhabited all of the Eastern regions destined to provide Lebensraum for the Master Race. And during World War II more Slavs were killed than any other group of war victims in previous history. But Nazi racism went still deeper in its fanatical anti-Semitism.
Hitler, of course, did not invent anti-Semitism, which ran as a strand through most significant ideologies of the previous century. While a strong strain of anti-Semitism has usually characterized the Catholic church, Martin Luther, the founder of Protestantism, went further in urging that Jewish "synagogues or schools be set on fire, that their houses be broken up and destroyed." 18

Nazi anti-Semitism brought all these strands together into a concentrated form of racism that started with looting, deprived the German Jews (about a quarter of a million at that time) of their citizenship and economic rights under the Nuremberg Laws of 1935, and then-following Martin Luther's advice with a vengeance-led to the arson, widespread looting, and violence of the Kristallnacht ("The Night of the Broken Glass") of November 1938.

Early in 1939 Hitler declared, in a Reichstag speech, that if a world war should ensue, "the result will be . . . the annihilation of the Jewish race throughout Europe," a threat and near-prophecy that he kept on repeating in his public statements. A few weeks after the Nazi invasion of Russia he started to make it a reality with a decree calling for a "total solution" (Gesamtlösung) or "final solution" (Endlösung) of the Jewish question in all the territories of Europe which were under German influence.

The "final solution" went through various stages: at first simply working Jews to death, then gassing them in the old-style chambers used under Aktion t 4, then using still larger gas chambers capable of gassing six thousand prisoners a day-to the lilting music of The Merry Widow-through the use of hydrogen cyanide.

While business firms competed for the privilege of building the gas chambers and crematoria and supplying the cyanide, recycling enterprises also developed. The gold teeth were "melted down and shipped along with other valuables snatched from the condemned Jews to the Reichsbank.... With its vaults filled to overflowing as early as 1942, the bank's profit-minded directors sought to turn the holdings into cold cash by disposing of them through the municipal pawnshops." Other recycling operations included using the hair for furniture stuffing, human fat for making soap, and ashes from the crematoria for fertilizer. While a small number of cadavers were used for anatomical research or skeleton collections, a much larger number of live persons-including Slavs as well as Jews-were used in experimental medical research for the German Air Force on the effects on the human body of simulated high-altitude conditions and immersion in freezing water.

All in all, of an estimated 11 million Jews in Europe, between 5 and 6 million were killed in the destruction chambers (and work gangs or medical laboratories) at Auschwitz, Treblinka, Belsen, Sobibor and Chelmno, as well as minor camps that used such old-fashioned methods as mere shooting.

p25 FASCIST IDEOLOGIES

Centrally controlled propaganda was a major instrument for winning the hearts of the German, Japanese, and Italian people. The growth of the control apparatus coincided with the flowering during the 1920s and 1930s of new instruments of propagandistic technology, particularly the radio and the cinema, with major forward steps in the arts of capitalist advertising. "Hitler's dictatorship," according to Albert Speer, "was the first dictatorship of an industrial state in this age of technology, a dictatorship which employed to perfection the instruments of technology to dominate its own people."
Apart from technology, each of the Axis powers used marching as an instrument of dominating minds. In discussing this method of domination, one of Hitler's early colleagues, Hermann Rauschning, has given us this explanation: "Marching diverts men's thoughts. Marching kills thought. Marching makes an end of individuality. Marching is the indispensable magic stroke performed in order to accustom the people to a mechanic, quasi-ritualistic activity until it becomes second nature."

The content of fascist propaganda, however, was more significant than its forms or methodology. In essence, this content was a justification of imperial conquest, rampant militarism, brutal repression, and unmitigated racism. Many fascist theorists and intellectuals spun high-flown ideologies to present each of these elements in fascist exploitation in the garb of glory, honor, justice, and scientific necessity.

The mass propagandists, however (including not only Hitler, Mussolini, and their closest associates, but also the flaming "radicals" of the Japanese ultra-right), wove all these glittering abstractions into the super-pageantry of a cosmic struggle between Good and Evil, between the Master Race which is the fount of all culture, art, beauty, and genius and the inferior beings (non-Aryans, non-Romans, non-Japanese) who were the enemies of all civilization. As the stars and the planets gazed down upon this apocalyptic struggle, the true defenders of civilization against bolshevism and racial impurity must descend to the level of the enemies of culture and, for the sake of mankind's future, do whatever may be necessary in the grim struggle for survival. Thus, bloodletting and blood sacrifice became a spiritual imperative for the people, an imperative transcending mere materialism.

This holy-war psychology was backed up by the indiscriminate use of any concept, any idea, theory, or antitheory that was useful at a particular time or place. Liberalism and monarchism, individualism and collectivism, hierarchic leadership and egalitarianism, scientific management and organic spontaneity, private enterprise and socialism, religion and atheism-all were drawn upon as the condition warranted-to polish the image of the nation's leader and play upon the emotions of both establishment and masses. No human interest, drive, or aspiration was safe from exploitation. To help in organizing support of specific groups, promises were made to workers as well as businessmen, peasants as well as landowners, rural folk as well as urbanites, the old nobility as well as the "common man," the old as well as the young, women as well as men.

p28 One of the great successes of the classic fascists was to concoct misleading pronouncements on their purposes and practices. Anti-fascists have often accepted some of these self-descriptions or added part-truths of their own. The result has been a vast structure of apparently indestructible myths. Today, these myths still obscure the nature of classic fascism and of present tendencies toward new forms of the old horror.

Although the classic fascists openly subverted constitutional democracy and flaunted their militarism, they took great pains to conceal the Big Capital-Big Government partnership. One device for doing this was the myth of "corporatism" or the "corporate state." In place of geographically elected parliaments, the Italians and the Germans set up elaborate systems whereby every interest in the country-including labor-was to be "functionally" represented.
In fact, the main function was to provide facades behind which the decisions were made by intricate networks of business cartels working closely with military officers and their own people in civilian government.

p29 There is no doubt that in all three countries the consolidation of the fascist establishment was supported by a psychological malaise that had hit the lower middle classes harder than anyone else. But if one examines the support base of classic fascism, it is hard to avoid the conclusion that the fascists had multi-class support. Many workers joined the fascist ranks-even former socialist and communist leaders. To the unemployed workers not represented by trade unions or the socialist movement, fascism offered jobs and security and delivered on this promise. Although the older aristocrats were somewhat divided on the subject, many highly respectable members of the landed aristocracy and nobility joined the fascist ranks. The great bulk of civil service bureaucrats was won over. Most leaders of organized religion (despite some heroic exceptions in Germany and some foot-dragging in Italy) either tacitly or openly supported the new regimes. Leading academicians, intellectuals, writers, and artists toed the line; the dissident minority who broke away or left the country made the articulation of support by the majority all the more important. Hitler enjoyed intellectual support, if not adulation, from the leading academicians in German universities. In Japan, the Showa Research Association brought many of the country's leading intellectuals together to help the imperial leaders formulate the ...

p30 ... instead of operating directly, big capital under fascism operated indirectly through an uneasy partnership with the fascist politicos, the military leaders, and the large landowners. If the privileged classes won many advantages as a result of the indispensable support they gave to the fascist regimes in Italy, Japan, and Germany, they also paid a high price. In addition to being subjected to various forms of political plunder, they lost control of many essential elements of policy, particularly the direction and tempo of imperial expansion.

Second, the shift from constitutional to fascist capitalism meant structural changes, not merely the removal of a fig leaf. The fascists suppressed independent trade unions and working-class parties and consolidated big capital at the expense of small business. They destroyed the democratic institutions that capitalism had itself brought into being. They wiped out pro-capitalist liberalism and old-fashioned conservatism as vital political forces.

Third, while classic fascism was terroristic, it was also beneficent. The fascists provided jobs for the unemployed and upward mobility for large numbers of lower and middle class people. Although real wage rates were held down, these two factors alone-in addition to domestic political plunder and war booty-improved the material standard of living for a substantial number, until the whole picture was changed by wartime losses.

... roughshod over his or her students may be called a "fascist pig."

p31 ... for thousands of years hundreds of governments have been fiercely brutal-sometimes on conquered people only, often on their own people also. If we stick by this terminology, then many of the ancient Greeks and Hebrews, the old Roman, Persian, Byzantine, Indian, and Chinese empires, the Huns, the Aztecs, and the tsars who ruled Russia were also fascist.
Some of these, let me add, also exercised total control over almost all aspects of human life. Indeed, "force, fraud and violence," as Carl Friedrich and Zbigniew Brzezinski have pointed out, "have always been features of organized government and they do not constitute by themselves the distinctively totalitarian operation." 28 But concentrated capital, modern-style government, and constitutional democracy are relatively new features of human history-as is also the kind of Big Business-Big Government alliance that subverts constitutional democracy. Anyone has the constitutional right to pin the label "fascist" or "fascistic" on the brutalities of a Stalin or his heirs in various "Marxist-Leninist" countries, or on the bloodbath inflicted by American firepower on Indochina for a full decade, or even on the latest case of police brutality in a black or Latin ghetto of New York City. This may be a forceful way of protesting brutality. It is much less than a serious examination of the realities of classic fascism or the accumulating tendencies toward new forms of fascism toward the end of the twentieth century.

______________________________________________________________________

The Mysterious Establishment
excerpted from the book
Friendly Fascism
The New Face of Power in America
http://www.thirdworldtraveler.com/Fascism/Mysterious_Establish_FF.html

p54 American Heritage Dictionary: "Establishment: An exclusive group of powerful people who rule a government or society by means of private agreements or decisions."

The American Establishment is not an organization. Nor is it a simple coalition or network. Like the industrial-military complex, it has no chairman or executive committees. Like the Golden International, the Establishment is more complicated than any complex. It is a complex of complexes, a far-flung network of power centers, including institutional hierarchies. These are held together less by hierarchical control and more by mutual interests, shared ideologies, and accepted procedures for mediating their endless conflicts. Like the establishments in other First World countries, the American Establishment is not just a network of State leaders. Nor is it merely a coalition of private governments. It is an interweaving of two structures-polity and economy-that under industrial capitalism have never been independent of each other. It is the modern partnership of big business and big government. As such, it is much looser and more flexible than the establishments of classic fascism. And in contrast with them, above all, it operates in part through-and is to an important extent constrained by-the democratic machinery of constitutional government. Private agreements and decisions-even well-protected secrecy-play a large role in its operations; this adds to the Establishment's inherent mystery. It is why people often refer to it as the "invisible government." Yet many of its agreements and decisions are open to public view. Indeed, so much information is available in public reports, congressional hearings, and the specialized press that anyone trying to make sense of it all runs the danger of being drowned in a sea of excessive information. This, of course, is the problem faced by all intelligence agencies, which usually feed on a diet of 95 percent public data spiced with 5 percent obtained through espionage. Also, as with intelligence and counterintelligence, there are huge information gaps side by side with huge amounts of deliberately deceptive misinformation.
p56 The number of people actively involved-even at the very top-is too large for any meeting or convention hall. Robert Townsend, who headed Avis before it was swallowed by ITT, has made this estimate: "America is run largely by and for about 5,000 people who are actively supported by 50,000 beavers eager to take their places. I arrive at this figure this way: maybe 2,500 megacorporation executives, 500 politicians, lobbyists and Congressional committee chairmen, 500 investment bankers, 500 partners in major accounting firms, 500 labor brokers. If you don't like my figures, make up your own . . ." I am convinced his figures are far too small. If there are 4,000-6,000 at the top, they are probably able to deploy at least five times as many in executive management, who in turn operate through at least ten times as many junior and contingent members. My total ranges between a quarter and a third of a million. Even without adding their dependents, this is a far cry from a small handful of people. Yet in relative numbers this large number of people is still a "few." A third of a million people numbers less than two-tenths of one percent of the U.S. population of about 220 million; and with their immediate family members this would still be less than 10 percent. It is less than one hundredth of 1 percent (.0001) of the "Free World" under the shared leadership of the United States. Seldom, if ever, has such a small number of people done so much to guide the destinies of so many over such vast expanses of the planet.

There are conflicts at all levels. Most of these are rooted in divergent or clashing interests, values, perceptions, and traditions. Some are minor, others are major. Many minor crises at various points in the Establishment are daily occurrences, surprising only the uninitiated. But "whenever we are prepared to talk about a deep political crisis," as Papandreou observes, "we should assume that the Establishment (as a whole) is undergoing a crisis, either because of internal trouble-namely, because some of its members have seen fit to alter their relative position within the coalition-or because of external trouble, because another challenger has risen who wants a share of the power." The bulk of these conflicts are resolved through bargaining, accommodation, market competition, and government decision making, particularly through bureaucratic channels. A few more come to the surface through the legislative, judicial, or electoral processes. Coherence is provided not only through these procedures for conflict adjustment but also by large areas of partially shared interests, values, and ideologies.

It is constantly changing. If the Establishment were a mere defender of the status quo, it would be much weaker. While some of its members may resist many changes or even want to "turn the clock back," the dominant leaders know that change is essential to preserve, let alone expand, power. "If we want things to stay as they are," the young nephew said to his uncle, the prince, in Lampedusa's The Leopard, "things have got to change. Do you understand?" Power holders may not understand this at once, but events drive the point home to them-or drive them out. Thus many of the changes occur in the membership of the Establishment which, at any point, may expand or contract. If the Establishment is a target, it is-in Leonard Silk's apt words for the "overall corporate government complex"-a "moving target."

There is no single central conspiracy.
I agree with Karl Popper when he says: "Conspiracies occur, it must be admitted. But the striking fact which, in spite of their occurrence, disproves the conspiracy theory is that few of these conspiracies are ultimately successful." Many of them have consequences entirely or partly unintended or unforeseen. Popper adds the observation that the successful ones rarely come to public attention and that there is usually a "situational logic" that transcends any conscious planning. When there is a fire in an auditorium, people do not get together to plan what to do. The logical response to the situation is "Get out." Some will do it in an orderly fashion; others might be rather rough toward people who get in their way. The Establishment often operates this way. Some of its most historic achievements have been forced on it by "fires" that break out suddenly, often unanticipated. The major advances in the welfare state, for example, have historically been opposed by most elements in capitalist establishments who were usually too stupid or nearsighted to realize that these measures would put a floor (or elevator) under market demand, thereby promoting the accumulation of corporate capital and taking the sting out of anticapitalist movements.

p59 The greatest difference between the Ultra-Rich and the rest of us is that most of them are addicted to sensory gratification on a grand scale. In part, as Ferdinand Lundberg has documented, this gratification takes the form of palatial estates, fabulously furnished town houses, private art collections, exclusive clubs, summer and winter resorts on many continents, membership in social registers, birth and burial under distinctive conditions, etc. It also involves an array of services going far beyond the ordinary housekeepers, cooks, gardeners, masseurs, valets, chauffeurs, yacht captains, and pilots of the large fleet of rich people's private aircraft. But above all, the valets of the ultra-rich also include expert executives, managers, advisers, braintrusters, ghostwriters, entertainers, lawyers, accountants, and consultants. Most of their services are more expensive (and far more sophisticated) than those enjoyed by the emperors, emirs, and moguls of past centuries. Some are freely given in exchange for the privilege of approaching the throne and basking in the effulgent glory of accumulated wealth. Most are paid for by others-either being written off as tax deductions or appearing as expenses on accounts of various corporations, banks, foundations, universities, research institutes, or government agencies. These payments for modern valet service can be unbelievably high. Indeed, one of the earmarks of the Ultra-Rich in America is that they even have millionaires-most of them involved in big business-working for them. Among the Ultra-Rich, of course, there are the so-called "beautiful people" who nourish their addiction merely by using a little of what accrues to them from fortunes managed by others. These are the "idle rich," the rentiers whose hardest work, beyond clipping coupons, is flitting from one form of entertainment to another. There are also a few deviants who betray their class by denouncing their addiction, getting along with small doses only, or actively using their money to finance liberal or left wing causes. The great majority, however, seem to be stalwart conservatives who abstain from idleness by some form of "public service"-that is, by holding the most prestigious institutions of philanthropy, higher education, health, culture,
and art. There are also those whose addiction is more powerful; they can satisfy it only by larger and larger doses of money or power. This can be done only by exercising directly or indirectly their roles as overseers, roles legitimized by their personal participation in the management of corporate property.

p62 Adam Smith: "Wherever there is great property, there is great inequality. For one very rich man, there must be at least five hundred poor, and the affluence of the few supposes the indigence of the many."

p63 C. Wright Mills: "No one can be truly powerful unless he has access to the command of major institutions, for it is over these institutional means of power that the truly powerful are, in the first instance, truly powerful ..."

p63 Richard Barber: "Their [a few immense corporations] incredible absolute size and commanding market positions make them the most exceptional man-made creatures of the twentieth century.... In terms of the size of their constituency, volume of receipts and expenditures, effective power, and prestige, they are more akin to nation-states than business enterprises of the classic variety."

p64 If better means more powerful, then the rich and the ultra-rich are truly better than most people. While you and I may work for major institutions, they are part of or close to (sometimes on top of) the cliques that control them. Their family life is also different. For ordinary people, family planning has something to do with control over the number and spacing of children. For the rich, family planning involves spawning trust funds and family foundations that hide wealth and augment control of corporate clusters and complexes. As a result of brilliant family planning, the formal institutions of corporate bureaucracy and high finance have not led to a withering away of the Morgans, Rockefellers, Harrimans, du Ponts, Weyerhaeusers, Mellons, and other oligarchic families of an earlier era. Nor have they prevented the rise of newer family networks such as the Kennedys. Rather, the nature of family wealth and operations has changed. "Rather than an Irenee du Pont exercising absolute domination, now the [du Pont] family fortune has been passed on to a number of heirs, even as the family's total wealth continues to grow. This splitting up of family stock blocks does not mean that capital no longer tends to accumulate. Just the opposite . . . du Pont wealth, and the power of their business class as a whole, is not diminishing, but growing." The growth of familial power, paradoxically, has been made possible by the sharing of that power with nonfamily members who handle their affairs professionally and mediate inevitable intrafamily disputes. Many of the corporate institutions, moreover, have been built and are guided by people who are merely rich and are ultra-rich only in intent. Whether the heirs of old wealth or the creators of new wealth, they mingle with the ultra-rich in clubs and boardrooms and play an indispensable role in overseeing corporate affairs. The role of overseer no longer requires total ownership-or even owning a majority of a company's stock. Most corporations are controlled by only a small minority of corporate stockholders. By usual Wall Street calculations, 5 percent stock ownership is enough to give total control; in a few cases, the figure may rise to 10 percent. The larger the number of stockholders, the smaller this percentage. This "internal pyramiding" is carried still further through chains of subsidiaries and holding companies.
Thus, strategic control of a small block of holding company stock yields power over a vast network of accumulated power and capital. Many of these networks include both financial corporations and corporations in industry, utilities, communications, distribution, and transportation. Most of the overseers are what Herbert Gans called Unknowns. "How many well-informed people," asks Robert Heilbroner, "can name even one of the chief executive officers-with the exception of Henry Ford II-of the top ten industrial companies: General Motors, Standard Oil (N.J.), Ford, General Electric, Socony, U.S. Steel, Chrysler, Texaco, Gulf, Western Electric? How many can name the top figures in the ten top utilities or banks-perhaps with the exception of David Rockefeller?" While the names of chief executive officers are a matter of public record, the names of the top stockholders are not. Most wealthy individuals, as Richard Barber has shown, "are tending to withdraw from direct stock ownership in companies and to funnel their investments through institutions, especially pension funds and mutual funds. This latter development has substantially increased the power of institutions-pension funds, banks, insurance companies and mutual funds-in the affairs of even the largest corporations."

p90 As the takeoff toward a more perfect capitalism began after World War II, popular support of the system was assured in large part by the system's performance-more striking than ever before-in providing material payoffs and physical security. The record of over a third of a century has included the avoidance of mass depression or runaway inflation in any advanced capitalist country, expanded mass consumption, the maintenance or expansion of personal options, no near-war between any advanced capitalist countries and, above all, no world war. Yet these achievements have depended upon a level of commitment among the elites at the Establishment's lower and middle levels that could scarcely have been forthcoming if either had seriously doubted the legitimacy of the evolving order. This legitimacy was fostered by a three-pronged ideological thrust.

The first prong has consisted of a sophisticated and passionate reiteration in a thousand variations of the simple proposition: communism and socialism are bad. Before World War II there were many small, right-wing movements whose members were driven by nightmares of evil conspirators-usually communists, Jews, Catholics, "niggers" or "nigger lovers"-bent on destroying the "American way of life." During the immediate prewar period, their fears were expressed directly in the Dies Committee's crusade against "pinkos" in the Roosevelt administration. After World War II, these witch-hunting nightmares were transformed into dominant ideology. Professional antiradicalism became entrenched during the brief period of atomic monopoly. It grew stronger in the more frenetic period of nuclear confrontation after Russia acquired atomic bombs. With some toning down and fine tuning, it has maintained itself during the present and more complex period of conflict with socialism and communism. During each of these stages it meshed rather well with anti-capitalist ideology in the Soviet Union, China, Cuba, and other communist countries, thereby providing an ideological balance to parallel the delicate balance of nuclear terror. More specifically, it has given the overall rationale for the extension of America's multicontinental frontiers.
It has helped link together the many disparate elements in America's quasi-empire. In large measure, the unity of the NATO countries in Europe had depended on their fear of Soviet communism, and the allegiance of Japan to the United States on the fear of either Soviet or Chinese communism. American aid to "have-not" countries, in turn, has often varied with their ability to produce-or invent-a communist threat on or within their borders. At home, anti-communism has provided the justification needed by the ambitious leaders of the massive military establishment. As Colonel James A. Donovan wrote after retirement from the U.S. Marine Corps, "If there were no Communist bloc . . ., the defense establishment would have to invent one." Above all, anti-communism has been a valuable instrument in containing pressures for a more rapid expansion of welfare-state measures as opposed to more generous forms of aid to business. In this sense, the ideology of anti-communism has also been anti-socialistic. Although favoring corporate and military socialism for the benefit of businessmen and military officers, the anti-communists have bitterly attacked the "creeping socialism" that aims to benefit the poor, the underorganized, and the ethnic minorities.

The power and the imaginative vigor of anti-communist and antisocialist ideology have stemmed from its many interlacing currents. At one extreme, there have been those like Senator Joseph McCarthy and Robert Welch of the John Birch Society, both of whom charged that Secretaries of State Dean Acheson and George Marshall were communist agents or dupes. In the middle, people like Acheson and Marshall themselves developed the more influential, mainstream version of anticommunist ideology. By deeds as well as words, they attempted to prove they were more anti-communist than their detractors. Toward the left, many brilliant intellectuals have done their own thing less stridently, demonstrating the inefficiency of communist and socialist practice and the stodginess of communist and socialist doctrine. Each of these currents has been invigorated by significant numbers of former communists and socialists, who have atoned for their former sins by capitalizing on their special knowledge of communist iniquity or socialist futility. Each helped publicize many of the Soviet Union's hidden horrors-although the tendency has been less to understand the deformation of Soviet socialism (and its roots) and more to warn against the horrors that would result from any tinkering with the American system. Thus, like a restaurant with a large and varied menu, anti-communist and anti-socialist ideology has been able to offer something for almost any taste. Each dish, moreover, is extremely cheap. A high price is paid only by those who refuse to select any variety, thus opening themselves to the charge of being "soft on communism." For over a quarter of a century there has been only a small minority-particularly in the realm of government service and academia-willing to pay the price. The result has been a rather widespread conformity with ritualistic anti-communism and anti-socialism and a powerful consensus on the virtues of the established order.

The second prong of the ideological thrust consists of even more sophisticated variations on an equally simple proposition: the capitalist order is good. Before World War II one of the weakest links in the established order was the image of the corporation. For its consumers, the corporation said, "The public be damned!"
On matters of broad public policy-particularly during the depths of the Great Depression-corporate leaders often distinguished themselves by ignorance and incompetence. There was blatant evidence to support President Roosevelt's epithet "economic bourbons." Even during the 1950s Charles Wilson, a former General Motors president, as secretary of defense, was able to suggest that what's good for General Motors is good for the United States. In short, the large corporation-as the central symbol of capitalism-was selfish, venal, and mean. To cope with this situation, huge investments were made in public relations campaigns. Some of these campaigns concentrated on the corporate image. Many of them set forth in excruciating detail the infinite blessings of private ownership and free, competitive private enterprise. An exhaustive analysis of the material appears in The American Business Creed, by a group of Harvard economists. The essence of this so-called creed (to which no serious corporate executives could possibly have given credence) was the ridiculous assumption that the market was mainly composed of small, powerless firms and that large, powerful corporations were controlled by huge numbers of small stockholders instead of a small minority of large stockholders, managers, or investment institutions.

During the same period, however, a more influential ideology for postwar capitalism was formulated by various groups of pragmatic intellectuals. Their problem was that many corporate managers and their truly conservative economists were traditionally rather blunt in stating that their job was moneymaking, period-no nonsense about social responsibility. Besides, even the most dedicated corporate lawyers often remembered Justice Oliver Wendell Holmes's dictum on the subject: "The notion that a business is clothed with a public interest and has been devoted to the public use is little more than a fiction intended to beautify what is disagreeable to others." Nonetheless, the Advertising Council spent billions over the decades in creating fictional images of business "clothed with public interest." In this they were helped by uninhibited academics like Carl Kaysen, who stated that in the corporate world of Standard Oil, American Telephone and Telegraph, Du Pont, General Electric, and General Motors "there is no display of greed or graspiness: there is no attempt to push off onto the workers or the community at large part of the social costs of the enterprise. The modern Corporation is a soulful corporation." Others have pursued the soulful theme even further by suggesting that the executives of transnational corporations are the real "world citizens" whose efforts may soon usher in a new era of permanent peace.

The third prong in the ideological package is the tacit-but breathtaking-assertion or premise that capitalism no longer exists. "A research report of the United States Information Agency," C. L. Sulzberger revealed in a typically incisive column back in 1964, "has ruefully discovered that the more our propaganda advertises the virtues of 'capitalism' and attacks 'socialism' the less the world likes us . . . Most foreigners don't regard 'capitalism' as descriptive of an efficient economy or a safeguard of individual rights. To them it means little concern for the poor, unfair distribution of wealth, and undue influence of the rich."
37 But what the USIA allegedly needed a research report to discover concerning capitalism's image in other countries was already well understood by capitalism's major publicists and spokesmen at home. As far back as 1941, in his "American Century" editorial, Henry Luce used the well-established term "free economic system" instead of "capitalism." The international capitalist market protected by American hegemony became the "free world" and "freedom" became the code word for both domestic capitalism and capitalist empire. In Carl Kaysen's article on the soulful corporation, the nasty word "capitalism" makes not a single entry. Its use would have introduced a jarring note. It would also have violated a powerful norm among economists, namely, that instead of trying to analyze the workings of modern capitalism, capitalism should be discussed mainly in the framework of criticizing Marxian economics or making passing references to the imperfections in Adam Smith's model of perfect competition. When Governor George Romney of Michigan announced that "Americans buried capitalism long ago, and moved on to consumerism," what was really being buried was the old-time conservative defense of capitalism as unadulterated self-interest superior to socialistic altruism. True believers like Ayn Rand were of no avail in charging that "if the 'conservatives' do not stand for capitalism, they stand for and are nothing" and in proclaiming (like one of her characters in Atlas Shrugged) "We choose to wear the name 'Capitalism' printed on our foreheads boldly, as our badge of nobility."

The most intelligent spokesmen for the changing capitalist order wear a variety of names on their foreheads. The first term-and still the most appealing-has been "mixed economy." The persuasive power of this concept stems mainly from lip service to the perfect-competition model as defined in classical or neoclassical ideologies. If capitalism used to be what Adam Smith advocated, the reasoning goes, then capitalism has been replaced by a mixture of private and public enterprise-or even of capitalism and socialism. This mixture blends the (alleged) productive efficiency of the former with the social justice sought by the latter. At the same time, it preserves the beautiful equilibrium of the classical model by providing opportunities for all interests in society to organize in their own behalf. From this competition in both the political and economic marketplaces comes a peaceful resolution of conflicts through the negotiation, bargaining, pressure and counter-pressure, propaganda and counterpropaganda that underlie electoral campaigns and executive, legislative, and judicial decision making. From this confused but peaceful process of political competition among selfish interests there emerges-as though by some invisible guiding hand-the best possible satisfaction of the public interest. Granted, there may be some imperfections in this political marketplace, too much strength at some points and too much weakness at others. But then enlightened government, with the help of Ivy League professors, can come in as a balancing factor and restore the equilibrium. This pluralistic myth is often reinforced by statistical exercises suggesting that the unfair distribution of wealth and influence was on its way out and the majority of the population had attained "affluence."
Thus the mere contemplation of the "objective data" carefully selected under his direction induced the usually self-contained Arthur Burns (later named chairman of the Council of Economic Advisers and the Federal Reserve Board) into the following orgasmic spasm of economic hyperbole: "The transformation in the distribution of our national income . . . may already be counted as one of the great social revolutions in history." 30 With such well-certified "evidence" coming across their desks, former Marxists or revolutionaries were able to explain their conversion to the existing order with something more convincing than diatribes (which often appeared in the form of Trotskyism) against Stalinism and more self-satisfying than the attacks on former comrades made by the former communists who converted to professional anticommunism. By 1960, Seymour Martin Lipset was able to proclaim that "the fundamental political problems of the industrial revolution have been solved." 40 This viewpoint was enlarged by Daniel Bell's sadly joyous funeral oration over the end of socialist or communist ideology in the Western world: "For the radical intelligentsia, the old ideologies have lost their 'truth' and their power to persuade . . . there is a rough consensus among intellectuals on political issues: the acceptance of the Welfare State; the desirability of decentralized power; a system of mixed economy and political pluralism. In that sense, too, the ideological age has ended."

In continuation of the same argument, Bell has moved to replace the old ideologies of competing systems with a new end-of-ideology ideology, celebrating the new power of theory, theoreticians, and his best friends. With more wit, passion, and inventiveness than most competing sociologists, Bell has capitalized on the fact that both Western capitalism and Russian socialism have been forms of industrialism. In so doing he defines industrialism loosely as something that has to do with machines, almost completely glossing over the organizational and imperial aspects of industrial capitalism. This allows him to proclaim the coming of something called "postindustrialism," which is characterized by the increasing relative importance of services as contrasted with goods, of white-collar employment, and of more technical and professional elites. The essence of this allegedly "post" industrialism is "the preeminence of the professional and technical class." This preeminence, in turn, is based on "the primacy of theoretical knowledge-the primacy of theory over empiricism and the codification of knowledge into abstract systems of symbols." The masters of the new theory and symbols are the "knowledge elites" and their domicile is the university, "the central institution of post-industrial society." With equal wit and a larger audience, Galbraith propounded a similar theme when, in 1968, he claimed that power in the new industrial state has shifted from capital to the "organized intelligence" of the managerial and bureaucratic "technostructure." For Bell, if the new knowledge elites do not make the ultimate decisions, it is because of a combination of old-fashioned politics and new cultural styles, particularly among younger people who tend to revolt against the rule of reason itself. If these obstacles can be overcome and if enough resources are channeled into R & D and the universities, then man's reason shall at last prevail and rational calculation and control will lead to stable progress.
For Galbraith, the remedy was similar, since the system of industrial oligarchy "brings into existence, to serve its intellectual and scientific needs, the community that, hopefully, will reject its monopoly on social purpose." Galbraith's hope lay (at that time) in the wistful presumption that "the educational and scientific estate, with its allies in the larger intellectual community" might operate as a political force in its own right. Although both Bell and Galbraith have been willing to concede the existence of capitalism (and Galbraith has more recently revealed himself as an advocate of public ownership of the one thousand corporate giants whom he describes as the "planning system"), 44 most Establishment social scientists in both the Ivy League and the minor leagues seem to have adopted methodological premises that rule capitalism out of existence. Without the wit, wisdom, or vision of a Bell or Galbraith, they have busied themselves in efforts to provide technical solutions to political, moral and socio-economic problems. The problems they presume to solve-or in Daniel P. Moynihan's more modest terms, to cope with-are defined at the higher or middle levels of the Establishment, where decisions are made on which research grants or contracts are to be approved and which professors are to be hired. They are carefully subdivided into categories that reflect the division of labor within the foundations and government contracting agencies. In turn, the presumably independent "knowledge elites" of the educational, scientific, and intellectual estates-having usually abjured efforts to analyze the morality and political economy of the so-called "market system"-are now rated on their performance in the grant-contract market. The badges of achievement are the research proposals accepted by the Establishment, with the rank order determined by the amount of funds obtained. Alongside the older motto "Publish or perish" (which puts the fate of many younger people in the hands of establishment faithfuls on editorial boards) has risen an additional imperative: "Get a grant or contract and prosper." This imperative also applies to department heads, deans, and college presidents who-like professors-are expected to bring in the "soft money" to supplement the "hard money" in the regular college and university budgets. During the early 1960s the largest amounts of "soft money" came from the government agencies involved in the "hardware" and "software" needed by the military and outer-space agencies, including the many programs of "area studies" focused on Asia, Africa, the Middle East, and Latin America. Later, with the civil rights and antiwar movements, a minor avalanche of "soft money" was let loose for research, field work, and demonstration projects in the so-called "anti-poverty" and "model cities" programs. The word went quickly around among the new generation of academic hustlers that "Poverty is where the money is." Under these new circumstances, the serious applicant for funds was well advised to steer clear of root causes or systemic analysis. There was no prohibition against proposing research work or field organization designed to challenge the capitalist system, but no applicant has ever been known to openly propose anything so patently "unsound."
Moreover, many of the wisest heads in the academic community-whether from profound inner disillusionment or in the heat of professional arrogance-openly advocated the treatment of symptoms only and inveighed against wasting time with the examination of systemic roots of poverty, unemployment, inflation, crime, or environmental degradation. On a broader scale, methodology became the "name of the game." A new generation of methodologists learned that with unspoken constraints upon the purpose and content of research and theory, greater importance must be attached to means and form. Younger people who scorned the catch-as-catch-can methodologies of a Bell, Galbraith, or Moynihan-and were embarrassed by their unseemly interest in turning a good phrase-became the new ideologues of scientific methods. On the one hand, "abstracted empiricists" (as C. Wright Mills called them) became frenetic data-chasers eager to produce reams of computer printouts. On the other hand, enthusiastic model-builders erected pretty paradigms from which hypotheses might be deduced. Both sought verification through the application of methods long proven useful in the natural sciences. In this process, they had the aid and participation of many natural scientists perfectly willing to accept admiration from those naive enough to think that their skills in physics, biology, engineering or mathematics were readily transferable to the analysis of social problems. They also enjoyed the guidance or blessings of old-time radicals who-scorched by the heat of the purges or disillusioned by Stalinism-were eager to build a new God in the image of so-called scientific method. These activities became intensely competitive, with ever-changing cliques and currents providing endless opportunities for innovative nuances in the production of iconoclastic conformity and irrelevant relevance. Occasionally, the existence of capitalist society has been allowed to enter into the frame of reference-but only marginally. Thus, it has become fashionable for many social science departments to have a well-behaved "Marxist" in residence: an element of good behavior, of course, is to accept the subdivision of mental labor and be a "Marxist" economist, sociologist, or political scientist rather than dealing with capitalist society as a whole. A more widespread form of marginal acceptance of capitalist reality is the idea of "putting the profit motive to use in achieving social purposes." The reiteration of this imperative in every area from narcotics control to education has become one of the most effective methods of pledging allegiance to the undescribed and unexamined capitalist order. Although these many establishment ideologies have not produced any dedicated loyalty or deep commitment to modern capitalism, they have nonetheless been a major factor in the purification process. They have made it possible for purges and induced conversions of dissidents to be reduced in relative significance and conducted on a low-key, routine basis. They have helped absorb some of the activists of the old "New Left" of the 1960s into the Establishment, purify thoughts and behavior during the 1970s, and channel into harmless-if not profitable-ways the resentments and grievances fed by the many crises and traumas of a more perfect capitalism.

p153 During the so-called "Hundred Year Peace" (1815-1914), all wars among the Great Powers were minor, short, or localized. General peace was preserved in an environment of unending limited war.
The period since 1945 has also been one of limited war. Whatever military action has taken place-whether in Korea, Indochina, the Middle East, Africa, or Latin America-has been geographically limited. Although the devastation has been ghastly, no nuclear weapons have been used. But limited war has created a baffling problem for the leading capitalist powers, particularly the United States: a reduction in military stimulants to economic expansion and capital accumulation. The present condition of the American industrial establishment, writes David Bazelon, "is unthinkable without the benefit of the capacity-building expenditures of the past twenty years induced by war and preparedness measures." The U.S. Arms Control and Disarmament Agency has thought about this in terms that are themselves unthinkable to most Establishment economists: "It is generally agreed that the greatly expanded public sector since World War II, resulting from heavy defense expenditures, has provided additional protection against depression, since this sector is not responsive to contraction in the private sector and has provided a sort of buffer or balance wheel in the economy." Strangely enough, the use of military-growth stimulants in the United States also served to stimulate growth in the two major capitalist societies with relatively small military budgets: Japan and West Germany. An important part of U.S. military expenditures spilled over into both Japan and West Germany in the form of both procurement of supplies and payments for the maintenance of U.S. installations. More indirectly, the U.S. concentration of war-related technology (which includes advanced computerization, communication systems, and electronic controls) gave the largest corporations in other leading countries of the "Free World," particularly Japan and West Germany, an opportunity to catch up with, or plunge ahead of, the United States in civilian technologies and thereby make spectacular advances in world trade.

As the United States began its slow withdrawal from Indochina in 1969, military expenditures began to level off and then-while prices for military goods were still rising-to fall by almost $4 billion from 1969 to 1972. As a proportion of total GNP, military spending fell even more drastically-from 9.1 percent in 1967 and 1968 to around 6 percent in 1979. Expenditures for "international affairs" (closely related to military expenditures) also declined. The size of the U.S. armed forces fell from over 3.5 million in 1968 to 2.1 million in 1979. In other words, the military slowdown under conditions of de-escalation and détente deprived the American economy of a defense against recession that had been provided during the 1960s. This was one of the factors in the recessions that began in 1970, 1974, and 1979. In each case unemployment rose. In 1975, the total end to the hugely destructive war in Indochina was a retrogressive economic force, as unemployment in the United States and other capitalist countries rose to the highest levels since the Great Depression. The response of the industrial-military portion of the Establishment has been prompt, publicly warning against the great perils of becoming weaker than the communist enemy and privately warning against the disastrous economic effects of the slowdown. The positive action has been in two directions: the expansion of new and costly weapons systems and the sale of arms to other countries.
Under conditions of détente, however, the two of these together were insufficient to restore defense spending to the proportions of GNP reached during the Indochina war. Thus the American industrial establishment was subjected to a slow withdrawal of the stimulus to which it had become accustomed. The NATO countries were subjected to a sharp decline in the vigor of the Soviet "threat," which was the official raison d'être for NATO's existence. The capitalist world was subjected for a while to the "threat" of a peaceful coexistence in which the economic stimulus of war and preparedness would no longer be available at the level to which it had become accustomed. With any decline in détente, of course, these conditions change.

UNLIMITED OVERKILL

The dominant logic of "Free World" militarism in a period of limited warfare has been slowly developing during the 1970s. If unlimited warfare is "dysfunctional," then two lines of operation are indicated. The first has been to channel a larger portion of military resources into weapons systems produced by the largest military contractors, even though this means a dwindling number of people in the armed services. The result has been a continuous increase in "overkill" capabilities whose actual use would surely destroy capitalism itself, but whose production and deployment contribute to the maintenance of capital accumulation. Overkill itself is matched by various forms of "overdelivery": globe-circling missiles in addition to bombers; multiple warheads on a single missile (MIRVs); launchings from roving submarines, ocean-floor emplacements and eventually satellite space stations; ocean explosions to produce tsunamis (tidal waves); antiballistic missiles that would themselves emit vast radiation dosages over the territory presumably defended; and, more recently, cruise missiles that could be launched from submarines, planes, or ships, fly at radar-eluding altitudes, and maneuver around defensive fire. Less publicized, and often excluded from official estimates of nuclear megatonnage, is the armory of "tactical" nuclear weapons. These include huge numbers of air-to-ground, ground-to-air, and ground-to-ground missiles, of which over seven thousand are stationed in Europe for use by NATO forces. The average yield of these weapons, according to Robert McNamara as far back as 1964, was about 100 kilotons, about five times greater than the strength of Hiroshima's Little Boy. Moreover, considerable "progress" has been made in developing the biological, chemical, physiological, and nuclear instrumentalities that could offer the prospect, in the words of a high U.S. Navy official, of attaining "victory without shattering cities, industries and other physical assets." The extent of this progress was revealed by the announcement in 1977 of the "neutron bomb" and its promotion for NATO use.

The second has been a massive escalation of arms sales and government-subsidized arms gifts to Third World countries. In the United States, this program-which represents a huge stimulus to American industry-reached $11.2 billion in fiscal year 1977, and then, under the Carter administration, rose to $13.5 billion in fiscal 1979. This activity has been paralleled by similar arms exports from other "First World" countries. A large part of these exports has gone to the Middle East, thereby recycling "petrodollars" for such countries as Iran and Saudi Arabia. A considerable part of the U.S.
exports, in contrast to those from most other First World countries, has gone to Israel, as well as to Third World regimes threatened by domestic upheaval.

______________________________________________________________________

The Specter of Friendly Fascism
excerpted from the book
Friendly Fascism
The New Face of Power in America
http://www.thirdworldtraveler.com/Fascism/Specter_FriendlyFascism_FF.html

The Unfolding Logic

p161 ... as I survey the entire panorama of contending forces, I can readily detect something more important: the outline of a powerful logic of events. This logic points toward tighter integration of every First World Establishment. In the United States it points toward more concentrated, unscrupulous, repressive, and militaristic control by a Big Business-Big Government partnership that-to preserve the privileges of the ultra-rich, the corporate overseers, and the brass in the military and civilian order-squelches the rights and liberties of other people both at home and abroad. That is friendly fascism.

p162 At any particular moment First World leaders may respond to crisis like people in a crowded night club when smoke and flames suddenly billow forth. They do not set up a committee to plan their response. Neither do they act in a random or haphazard fashion. Rather, the logic of the situation prevails. Everyone runs to where they think the exits are. In the ensuing melee some may be trampled to death. Those who know where the exits really are, who are most favorably situated, and have the most strength will save themselves. Thus it was in Italy, Japan, and Germany when the classic fascists came to power. The crisis of depression, inflation, and class conflict provided an ideal opportunity for the cartels, warmongers, right-wing extremists, and rowdy street fighters to rush toward power. The fascist response was not worked out by some central cabal of secret conspirators. Nor was it a random or accidental development. The dominant logic of the situation prevailed. Thus too it was after World War II. Neither First World unity nor the Golden International was the product of any central planners in the banking, industrial, political, or military community. Indeed, there was then-as there still is-considerable conflict among competing groups at the pinnacle of the major capitalist establishments. But there was a broad unfolding logic about the way these conflicts were adjusted and the "Free World" empire came into being. This logic involved hundreds of separate plans and planning committees-some highly visible, some less so, some secret. It encompassed the values and pressures of reactionaries, conservatives, and liberals. In some cases, it was a logic of response to anticapitalist movements and offensives that forced them into certain measures-like the expanded welfare state-which helped them despite themselves. Although the friendly fascists are subversive elements, they rarely see themselves as such. Some are merely out to make money under conditions of stagflation. Some are merely concerned with keeping or expanding their power and privileges. Many use the rhetoric of freedom, liberty, democracy, human values, or even human rights. In pursuing their mutual interests through a new coalition of concentrated oligarchic power, people may be hurt-whether through pollution, shortages, unemployment, inflation, or war. But that is not part of their central purpose. It is the product of invisible hands that are not theirs.
For every dominant logic, there is an alternative or subordinate logic. Indeed, a dominant logic may even contribute to its own undoing. This has certainly been the case with many strong anticommunist drives-as in both China and Indochina-that tended to accelerate the triumph of communism. If friendly fascism emerges on a full scale in the United States, or even if the tendencies in that direction become still stronger, countervailing forces may here too be created. Thus may the unfolding logic of friendly fascism-to borrow a term from Marx-sow the seeds of its destruction or prevention.

p163 A few years before his death, John D. Rockefeller III glimpsed-although through a glass darkly-the logic of capitalist response to crisis. In The Second American Revolution (1973) he defined the crises of the 1960s and early 1970s as a humanistic revolution based mainly on the black and student "revolts," women's liberation, consumerism, environmentalism, and the yearnings for nonmaterialistic values. He saw these crises as an opportunity to develop a humanistic capitalism. If the Establishment should repress these humanistic urges, he wrote, "the result could be chaos and anarchy, or it could be authoritarianism, either of a despotic mold or the 'friendly fascism' described by urban affairs professor Bertram Gross."

p167 A similar note of urgency is trumpeted by General Maxwell Taylor who, in contrast with Zoll's response to internal dangers, warns mainly against external dangers. "How can a democracy such as ours," he asks, "defend its interests at acceptable costs and continue to enjoy the freedom of speech and behavior to which we are accustomed in time of peace?" Although his answer is not as candid as Zoll's, he replies that such traditional and liberal properties must be dispensed with: "We must advance concurrently on both foreign and domestic fronts by means of integrated rational power responsive to a unified national Will." Here is a distressing echo of Adolf Hitler's pleas for "integration" (Gleichschaltung) and unified national will.

p167 James Madison: "I believe there are more instances of the abridgement of the freedom of the people by gradual and silent encroachments of those in power than by violent and sudden usurpations."

p168 Although friendly fascism would mean total ruin of the American dream, it could hardly come suddenly-let alone in any precisely predictable year. This is one of the reasons I cannot go along with the old-fashioned Marxist picture of capitalism or imperialism dropping the fig leaf or the mask. This imagery suggests a process not much longer than a striptease. It reinforces the apocalyptic vision of a quick collapse of capitalist democracy-whether "not with a bang but a whimper," as T. S. Eliot put it, or with "dancing to a frenzied drum" as in the words of William Butler Yeats. In my judgment, rather, one of the greatest dangers is the slow process through which friendly fascism would come into being. For a large part of the population the changes would be unnoticed. Even those most alive to the danger may see only part of the picture-until it is too late. For most people, as with historians and social scientists, 20-20 vision on fundamental change comes only with hindsight. And by that time, with the evidence at last clearly visible, the new serfdom might have long since arrived.

p168 ... in the movement toward friendly fascism, any sudden forward thrust at one level could be followed by a consolidating pause or temporary withdrawal at another level.
Every step toward greater repression might be accompanied by some superficial reform, every expansionist step abroad by some new payoff at home, every well-publicized shocker (like the massacres at Jackson State, Kent State, and Attica, the Watergate scandals or the revelations of illegal deals by the FBI or CIA) by other steps of less visibility but equal or possibly greater significance, such as large welfare payments to multinational banks and industrial conglomerates. At all stages the fundamental directions of change would be obscured by a series of Hobson's choices, of public issues defined in terms of clear-cut crossroads-one leading to the frying pan and the other to the fire. Opportunities would thus be provided for learned debate and earnest conflict over the choice among alternative roads to serfdom . . .

The unifying element in this unfolding logic is the capital-accumulation imperative of the world's leading capitalist forces, creatively adjusted to meet the challenges of the many crises I have outlined. This is quite different from the catch-up imperatives of the Italian, German, and Japanese leaders after World War I. Nor would its working out necessarily require a charismatic dictator, one-party rule, glorification of the State, dissolution of legislatures, termination of multiparty elections, ultranationalism, or attacks on rationality. As illustrated in the oversimplified outline that follows, which also points up the difference between classic fascism and friendly fascism, the following eight chapters summarize the many levels of change at which the trends toward friendly fascism are already visible. Despite the sharp differences from classic fascism, there are also some basic similarities. In each, a powerful oligarchy operates outside of, as well as through, the state. Each subverts constitutional government. Each suppresses rising demands for wider participation in decision making, the enforcement and enlargement of human rights, and genuine democracy. Each uses informational control and ideological flimflam to get lower and middle-class support for plans to expand the capital and power of the oligarchy and provide suitable rewards for political, professional, scientific, and cultural supporters. A major difference is that under friendly fascism Big Government would do less pillaging of, and more pillaging for, Big Business. With much more integration than ever before among transnational corporations, Big Business would run less risk of control by any one state and enjoy more subservience by many states. In turn, stronger government support of transnational corporations, such as the large group of American companies with major holdings in South Africa, requires the active fostering of all latent conflicts among those segments of the American population that may object to this kind of foreign venture. It requires an Establishment with lower levels so extensive that few people or groups can attain significant power outside it, so flexible that many (perhaps most) dissenters and would-be revolutionaries can be incorporated within it. Above all, friendly fascism in any First World country today would use sophisticated control technologies far beyond the ken of the classic fascists.

p177 Although American hegemony can scarcely return in its Truman-Eisenhower-Kennedy-Johnson form, this does not necessarily signify the end of the American Century. Nor does communist and socialist advance on some fronts mark American and capitalist retreat on all fronts.
There are unmistakable tendencies toward a rather thoroughgoing reconstruction of the entire "Free World." Robert Osgood sees a transitional period of "limited readjustment" and "retrenchment without disengagement," after which America could establish a "more enduring rationale of global influence." Looking at foreign policy under the Nixon administration, Robert W. Tucker sees no intention to "dismantle the empire" but rather a continued commitment to the view that "America must still remain the principal guarantor of a global order now openly and without equivocation identified with the status quo." He describes America as a "settled imperial power shorn of much of the former exuberance." George Liska looks forward to a future in which Americans, having become more mature in the handling of global affairs, will at last be the leaders of a true empire.

p184 Amaury De Riencourt: "Caesarism can come to America constitutionally without having to break down any existing institution."

p184 ... a friendly fascist power structure in the United States, Canada, Western Europe, or today's Japan would be far more sophisticated than the "caesarism" of fascist Germany, Italy, and Japan. It would need no charismatic dictator nor even a titular head... it would require no one-party rule, no mass fascist party, no glorification of the State, no dissolution of legislatures, no denial of reason. Rather, it would come slowly as an outgrowth of present trends in the Establishment.

p189 Under the full-fledged oligarchy of friendly fascism, the Chief Executive network would become much more powerful than ever before. And the top executive-in America, the president-would in a certain sense become more important than before. But not in the sense of a personal despotism like Hitler's. Indeed, the president under friendly fascism would be as far from personal caesarism as from being a Hirohito-type figurehead. Nor would a president and his political associates extort as much "protection money" from big-business interests as was extracted under Mussolini and Hitler. The Chief Executive would neither ride the tiger nor try to steal its food; rather, he would be part of the tiger from the outset. The White House and the entire Chief Executive network would become the heart (and one of the brain centers) of the new business-government symbiosis. Under these circumstances the normal practices of the Ultra-Rich and the Corporate Overlords would be followed: personal participation in high-level business deals and lavish subsidization of political campaigns, both partly hidden from public view.

p190 This transformation would require a new concept of presidential leadership, one emphasizing legitimacy and righteousness above all else. As the linchpin of an oligarchic establishment, the White House would continue to be the living and breathing symbol of legitimate government. "Reigning" would become the first principle of "ruling." Only by wrapping himself and all his agents in the trappings of constitutionality could the President succeed in subverting the spirit of the Constitution and the Bill of Rights. The Chief Executive Network, Big Business, and the Ultra-Rich could remain far above and beyond legal and moral law only through the widely accepted image that all of them, and particularly the president, were fully subservient to law and morality. In part, this is a matter of public relations-but not the old Madison Avenue game of selling perfume or deodorants to the masses.
The most important nostrils are those of the multileveled elites in the establishment itself; if things smell well to them, then the working-buying classes can probably be handled effectively. In this context, it is not at all sure that the personal charisma of a president could ever be as important as it was in the days of Theodore or Franklin Roosevelt, Dwight Eisenhower, or John F. Kennedy. It is no easy task to erect a shield of legitimacy to cloak the illegitimate. Doing so would require the kind of leadership that in emphasizing the long-term interests of Big Business and the Ultra-Rich would stand up strongly against any elements that are overly greedy for short-term windfalls. Thus in energy planning, foreign trade, labor relations, and wage-price controls, for example, the friendly fascist White House would from time to time engage in activities that could be publicly regarded as "cracking down on business." While a few recalcitrant corporate overseers might thus be reluctantly educated, the chief victims would usually be small or medium-sized enterprises, who would thus be driven more rapidly into bankruptcy or merger. In this sense, conspicuous public leadership would become a form of followership.
p191 During the 1970s, as its forces slowly retreated from the Asian mainland, the U.S. military establishment seemed to dwindle. Even with veterans' and outer-space expenditures included, war spending declined as a portion of the GNP. Conscription ended in 1973. All proposals for overt military intervention in the Third World-whether in Angola, West Asia, Afghanistan, the Horn of Africa, the Caribbean, or Central America-were sidetracked. From an earlier high of 3.5 million people in 1968, the active military fell to 2 million at the beginning of the 1980s. But in real terms the military establishment is enormous, much more than most people know. To the 2 million on active duty must be added another 2 million in the reserves, and a million civilians in the defense department. This 5-million-figure total is merely the base for a much larger number of people in war industries, space exploration, war think tanks and veterans' assistance. Behind this total group of more than 12 million-and profiting from intercourse with them-stands an elaborate network of war industry associations, veterans' organizations, special associations for each branch of the armed services, and general organizations such as the American Security Council and the Committee on the Present Danger. But there is something else that George Washington could never have dreamed of when he warned against an overgrown military establishment and that Dwight D. Eisenhower never mentioned in his warning against the military-industrial complex: namely, a transnational military complex. This American-led complex has five military components beyond the narrowly defined U.S. military-industrial complex itself:
1. The dozen or so countries formally allied with the United States through NATO
2. Other industrialized countries not formally part of NATO, such as Spain, Israel, Japan, Australia, and New Zealand
3. A large portion of the Third World countries
4. Intelligence and police forces throughout the "Free World"
5. Irregular forces composed of primitive tribesmen, often operating behind the lines of the Second World countries.
All these forces are backed up by a support infrastructure which includes training schools, research institutes, foreign aid, and complex systems of communication and logistics.
If there is one central fact about this transnational military complex at the start of the 1980s it is growth. Paradoxically, every arms-control agreement has been used as a device to allow growth up to certain ceilings, rather than prevent it. And since those ceilings apply only to selected weapons systems, growth tends to be totally uncontrolled in all other forms of destruction. In the United States, total military expenditure has started to move upward at a rate of about 5 percent annual growth in real terms-that is, after being corrected for the declining value of the dollar. A drive is under way to register young people for a draft, while also providing alternative forms of civilian service (at poverty wages) for people objecting to military service on moral, religious, or political grounds. New weapons systems are being initiated-particularly the MX missile, which holds forth the promise of a "first strike" capability against the Soviet Union. Major steps are being taken to increase the military strength of all the other components of the transnational complex- particularly through the expansion of both tactical and strategic nuclear weapons in Western Europe and the beefing up of the defense forces and nuclear capabilities of the Japanese. Above all, despite some internal conflicts on when and where, the leaders of the U.S. Establishment have become more willing to use these forces. Richard Falk of Princeton University presents this thesis: "A new consensus among American political leaders favors intervention, whenever necessary, to protect the resource base of Trilateralistic nations'-Europe, the United States and Japan-prosperity and dominance." 3 This has required strenuous propaganda efforts to overcome the so-called "post-Vietnam syndrome," that is, popular resistance to the sending of U.S. troops into new military ventures abroad. Equally strenuous efforts are made to convince people in Western Europe that as East-West tensions have been relaxing and East-West trade rising, the West faces a greater threat than ever before of a Soviet invasion. The logic of this growth involves a host of absurdities. First of all, statistical hocus-pocus hides the overwhelming military superiority of the "Free World." One trick is to compare the military spending of the United States with the Warsaw Pact countries but to exclude NATO. Another trick is to compare the NATO countries of Europe with the Warsaw Pact countries, but to exclude the United States. Still another is to exclude not merely Japan, but also the huge Chinese military forces lined up on China's border with the Soviet Union. Any truly global picture shows that while the geographical scope of the "Free World" has been shrinking, its military capability has been expanding. This expansion has been so rapid that there may even be good reason for the nervous old men in the Kremlin to feel threatened. Second, much of this expanding military power involves nothing more than overkill. Thus just one Poseidon submarine carries 160 nuclear warheads, each four times more powerful than the Hiroshima bomb. These warheads are enough, as President Carter stated in 1979, "to destroy every large and medium-sized city in the Soviet Union." Pointing out that the total U.S. force at that time could inflict more than fifty times as much damage on the Soviet Union, President Carter then went on to raise the level of overkill still higher. 
Third, the advocates of new interventionism foster the delusion that military force can solve a host of intertwined political, economic, social, and moral problems. This delusion was evidenced in the long-term and highly expensive U.S. support for the Shah of Iran and the Somoza dictatorship in Nicaragua. As U.S. strike forces are being prepared for intervention in West Asia (whether in Saudi Arabia, Libya, or elsewhere) the presumption is that military action of this type would preserve the availability of petroleum for the West. What is blindly lost sight of is the high probability-and in the judgment of many, the certainty-that any such intervention would precipitate the blowing up of the very oil fields from which the deep thinkers in the White House, Wall Street, and the Pentagon want to get assured supplies. Yet in the words of Shakespeare's Polonius, "If this be madness, yet there is method in it." It is the not-so-stupid madness of the growing militarism which is an inherent part of friendly fascism's unfolding logic. "Militarism," Woodrow Wilson once pointed out at West Point in 1916, "does not consist of any army, nor even in the existence of a very great army. Militarism is a spirit. It is a point of view." 10 That spirit is the use of violence as a solution to problems. The point of view is something that spills over into every field of life-even into the school and the family. Under the militarism of German, Italian, and Japanese fascism violence was openly glorified. It was applied regionally-by the Germans in Europe and England, the Italians in the Mediterranean, the Japanese in Asia. In battle, it was administered by professional militarists who, despite many conflicts with politicians, were guided by old-fashioned standards of duty, honor, country, and willingness to risk their own lives. The emerging militarism of friendly fascism is somewhat different. It is global in scope. It involves weapons of doomsday proportions, something that Hitler could dream of but never achieve. It is based on an integration between industry, science, and the military that the old-fashioned fascists could never even barely approximate. It points toward equally close integration among military, paramilitary, and civilian elements. Many of the civilian leaders-such as Zbigniew Brzezinski or Paul Nitze-tend to be much more bloodthirsty than any top brass. In turn, the new-style military professionals tend to become corporate-style entrepreneurs who tend to operate-as Major Richard A. Gabriel and Lieutenant Colonel Paul L. Savage have disclosed-in accordance with the ethics of the marketplace. The old buzzwords of duty, honor, and patriotism are mainly used to justify officer subservience to the interests of transnational corporations and the continuing presentation of threats to some corporate investments as threats to the interest of the American people as a whole. Above all, in sharp contrast with classic fascism's glorification of violence, the friendly fascist orientation is to sanitize, even hide, the greater violence of modern warfare behind such "value-free" terms as "nuclear exchange," "counterforce" and "flexible response," behind the huge geographical distances between the senders and receivers of destruction through missiles or even on the "automated battlefield," and the even greater psychological distances between the First World elites and the ordinary people who might be consigned to quick or slow death.
p195 William W. Turner
Turner "Leadership in the right has fallen to new organizations with lower profiles and better access to power . . . What is characteristic of this right is its closeness to government power and the ability this closeness gives to hide its political extremism under the cloak of respectability." p196 Although most of these right-wing extremists avoid open identification with the classic fascists, the similarities with the early fascist movements of the 1920s are clear. Small clusters of highly strung, aggressive people think that if Hitler and Mussolini (both of whom started from tiny beginnings) could make it into the Big Time under conditions of widespread misfortune, fortune might someday smile on them too. I doubt it. Their dreams of future power are illusory. To view them as the main danger is to assume that history is obliging enough to repeat itself in unchanged form. Indeed, their major impact-apart from their contribution to domestic violence, discussed in "The Ladder of Terror," (chapter 14)-is to make the more dangerous right-wing extremists seem moderate in comparison. The greatest danger or the right is the rumbling thunder, no longer very distant, from a huge array of well-dressed, well-educated activists who hide their extremism under the cloak of educated respectability. Unlike the New Left of the 1960s, which reached its height during the civil rights and antiwar movements, the Radical Right rose rapidly during the 1970s on a much larger range of issues. By the beginning of the 1980s, they were able to look back on a long list of victories. Their domestic successes are impressive: * Holding up ratification of the Equal Rights Amendment * Defeating national legislation for consumer protection * Defeating national legislation to strengthen employees' rights to organize and bargain collectively * Undermining Medicare payments for abortions * Bringing back capital punishment in many states * Killing anti-gun legislation * Promoting tax-cutting programs, such as the famous Proposition 13 in California, already followed by similar actions in other parts of the country * Promoting limitations on state and local expenditures, which in effect (like the tax-cutting measures) mean a reduction in social programs for the poor and the lower middle-classes * Undermining affirmative-action programs to provide better job opportunities for women, blacks and Hispanics * Killing or delaying legislation to protect the rights of homosexuals They have also succeeded in getting serious attention for a whole series of "nutty" proposals to amend the Constitution to require a balanced federal budget or set a limit on the growth of federal expenditures. By the beginning of 1980, about 30 State legislatures had already petitioned the Congress for a Constitutional convention to propose such an amendment; only 34 are needed to force such a convention, the first since 1787. The major purpose of this drive, however, was not to get a Constitutional amendment. Rather, it was to force the president and Congress to go along with budget cutting on domestic programs. By this standard it has been remarkably successful. On foreign issues, the Radical Right came within a hair's breadth of defeating the Panama Canal Treaty and the enabling legislation needed to carry it out. 
They have been more successful, however, on these matters:
* Reacting to the Iranian and Afghanistan crises of 1979 with a frenetic escalation of cold war
* Helping push the Carter administration toward more war spending and more militarist policies
* Making any ratification of the SALT II treaty dependent on continued escalation in armaments
* Preventing Senate consideration, let alone ratification, of the pending UN covenants against genocide, on civil and political rights, and on economic, social, and cultural rights.
In a vital area bridging domestic and foreign policy, they provide a major portion of support for the drive to register young people for possible military service and then, somewhat later, reinstitute conscription. Almost all of these issues are "gut issues." They can be presented in a manner that appeals to deep-seated frustrations and moves inactive people into action. Yet the New Right leaders are not, as the Americans for Democratic Action point out in A Citizen's Guide to the Right Wing, "rabid crackpots or raving zealots." The movement they are building is "not a lunatic fringe but the programmed product of right wing passion, plus corporate wealth, plus 20th century technology-and its strength . . ." This strength has been embodied in a large number of fast-moving organizations:
* American Legislative Exchange Council (ALEC)
* American Security Council
* Americans Against Union Control of Government
* Citizens for the Republic
* Committee for Responsible Youth Politics
* Committee for the Survival of a Free Congress
* Committee on the Present Danger
* Conservative Victory Fund
* Consumer Alert Council
* Fund for a Conservative Majority
* Gun Owners of America
* Heritage Foundation
* National Conservative Political Action Committee
* National Rifle Association Political Action Committee (PAC)
* Our PAC
* Public Service PAC
* Right To Keep and Bear Arms Political Victory Fund
* Tax Reform Immediately (TRIM)
* The Conservative Caucus (TCC)
* Young Americans for Freedom/The Fund for a Conservative Majority
Many of these groups, it must be understood, include nonrabid crackpots and nonraving zealots. They are often backed up-particularly on fiscal matters-by the National Taxpayers Union and many libertarian groups which may part company from them on such issues as the escalation of war spending or the return of military conscription. All of them, it should be added, seem to be the recipients of far more funds than were ever available to the less respectable extremists. Much of this money unquestionably seeps down, as the ADA insists, from corporate coffers. Some of it unquestionably comes from massive mail solicitations by Richard Viguerie, who has been aptly christened the "Direct Mail Wizard of the New Right." Since 1964, when he was working on Senator Goldwater's campaign for the presidency, Viguerie has been developing a mailing list operation which puts the New Right into touch with millions upon millions of Americans. Today, the momentum of the Radical Right is impressive. It has defeated many well-known liberal candidates for reelection to national, state, and local offices. Having helped elect a quarter of the members of the House of Representatives in 1976, it looks forward to much greater influence by the mid-1980s. Like the American labor movement, which has always supported some Republicans as well as many Democrats, the Radical Right has no firm commitment to any one party. Its strength among Democrats is much larger than that of labor among Republicans.
It supports candidates of the two major parties and is closely associated with small-party movements, which sometimes have a decisive impact on electoral or legislative campaigns. Its biggest success, however, is that many of its positions which first sounded outrageous when voiced during the Goldwater campaign of 1964 are now regarded as part of the mainstream. This is not the result of Radical Right shifts toward the center. On the contrary, it is the result of a decisive movement toward the right by the Ultra-Rich and the Corporate Overseers. The unfolding logic of the Radical Right, however, is neither to remain static nor to become more openly reactionary. "We are no longer working to preserve the status quo," says Paul Weyrich, one of its ablest leaders. "We are radicals working to overturn the present power structure." To understand what Weyrich means, we must heed Arno J. Mayer's warning-based on his study of classic fascism-that in a time of rapid change "even reactionary, conservative and counter-revolutionary movements project a populist, reformist and emancipatory image of their purpose." More populism of this type can be expected: in a word, more attacks on the existing Establishment by people who want to strengthen it by making it much more authoritarian and winning for themselves more influential positions in it.
p200 The routinized reiteration of this older conservative doctrine, however, is buttressed by a new ideological reformation that emphasizes the excellence of hierarchy, the wonders of technology, and the goodness of hard times. In The Twilight of Authority, Robert Nisbet makes an eloquent call for a return to the old aristocratic principle of hierarchy: "It is important that rank, class and estate in all spheres become once again honored rather than, as is now the case, despised or feared by intellectuals." If democracy is to be diminished and if rank, class, and estate are once again to be honored, the intellectuals at the middle and lower levels of the establishment must be brought into line on many points. Those who advocate a somewhat more egalitarian society must be pilloried as "levellers" who would reduce everybody to a dull, gray uniformity. They must be convinced that the ungrateful lower classes whom they hope to raise up are, in fact, genetically and culturally inferior. They must be flattered into seeing themselves as part of a society in which true merit, as defined by the powerful, is usually recognized and rewarded. The power of the Ultra-Rich and the Corporate Overlords must be publicly minimized and the endless plutocratic search for personal gratification must be obscured by lamenting the self-gratifying hedonism of the masses.
p202 A successful transition to friendly fascism would clearly require a lowering of popular aspirations and demands. Only then can freer rein be given to the corporate drives for boundless acquisition. Since it is difficult to tell ordinary people that unemployment, inflation, and urban filth are good for them, it is more productive to get middle-class leaders on the austerity bandwagon and provide them with opportunities for increased prestige by doing what they can to lower levels of aspirations. Indeed, the ideology of mass sacrifice had advanced so far by the end of the 1970s that the most serious and best-advertised debate among New York liberals on the New York City fiscal crisis rested on the assumption that the level of municipal employment and services had to be cut.
The only questions open for debate were "Which ones?" and "How much?" This ideology-although best articulated in general form by political scientists like Samuel Huntington and sociologists like Daniel Bell-also receives decisive support from Establishment economists. Religious doctrines on the goodness of personal sacrifice in this world have invariably been associated with promises of eternal bliss in the next world. Similarly, the emerging ideologies on the virtues of austerity are bound to be supplemented by visions of "pie in the sky by and by." In their most vulgar form these ideologies may simply reiterate the economistic notion that reduced consumption now will mean more profitability, which will mean more capital investment that in turn will mean increased consumption later. In more sophisticated form, these ideologies take the form of a misty-eyed humanism. While moving toward friendly fascism we might hear much talk like Jean-Francois Revel's proclamation that "The revolution of the twentieth century will take place in the United States" or Charles Reich's view that the counterculture of the young will, by itself, break through the "metal and plastic and sterile stone" and bring about "a veritable greening of America." Indeed, work at such "think-tanks" as the Rand Corporation and Hudson Institute increasingly foregoes its old base in economics and related "dismal" disciplines for straight and unadulterated "humanism," the rhetorical promotion of which seems directly related to their involvement in dehumanized and dehumanizing technologies. As with the ideologies of classic fascism, there is no need for thematic consistency in the new ideologies. An ideological menu is most useful when it provides enough variety to meet divergent needs and endless variations on interwoven melodic lines. Unlike the ideologies of classic fascism, however, these new ideologies on market virtue, hierarchic excellence, wondrous technology, and the goodness of hard times are not needed to mobilize masses to high peaks of emotional fervor. In contrast, they help prevent mass mobilization. Yet their growing function is to maintain the loyalty of intellectuals, scientists, and technicians at the Establishment's middle and lower ranks, thereby minimizing the need for systemic purges. On this score the two streams of conservative ideology have been remarkably effective. They have taken over the most commanding heights on the intellectual fronts, reducing to a "small section" those anti-Establishment intellectuals who try to swim against the main currents. Indeed, through a remarkable dialectic, the opponents of the so-called "new class" have themselves become a dominant new class of intellectuals who provide the moral and intellectual guidance on the harsh and nasty imperatives of imperial survival in the era of the stagflation-power tradeoff and the movement toward Super-America, Inc. p204 TRIPLESPEAK During the take-off toward a more perfect capitalism, the debasement of the language moved no slower than the abasement of the currency through creeping inflation. The myths of the cold war gave us the imagery of a "free world" that included many tyrannical regimes on one side and the "worldwide communist conspiracy" to describe the other. The "end of ideology" ideologies gave us the myth of all-powerful knowledge elites to flatter the egos of intellectuals and scientists in the service of a divided Establishment. 
The accelerating rise of scientific and pseudoscientific jargon fragmented social and natural scientists into small ingroups that concentrated more and more on small slices of reality, separating them more than ever before from the presumably unsophisticated (although functionally literate) working-buying classes. In the early days of this process, George Orwell envisioned a future society in which the oligarchs of 1984 would use linguistic debasement as a conscious method of control. Hence the Party Leaders imposed doublethink on the population and set up a long-term program for developing newspeak. If Orwell were alive today, I think he would see that many of his ideas are now being incorporated in something just as sophisticated and equally fearful. I am referring to the new triplespeak: a three-tiered language of myth, jargon, and confidential straight talk. Unlike Orwell's doublethink and newspeak, triplespeak is not part of any overall plan. It merely develops as a logical outcome of the Establishment's maturation, an essential element in the tightening of oligarchic control at the highest levels of the Golden International. Without myths, the rulers and their aides cannot maintain support at the lower levels of the major establishments, and the might itself-as well as the legitimacy of empire-may decay. Jargon is required to spell out the accumulating complexities of military, technological, economic, political, and cultural power. Straight talk is needed to illuminate the secret processes of high decision making and confidential bargaining and to escape the traps created by myth and jargon. Herein lie many difficulties. With so much indirection and manipulation in the structure of transnational power, there is no longer any place for the pomp and ceremony that helped foster the effulgent myths surrounding past empires-no imperial purple, no unifying queen, king, or imperial council, no mass religion or ideology to fire the emotions of dependent masses. Hence the symbolic trappings of past empires must be replaced by smaller mystifications that at least have the merit of helping maintain the self-respect and motivations of the elites at the middle and lower levels of the national Establishments. Thus the operating rules of modern capitalist empire require ascending rhetoric about economic and social development, human rights, and the self-effacing role of transnational corporations in the promotion of progress and prosperity. The more lies are told, the more important it becomes for the liars to justify themselves by deep moral commitments to high-sounding objectives that mask the pursuit of money and power. The more a country like the United States imports its prosperity from the rest of the world, the more its leaders must dedicate themselves to the sacred ideal of exporting abundance, technology, and civilization to everyone else. The further this myth may be from reality, the more significant it becomes-and the greater the need for academic notables to document its validity by bold assertion and self-styled statistical demonstration. "The might that makes right must be a different right from that of the right arm," the political scientist, Charles Merriam, stated many years ago. "It must be a might deep rooted in emotion, embedded in feelings and aspirations, in morality, in sage maxims, in forms of rationalization . . 
. ." 30 Thus, in 1975 and 1976, while the long right arm of the American presidency was supporting bloody dictatorships in Chile, Brazil, Indochina, and Iran (to mention but a few), Daniel P. Moynihan, the U.S. ambassador at the United Nations, wrapped himself in the flag of liberty and human rights. His eloquent rhetoric-deeply rooted in emotion and embedded in feelings and aspirations-set a high standard of creative myth-making. At that time, his superiors in Washington failed to realize that Moynihan's approach was, in Walter Laqueur's terms, "not a lofty and impractical endeavor, divorced from the harsh realities of world endeavor, but itself a kind of Realpolitik." Within two years, however, the next president, Jimmy Carter, seized the torch from Moynihan's hand and, without thanks or attribution, set a still higher standard by clothing the might of his cruise missile and neutron bomb in human-rights rhetoric even more deeply rooted in morality, sage maxims, and forms of rationalization. Domestic myths are the daily bread of the restructured Radical Right and the old-style and new-style conservatives. Many of the ideologies discussed in the last section of this chapter serve not only as cover-ups for concentrated oligarchic power. They provide code words for the more unspoken, mundane myths that define unemployed people as lazy or unemployable, women, blacks and Hispanics as congenitally inferior to other people. Presidential candidates invariably propagate the myth that Americans are innately superior to the people of other countries and that therefore they have a high destiny to fulfill in the leadership of the world's forces for peace, freedom, democracy, and-not to be forgotten-private corporate investment and profitability. Trying to flatter the voting public as a whole, they ascribe most of America's difficulties to foreign enemies or a few individuals at home-like Richard Nixon-who have betrayed the national goodness. Not so long ago, General Westmoreland went much further when, to reassure the more naive members of the American officer corps, he soberly declared that "Despite the final failure of the South Vietnamese, the record of the American military of never having lost a war is still intact." 33 With the arrival of friendly fascism, myths like these would no longer be greeted, at least not publicly, with the degree of skepticism they still provoke. Instead, the Establishment would agree that the domestic tranquility afforded by these convenient reassurances qualified them, in contrast to more critical, less comforting diagnoses, as "responsible." As old myths get worn out or new myths punctured, still newer ones (shall we call them "myths of the month"?) are brought into being. The momentum of jargon would not abate in a friendly fascist society but move steadily ahead with the ever-increasing specialization and subspecialization in every field. New towers of Babel are, and would be, continuously erected throughout the middle and lower levels of the Establishment. Communication among the different towers, however, becomes increasingly difficult. One of the most interesting examples is the accumulation of complex, overlapping, and mystifying jargons devised by the experts in various subdivisions of communications itself (semiotics, semantics, linguistics, content analysis, information theory, telematics, computer programming, etc.), none of whom can communicate very well with all the others.
In military affairs, jargon wraps otherwise unpleasant realities in a cloak of scientific objectivity. Thus, "surgical strike," "nuclear exchange," and even the colloquial "nukes" all hide the horrors of atomic warfare. The term "clean bomb" for the new neutron bomb hides the fact that although it may not send much radioactive material into the atmosphere it would kill all human life through radiation in a somewhat limited area; this makes it the dirtiest of all bombs. Similarly, in global economics the jargon of exchange rates and IMF conditions facilitates, while also concealing, the application of transnational corporate power on Third World countries. The jargon of domestic economics, as I have already shown, hides the crude realities of corporate aggrandizement, inflation, and unemployment behind a dazzling array of technical terms that develop an esprit de corps which unites the various sectors of Establishment economics. Rising above the major portion of jargon and myth is straight talk, the blunt and unadorned language of who gets what, when and how. If money talks, as it is said, then power whispers. The language of both power and money is spoken in hushed whispers at tax-deductible luncheons or drinking hours at the plushest clubs and bars or in the well-shrouded secrecy of executive suites and boardrooms. Straight talk is never again to be recorded on Nixon-style tapes or in any memoranda that are not soon routed to the paper shredders. As one myth succeeds another and as new forms of jargon are invented, straight talk becomes increasingly important. Particularly at the higher levels of the Establishment it is essential to deal frankly with the genuine nature of imperial alternatives and specific challenges. But the emerging precondition for imperial straight talk is secrecy. Back in 1955, Henry Kissinger might publicly refer to "our primary task of dividing the USSR and China." * By the time the American presidency was making progress in this task, not only Kissinger but the bulk of foreign affairs specialists had learned the virtues of prior restraint and had carefully refrained from dealing with the subject so openly. It may be presumed that after the publication of The Crisis of Democracy, Samuel Huntington learned a similar lesson and that consultants to the Trilateral Commission will never again break the Establishment's taboos by publicly calling for less democracy. Nor is it likely that in discussing human rights the American president will talk openly on the rights and privileges of American-based transnationals in other countries. Nor am I at all sure that realists like Irving Kristol, Raymond Aron, George Liska, and James Burnham will continue to be appreciated if they persist in writing boldly about the new American empire and its responsibilities. Although their "empire" is diligently distinguished from "imperialism," it will never be allowed to enter official discourse. For imperial straight talk to mature, communication must be thoroughly protected from public scrutiny. Top elites must not only meet together frequently; they must have opportunities to work, play, and relax together for long periods of time. Also, people from other countries must be brought into this process; otherwise there is no way to avoid the obvious misunderstandings that develop when people from different cultural backgrounds engage in efforts at genuine communication.
If the elites of other countries must learn English (as they have long been doing), it is also imperative for American elites to become much more fluent in other tongues than they have ever been in the past. In any language there are niceties of expression-particularly with respect to money and power-that are always lost or diluted if translated into another language. With or without the help of interpreters, it will be essential that serious analysis, confidential exchanges, and secret understandings be multilingual. Thus, whether American leadership matures or obsolesces, expands or contracts, English can no longer be the lingua franca of modern empire. The control of "Fortress America" would require reasonable fluency in Spanish by many top elites (although not necessarily by presidents and first ladies). Trilateral Empire, in turn, imposes more challenging-but not insuperable-linguistic burdens.
p209 Daniel Fusfeld "There is a subtle three-way trade-off between escalating unemployment together with other unresolved social problems, rising taxes, and inflation. In practice, the corporate state has bought all three."
p209 What will daily life be like under friendly fascism? In answering this question I think immediately of Robert Theobald's frog: "Frogs will permit themselves to be boiled to death. If the temperature of the water in which the frog is sitting is slowly raised, the frog does not become aware of its danger until it is too late to do anything about it." Although I am not sure it can ever be too late to fight oppression, the moral of the frog story is clear: as friendly fascism emerges, the conditions of daily life for most people move from bad to worse-and for many people all the way to Irving Kristol's "worst." To Fusfeld's trio of more unemployment, taxes, and inflation, however, we must also add a decline in social services and a rise in shortages, waste and pollution, nuclear poison and junk. These are the consequences of corporate America's huge investment in the ideology of popular sacrifice and in the "hard times" policies that have US "pull in the belts" to help THEM in efforts to expand power, privilege, and wealth.
p210 Slogan of the Medici family "Money to get power, power to protect money."
p210 Capital has always been a form of power. As physical wealth (whether land, machinery, buildings, materials, or energy resources), capital is productive power. As money, it is purchasing power, the ability to get whatever may be exchanged for it. The ownership of property is the power of control over its use. In turn, the power of wealth, money, and ownership has always required both protection and encouragement through many other forms of power. Businessmen have never needed theorists to tell them about the connection. It has taken economic theorists more than a century to develop the pretense that money and power are separate. Indeed, while Establishment militarists persistently exaggerate the real power of destructive violence, the same Establishment's economic policymakers increasingly present destructive economic policies as though they have no connection with power. The vehicle for doing this is becoming the so-called "tradeoff" policy. The more conservative Establishment notables argue that the way to fight inflation is to curtail growth, even though the inescapable side effect is recession and higher unemployment. Their more liberal colleagues politely beg to differ, arguing that the way to cope with unemployment is to "reflate" the economy.
For scientific support, both sides habitually refer to a curve developed by A. W. Phillips on the relation between unemployment and changing money wage rates in England from 1861 to 1957. Giving modern support to part of Karl Marx's theory on the "reserve army of the unemployed," Phillips showed that when more people were jobless, there was less chance of an increase in money wage rates. Phillips also made a sharp distinction between wages and prices, mentioning prices only to point out in passing that a wage increase does not by itself require a proportionate increase in prices. On this side of the Atlantic, Paul Samuelson and various colleagues applied Phillips's curve to prices instead of wages, and hiding their biases behind Phillips's data, developed the current tradeoff theory. In its more virulent form at the beginning of the 1980s, this theory means the following: Recession is needed to bring the rate of inflation down below the double-digit level-that is, to less than 10 percent. The most naive backers of the theory suggest that once this is done, the "back of inflation will be broken," inflationary expectations will be buried, never to rise again, and the country can return to the good old days of Lyndon Johnson and Richard Nixon. Many liberal opponents of this theory, in turn, accept on good faith the credentials of the self-styled inflation fighters. Apparently operating on the premise that economic policymaking is a technical exercise in puzzle-solving, they argue that the conservatives are simply mistaken in their understanding of economic behavior, and in failing to see that untold millions may be injured by pro-recession policies. In my judgment, however, the liberals who take this view fail to understand or face up to the nature of Establishment power. In a world of many divergent objectives that must be reconciled with each other, the leaders of any Establishment are continuously engaged in complex juggling acts. Whether developing global investment policies or apportioning economic or military aid around the world, everything cannot be done at the same time. Above all, in planning for corporate profitability, compromises must continuously be made. Profitability in one area is often accompanied by unavoidable losses in another. Short-term profits must often be sacrificed in the interest of the greater profitability that can come only from the fruition of long-term investment programs. Above all, the maintenance or strengthening of the power to protect future profitability often requires the sacrifice of some present, even future, profits. Neither market power nor the political power supporting it are free goods. They too cost money-and in periods of stagflation they tend to cost more money than before. Toward the end of 1979, more than 100 corporate executives attended a meeting of the Business Council at Hot Springs, Virginia. Almost to a man, they enthusiastically supported the recessionary policies of the Federal Reserve Board and the Treasury. "The sooner we suffer the pain," stated Irving S. Shapiro, chairman of Du Pont, "the sooner we will be through. I'm quite prepared to endure whatever pain I have to in the short term." Steven Rattner, the reporter for The New York Times, pointed out that signs of suffering were nowhere in sight: "The long black limousines and private jet planes were still evident in abundance."
Rattner also suggested that Shapiro was apparently referring not to any loss in his personal income but rather to the "pain" that might be inflicted on Du Pont's profits. How much profit a company like Du Pont might lose in the short run is a matter of conjecture. Unlike American workers, a giant corporation can engage in fancy tax-juggling that pushes its losses on to ordinary taxpayers. Unlike middle-class people, the Ultra-Rich billionaires and centimillionaires can shift the costs of recession or social expenditures to the lowly millionaires, who in turn can pass them along to the middle classes. Above all, the hyenas of economic life can get theirs from recession as well as inflation. Any serious effort to control stagflation-either its recession side or its inflation side-would require serious limitations on both Big Business and the support given to it by Big Government. Any such limitations, in turn, would have to be backed up by an anti-Establishment coalition including, but not limited to, organized labor. The other side of this coin may now be seen in stark clarity: The price of preventing any such coalition and of preserving, if not expanding, Establishment power, is to choose continuing stagnation as the price that must be paid to protect future profitability. The real tradeoff by the big-time traders is not between price stability and high employment. Rather, it is the sacrifice of both in order to curtail union power, dampen rising aspirations among the population at large, and take advantage of both inflationary windfalls and recessionary bargains. Indeed, not only the U.S. Establishment but the Golden International as a whole has in practice accepted the realities of continuing stagflation (with whatever ups and downs may materialize in the proportions of combined inflation and unemployment) as the new economic order of the "Free World." This has long been the operating doctrine of the International Monetary Fund in Third World countries. It is now emerging as a doctrinal strategy for the 1980s in the entire First World. In the 1960s and early 1970s no one ever dreamed that Americans could become accustomed to levels of either inflation or official unemployment as high as 6 or 7 percent a year. As the Big Business-Big Government partnership becomes closer, the levels previously regarded as unacceptable will-like the hot water to which a frog has become accustomed-be regarded not only as normal but as objectives of official policy. Indeed, 8 percent unemployment is already being regarded as full employment and 8 percent inflation as price stability. Under the emerging triplespeak-in a manner reminding us of "War Is Peace" and "Freedom Is Slavery" in Orwell's 1984-the norm for unemployment could reach and the norm for inflation far exceed the double-digit level of ten apiece. When the two are added together, this provides what I call a "limited misery index"-limited because no similar arithmetic value can be given to such things as job insecurity, crime, pollution, alienation, and junk. The so-called "tradeoff" theory merely tells us that either of the two elements in the index may go down a little as the other one goes up. What the tradeoffers fail to point out is that despite fluctuations the long-term trend of the two together is upward. Thus in the opening months of the 1980s, even without correcting for the official underestimation of unemployment, the limited misery index approached 20. Under friendly fascism it would move toward 30....
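To restate the arithmetic of this "limited misery index" in compact form, here is a minimal sketch in Python; the function name is illustrative rather than Gross's own, and the only figures used are the round numbers the passage itself cites.

def limited_misery_index(unemployment_pct, inflation_pct):
    # The "limited misery index" as defined above: official unemployment plus
    # official inflation, deliberately ignoring what cannot be priced
    # (job insecurity, crime, pollution, alienation, junk).
    return unemployment_pct + inflation_pct

# The redefined "norms" described above: 8 percent unemployment counted as
# full employment and 8 percent inflation counted as price stability.
print(limited_misery_index(8.0, 8.0))  # 16.0

# The passage reports the index approached 20 in the opening months of the
# 1980s and projects a drift toward 30 under friendly fascism; any split of
# unemployment and inflation summing to those totals yields the same index.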
MORE MONEY MOVING UPWARD As the limited misery index creeps or spurts ahead, a spiraling series of cure-alls are brought forth from the Establishment's medicine chest. Logically, each one leads toward the others. Together, apart from anyone's intentions, the medicines make the malady worse. To cure inflation, interest rates are raised. This cannot be done by bankers alone. Intervention by central banks, acting on their behalf, is necessary. This results in a quick upward movement in prices and a further increase in government spending on new debt service. The companion step is to cut government spending on most social services- education, health, streetcleaning, fire and police protection, libraries, employment projects, etc. The deepest cuts are made in the lowest income areas, where the misery is the sharpest and political resistance tends to be less organized. To cure stagnation or recession, there are two patent medicines. The first is more Big Welfare for Big Business-through more reductions in capital gains taxes, lower taxes on corporations and the rich, more tax shelters, and, locally, more tax abatement for luxury housing and office buildings. These generous welfare payments are justified in the name of growthmanship and productivity. Little attention is given to the fact that the major growth sought is in profitability, an objective mentioned only by a few ultra-Right conservatives who still believe in straight talk. Less attention is given to the fact that the productivity sought is defined essentially as resulting from investment in capital-intensive machinery and technology that displace labor and require more fossil fuels. The second patent medicine, justified in terms of national emergencies with only sotto voce reference to its implications for maintaining employment, is more spending on death machines and war forces. This, in turn, spurs the growth of the federal deficit. To keep the deficit within limits and provide enough leeway for alleviation of the worst cuts in social services, higher taxes are required. This is done by a hidden national sales tax. The preparations for this have already been made by preliminary legislative action toward the imposition of the so-called Value Added Tax (VAT), already in force in France and England. VAT takes a bite out of every stage of production. At the end of the line, this means higher prices for consumers.... And so the dismal round continues-higher interest rates, cuts in social services, more tax subsidies for Big Business, and higher sales taxes hitting the middle- and lower-income groups. Over the short run (which may be stretched out longer than some expect), the net effect of this cycle is to move purchasing power upward toward the most privileged people. This compensates in part for the paradox that making money by raising prices reduces the value of the money made. Over the longer run, however, it intensifies the older contradiction of capitalism, namely, that profit maximization undermines the mass purchasing power required for continued profitability. p219 The major responsibility of corporate executives, so long as they are not constrained by enforced law, is to maximize their long-term accumulation of capital and power no matter what the cost may be to ... people or physical resources. 
______________________________________________________________________
Subverting Democratic Machinery
excerpted from the book Friendly Fascism: The New Face of Power in America
http://www.thirdworldtraveler.com/Fascism/Subvert_Demo_Machine_FF.html
p229 Murray B. Levin "No truly sophisticated proponent of repression would be stupid enough to shatter the facade of democratic institutions."
p229 Thomas R. Dye and Harmon Ziegler "It is the irony of democracy that the responsibility for the survival of liberal democratic values depends on elites, not masses."
p230 In the constitutional democracies, capitalist establishments have tended to use the democratic machinery as a device for sidetracking opposition, incorporating serious opponents into the junior and contingent ranks, and providing the information-the "feedback"-on the trouble spots that required quick attention. As pressures were exerted from below, the leaders of these establishments consistently-in the words of Yvonne Kapp's commentary on the British ruling elites-"allowed concessions to be wrung from them, ostensibly against their will but clearly in their own long term interests." Eleanor Marx, Karl Marx's youngest daughter, described their strategy (often opposed by the more backward corporate types) in these pungent words: "to give a little in order to gain a lot." Throughout the First World the Ultra-Rich and the Corporate Overseers have been in a better position than anyone else to use the democratic machinery. They have the money that is required for electoral campaigns, legislative lobbying, and judicial suits. They have enormous technical expertise at their beck and call. They have staying power. Hence it is-as Dye, Ziegler, and a host of political scientists have demonstrated-that the upper-class elites of America have the greatest attachment to constitutional democracy. They are the abiding activists in the use of electoral, legislative, and judicial machinery at all levels of government. It is their baby. Ordinary people-called the masses by Dye and Ziegler-tend to share this perception. The democratic machinery belongs to them, "the powers that be," not to ordinary people. It is not their baby. What will happen if more ordinary people should try to take over this baby and actually begin to make it their own? How would the elites respond if the masses began to ask the elites to give much more and gain much less-particularly when, under conditions of capitalist stagflation and shrinking world power, the elites have less to give? Some radical commentators claim that the powers that be would use their power to follow the example of the classic fascists and destroy the democratic machinery. I agree with Murray Levin that this would be stupid. I see it also as highly unlikely. No First World Establishment is going to shatter machinery that, with a certain amount of tinkering and a little bit of luck, can be profitably converted into a sophisticated instrument of repression. Indeed, the tinkering has already started. Some of it is being undertaken by people for whom the Constitution is merely a scrap of paper, a set of judicial decisions, and a repository of rhetoric and precedents to be used by their high-paid lawyers and public relations people. Some of it is being perpetrated by presidents and others who have taken formal oaths to "preserve, protect and defend the Constitution of the United States."
Sometimes knowingly, often unwittingly, both types of people will spare no pains in preserving those parts of the written or unwritten constitution that protect the rights of "corporate persons" while undermining, attacking, or perverting those parts of the Constitution that promote the welfare and liberties of the great majority of all other persons.
p231 Although there have always been ups and downs in the relationship between the president, the Congress and the Supreme Court, the general tendency has been toward a strengthening of the presidential network. This is particularly true in foreign affairs. Strangely, the first step toward greater domination of the Congress and the courts is to achieve greater mastery of the bureaucracy. This means tighter control of all appointments, including the review by White House staff of subordinate-level appointments in the various departments. It means tighter control of the federal budget, with traditional budgetary control expanded to include both policy review and efficiency analysis. In his effort to master the bureaucracy, President Nixon and his aides went very far in subjecting various officials to quasi-legal wiretaps. President Carter broke new ground by having his economic advisers review the decisions of regulatory agencies that impose on corporations the small additional costs of environmental or consumer protection. Both presidents used their close associations with big-business lobbyists to bring recalcitrant bureaucrats into line and to see to it that they follow the "president's program" in dealing with the Congress or the courts. Throughout American history wags have suggested that the U.S. Congress has been the best that money could buy. This joke expresses popular wisdom on how far big money can go in "owning" or "renting" members of the House and the Senate. In the present era of megabuck money, however, the old wisdom is out of date. With enough attention to "congressional reform" and the cost-effectiveness of campaign and lobbying expenditures, the top elites of the modern Establishment could buy a "much better" Congress.
p233 Every major group at the Establishment's highest levels already has avant garde representatives, proponents, and defenders among the members, committees and subcommittees of Congress. Thus at some date, earlier or later, we may expect new investigatory committees of Congress working closely with the major intelligence and police networks and handling their blacklists more professionally than those developed during the days of Joseph McCarthy. We may expect special investigations of monopoly, transnational corporations, international trade, education, science and technology, civil liberties, and freedom of the press. But instead of being controlled by unreliable liberal reformers, they would be initiated and dominated by a new breed of professional "technopols" dedicated to the strengthening of oligarchic corporations, providing greater subsidization of the supranationals, strengthening the international capitalist market, filling "gaps" in military science and technology, extending the conformist aspects of the educational system, routinizing police-state restraints on civil liberties, and engineering the restraint of the press by judicial action. A small idea of what is involved here is provided by Professor Alexander Bickel's 1971 brief before the Supreme Court in the case of the Justice Department's effort to prevent publication of the famous "Pentagon Papers."
The Yale University law professor proposed the establishment of clear guidelines for prior restraint of the press by the executive branch. Here is a challenging task for imaginative lawyers-particularly if they work for strategically placed members of Congress eager to find a loophole in the old Constitutional proviso against the making of laws that abridge the freedom of the press. In the winter of 1936, "the most liberal four members of the Supreme Court resigned and were replaced by surprisingly unknown lawyers who called President Windrip by his first name." This is part of how Sinclair Lewis-in his book It Can't Happen Here-projected his vision of how "it" could suddenly happen here. Though a new "it" would happen more slowly, a decisive group of four or more justices can still be placed on the Court by sequential appointment during the slow trip down the road to serfdom. During this trip the black-robed defenders of the Constitution would promote the toughening of federal criminal law. They would offer judicial support for electronic surveillance, "no-knock entry," preventive detention, the suspension of habeas corpus, the validation of mass arrests, the protection of the country against "criminals and foreign agents," and the maintenance of "law and order." The Court would at first be activist, aggressively reversing previous Court decisions and legitimating vastly greater discretion by the expanding national police complex. Subsequently, it would probably revert to the older tradition of stare decisis-that is, standing by precedents. The result would be the elimination of opportunities for juridical self-defense by individuals and dissident organizations while maintaining orderly judicial review of major conflicts among components of the oligarchy and the technostructure. If this slow process of subverting constitutional freedoms should engender protest, the Men in Black may well respond with judicial jiujitsu. The administrative reform and reorganization of the judicial system, for example, is needed to overcome backlogs of cases and provide speedier trials. It would require the consolidation of the judicial system, the development of merit systems for judicial employees, the raising of judicial salaries, and stricter standards for outlawing "objectionable" lawyers, all of which pose ample opportunity for undermining legal protection in the name of reform or efficiency. Judicial approval of new functions for grand juries serves as another example. Historically, federal grand juries were created as a bulwark against the misuse of executive authority. The Fifth Amendment states that a person should not be tried for a serious crime without first being indicted by a grand jury. Thus, a prosecuting attorney's charges would not be sufficient-at least not until upheld by a specially selected jury operating in secret sessions. Historically, grand juries have been widely used to investigate charges of corruption in local government. More recently, they have been set up to investigate political cases under federal criminal laws dealing with subversion and the draft. There have been times when at least twelve federal grand juries were operating simultaneously and using their subpoena power vigorously. Collectively, these may be regarded as "trial runs" which a Supreme Court on the road to friendly fascism would perfect with decisions upholding the wide use of subpoena power by the grand juries and the denial of transcripts to witnesses.
The strong point of a friendly fascist grand jury system is the "Star Chamber" secrecy that could be made operational throughout the fifty states. But this should not obscure the contrapuntal value of a few highly publicized trials. A grand jury indictment can do more than merely set the stage for a showcase trial. It can sort out conflicting evidence in such a way as to induce a self-defeating defense. This can be much more effective than the elaborately contrived "confessions" developed by the Russian secret police in the many purges of Old Bolsheviks. Shrewd and technically expert legal strategies could crucify opponents without allowing them-dead or alive-to be converted into martyrs. p239 Gary Wills "If a nation wishes, it can have both free elections and slavery." p239 President Richard M. Nixon "The average American is just like the child in the family." p239 If friendly fascism arrives in America, the faceless oligarchy would have little or nothing to gain from a single-party system. Neither an elitist party along Bolshevik lines nor a larger mass party like the Nazis would be necessary. With certain adjustments the existing "two party plus" system could be adapted to perform the necessary functions. The first function would be to legitimate the new system. With all increases in domestic repression, no matter how slow or indirect, reassurance would be needed for both middle classes and masses. Even in the past, national elections have provided what Murray Edelman has described as "symbolic reassurance." According to Edelman, elections serve to "quiet resentments and doubts about particular political acts, reaffirm belief in the fundamental rationality and democratic character of the system, and thus fix conforming habits of future behavior." Second, political-party competition would serve as a buffer protecting faceless oligarchs from direct attack. This would not merely be a matter of politics-as when the slogan of "ballots not bullets" is used to encourage the alienated to take part in electoral processes. It would be a question of objectives. The more that people are encouraged to "throw the rascals out," the more their attention is diverted from other rascals that are not up for election: the leaders of macrobusiness, the ultra-rich, and the industrial-military-police-communications-health-welfare complex. Protests channeled completely into electoral processes tend to be narrowed down, filtered, sterilized, and simplified so that they challenge neither empire nor oligarchy. p243 In their march to power in Germany, Italy, and Japan, the classic fascists were not stupid enough to concentrate on subverting democratic machinery alone. They aimed their main attack, rather, against the nongovernment organizations most active in using and improving that machinery; namely, the labor movement and the political parties rooted in it. In Germany, where these organizations seemed immensely powerful, many German leaders thought that even with Adolf Hitler as chancellor, fascism could make little headway. They underestimated the Nazis and their Big Business backers. "All at once," observed Karl Polanyi, the historian, "the tremendous industrial and political organizations of labor and other devoted upholders of constitutional freedom would melt away, and minute fascist forces would brush aside what seemed until then the overwhelming strength of democratic governments, parties and trade unions." In most First World democracies a slow meltdown has already started.
As I pointed out in "The Take-Off toward a New Corporate Society", conglomerate or transnational corporations expand beyond the scope of any labor unions yet invented. In the more narrow spheres where labor organization is well established, the unions have usually been absorbed into the Establishment's junior and contingent levels, often becoming instruments for disciplining workers. As the work force has become more educated, sophisticated, and professionalized, many labor leaders have become stuffy bureaucrats, unable to communicate with their members, and terrified at the thought of widespread worker participation in the conduct of union affairs. Some of them have been open practitioners of racism, sexism, and ageism. The media have done their bit by exaggerating the power of organized labor and the extent of labor union racketeering and corruption. The new class of conservative intellectuals, in turn, has launched devastating attacks on labor unions as interferences with the "free market" and as the real villains behind high prices and low productivity. All these factors have contributed to a major loosening of the ties between organized labor and the intellectuals, ties that are quickly replaced by grants, contracts, and favors from foundations and government agencies. In the Third World countries of dependent fascism, antilabor activity has become much more blatant. There the response to trade unions is vigorous resort to the old-time methods used in Western Europe and America during the nineteenth century: armed union-busters, police and military intervention, machine guns, large-scale arrests, torture, even assassination. In countries like Argentina, Chile, Brazil, South Korea, Taiwan, the Philippines, Zaire, and many others, these measures have proved decisive in attracting transnational investment and keeping wages down. They have also helped beat back the forces of socialism and communism in these countries. Although First World establishments have generally supported (and often braintrusted) this kind of action in the Third World, I do not foresee them resorting to the same strategies at home. The logic of friendly fascism calls, rather, for a slow and gradual melting away of organized labor and its political influence. At the outset of the 1980s, major steps in this direction are already under way in the United States. They are being worked out by an impressive array of in-house labor relations staffs in the larger corporations and of out-house consulting firms made up of superslick lawyers, personnel psychologists, and specialists in the conduct of anti-union campaigns. The efforts of these groups are backed up by sectoral, regional, and national trade associations, the U.S. Chamber of Commerce, the National Association of Manufacturers, the Business Roundtable, and a long series of "objective" studies commissioned either by these groups or the new "think tanks" of the Radical Right. The heat for the meltdown is applied on four major fronts. First, the union-busters operate on the principle of containing labor organization to those places where unions already exist. This requires strenuous efforts to preserve a "union-free environment" in the South, in small towns, and among white-collar, technical, and migratory workers. When efforts are made to extend unionism into one of these areas, the union-busters come in to help the managers conduct psychological warfare. Often, the core of such a campaign is "the mobilization of supervisors as an anti-union organizing committee." 
Each supervisor may be asked to report back to a consultant, often daily, about the reactions of employees. There may be as many as twenty to twenty-five meetings with each employee during a union campaign. In one successful campaign at Saint Elizabeth's hospital outside of Boston, according to Debra Hauser, the methods used included the discriminatory suspension or firing of five union activists; surveillance, isolation, interrogation and harassment of other pro-union employees; and misrepresentation of the collective bargaining process by top management. "This resulted in the creation of an atmosphere of hysteria in the hospital." A second front is the dissolution of unions already in operation. Construction companies have found that this can be done by "double-breasting"-that is, by dividing into two parts, one operating under an existing union contract and the other part employing nonunion labor. The unions themselves can be dissolved through "decertification," a legal process whereby the workers can oust a union that already represents them. Under the National Labor Relations Law, management cannot directly initiate a decertification petition. But managers have learned how to circumvent the law and have such petitions filed "spontaneously" by employees. They have also learned how to set the stage for deunionization by forcing unions out on strikes that turn out to be destructively costly to both the unions and their members. The third front is labor legislation. In many states the business lobbies have obtained legislation which-under the label of "right-to-work" laws -make union shops or closed shops illegal. Nationally, they are trying to repeal the Davis-Bacon Act (which maintains prevailing union wage rates on government-sponsored construction) and impose greater restrictions on peaceful picketing. Fourth, the most generalized heat is that which is applied by the austerity squeeze of general economic policies. This heat is hottest in the public employment area, particularly among teachers and other municipal or state workers where unionization has tended to increase during recent years. As a result of all these measures, the labor movement in America has failed to keep up with population growth. Union membership in 1980 covered about 22 million employees. Although this figure is larger than that of any past year, it represents a 3 percent decline from 1970, when union members accounted for 25 percent of non-farm employment. This slow melting away of labor's organized force has not been a free lunch. It has cost money-lots of it. But the consequences have also been large: a reduction in the relative power of organized labor vis-a-vis organized business. Anybody who thinks this reduction is felt only at the bargaining table would be making a serious error. Its consequences have been extremely widespread. For one thing, the morale, crusading spirit, and reformist fervor has itself tended to dissipate within many, if not most, branches of the labor movement. Dedication toward the extension of democracy has often been replaced by cynical inactivism. This has been felt by all the many agencies of government that have traditionally looked to labor for support in the extension and improvement of government services in health, education, welfare, housing, environmental protection, and mass transportation. It has been felt by all candidates for public office, for whom labor support now means much less than in previous years. 
Above all, the weakening of the labor movement has been one of the many factors in the sharp conservative drift within the Democratic party. This drift reinforces the widespread idea that there is little likelihood of serious disagreement on major issues of policy between the two major parties. The continuation of this drift would be one of the most important factors in brushing aside what might still seem to some as the overwhelming strength of America's democratic machinery. p251 Ferdinand Lundberg "If the new military elite is anything like the old one, it would, in any great crisis, tend to side with the Old Order and defend the status quo, if necessary, by force. In the words of the standard police bulletin known to all radio listeners, "These men are armed -and they may be dangerous." p251 Edward Luttwak "A coup consists of the infiltration of a small but critical segment of the state apparatus, which is then used to displace the government from its control of the remainder." p251 Capitalist democracy has often been described as a poker game in which the wealthiest players usually win most of the pots and the poor players pick up some occasional spare change. p252 ... a first principle of any replacement coup in the First World is that the replacers operate in the name of "law and order" and appear as the defenders of the Constitution against others eager to use force against it. Something along these lines happened in Japan back in 1936 when a section of the army staged a short-lived revolt against the "old ruling cliques." The defeat of this "fascism from below," as Japanese historian Masao Maruyama points out, facilitated "fascism from above," respectable fascism on the part of the old ruling cliques. In modern America, much more than in Japan of the 1930s, the cloak of respectability is indispensable. Thus a "feint" coup by Know Nothing rightists or a wild outburst of violence by left-wing extremists could be effectively countered by the military establishment itself, which, in defending the Constitution, could take the White House itself under protective custody. A preventive coup is more sophisticated; it avoids the replacement coup's inherent difficulties by keeping an undesirable regime-after it has been elected-from taking power. Edward Luttwak, author of the first general handbook on how to carry out a coup, has himself published an excruciatingly specific application: "Scenario for a Military Coup d'Etat in the United States." He portrays a seven-year period-1970 through 1976-in which as a result of mounting fragmentation and alienation, America's middle classes become increasingly indifferent to the preservation of the formal Constitution. Under these circumstances two new organizations for restoring order are formed. With blue-ribbon financial support, the Council for an Honorable Peace (CHOP) forms branches in every state. The Urban Security Command (USECO) is set up in the Pentagon. CHOP prepares two nationwide plans: Hard Surface, to organize right-wing extremists, and Plan R for Reconstruction, based on the principle that "within the present rules of the political game, no solution to the country's predicament can be found." Then, during the 1976 election campaign the Republican candidate is exposed by a former employee as having used his previous senatorial position for personal gain. With a very low turnout at the polls, the Democratic candidate easily wins. 
Thus "an essentially right-of-center country is now about to acquire a basically left-of-center administration." Immediately after election day, CHOP and USECO put into effect Plan Yellow, the military side of Plan R. By January 4, 1977, the new regime is in power. A still more sophisticated form of preventive coup would be one designed to prevent the formal election of a left-of-center administration. In the event that the normal nominating processes fail to do this, any number of scenarios are possible before election day: character defamation, sickness, accidental injury, assassination. If none of these are feasible, the election itself can be constitutionally prevented. Urban riots in a few large central cities such as New York, Newark, and Detroit could lead to patrolling of these areas by the National Guard and Army. Under conditions of martial law and curfews during the last week of October and the first week of November large numbers of black voters would be sure to be kept from the polls. With this prospect before them many black leaders, liberals, and Democratic officials would ask for a temporary postponement of elections in order to protect the constitutional right to vote. Since there is no constitutional requirement that voting in national elections be held on the same day throughout the country, there might well be a temporary postponement in New York, New Jersey, and Michigan. The political leaders of these states, in fact, would soon see that postponement puts them in a remarkably influential bargaining position. After voting results are already in from all other states, the voting in their states would probably determine the election's outcome. Party leaders in Illinois and California would then seek postponement also. To restore equilibrium, elections could then be postponed in many other states, perhaps all of them. Tremendous confusion would thus be created, with many appeals in both state and federal courts-and various appeals to the Supreme Court anticipated. In short order Article II, Section 3 of the Constitution would come into effect. Under this provision the Congress itself declares "who shall then act as President" until new provisions for election are worked out by the Congress. If major differences prevent the Congress from making all these decisions, the stage is then set for the kind of regime described by Luttwak under a name such as The Emergency Administration for Constitutional Health (TEACH). In treating Americans like children in the family, the "Teachers" would not spoil the child by sparing the rod. The best form of prevention, however. is a consolidation coup, using illegal and unconstitutional means of strengthening oligarchic control of Society. This is the essence of the nightmares in The Iron Heel and It Can't Happen Here. Both Jack London's Oligarchy and Sinclair Lewis' President Windrip, after reaching power through constitutional procedures, used unconstitutional means in consolidating their power. This is rather close to the successful scenarios followed by both Mussolini and Hitler. If something like this should happen under-or on the road to- friendly fascism, I think it would be much slower. The subversion of constitutional democracy is more likely to occur not through violent and sudden usurpation but rather through the gradual and silent encroachments that would accustom the American people to the destruction of their freedoms. 
p255 Jean-Jacques Rousseau - Emile "There is no subjugation so perfect as that which keeps the appearance of freedom, for in that way one captures volition itself." p255 Information has always been a strategic source of power. From time immemorial the Teacher, the Priest, the Censor, and the Spy have helped despots control subject populations. Under the old-fashioned fascist dictatorships, the Party Propagandist replaced the Priest, and the control of minds through managed information became as important as terrorism, torture, and concentration camps. With the maturing of a modern capitalism, the managing of information has become a fine art and advancing science. More powerful institutions use world-spanning technologies to collect, store, process, and disseminate information. Some analysts see a countervailing equilibrium among these institutions. While computerized science and technology produce shattering changes, it is felt that the schools and the media tend to preserve the status quo. Actually, all these institutions have been involved in changing the world. Each has played a major role in easing the difficult transition from national to transnational capitalism by winning greater acceptance of manipulation or exploitation-even as it becomes more extensive and intensive-by those subjected to them. Only through managed information can volition itself be captured and, as Rousseau recognized, can minds be so perfectly subjugated as to keep "the appearance of freedom." Indeed, friendly fascism in the United States is unthinkable without the thorough integration of knowledge, information, and communication complexes into the Establishment. At that point, however, the faceless oligarchy could enjoy unprecedented power over the minds, beliefs, personalities, and behavior of men, women, and children in America and elsewhere. The information overlords, intellectuals, and technicians-sometimes unwillingly, more often unwittingly-would be invaluable change agents in subverting (without any law of Congress doing it openly) the constitutional freedoms of speech and press. So much "progress" has already been made in the management of minds that it is hard to distinguish between current accomplishments and future possibilities. The difficulty is compounded by the fact that the best critics of the information industry (like the best analysis of the American power structure) have often exaggerated the damage already done. This is a risk that I too must run, although I should prefer, rather, to understate what has already occurred and-for the sake of warning-overstate the greater terrors that may lie ahead. p256 Herbert Schiller "The content and forms of American communications-the myths and the means of transmitting them-are devoted to manipulation. When successfully employed, as they invariably are, the result is individual passivity, a state of inertia that precludes action." p256 For Hitler, according to Hermann Rauschning, marching was a technique of mobilizing people in order to immobilize them. Apart from the manifest purpose of any specific march (whether to attack domestic enemies or occupy other countries) Hitler's marchers became passive, powerless, non-thinking, non-individuals. The entire information complex-which includes education, research, information services, and information machines as well as communications-has the potential of becoming the functional equivalent of Hitler's march.
As I reflect on Hermann Rauschning's analysis of Hitler's use of marching as a means of diverting or killing thought, I feel that it would be no great exaggeration to rewrite one of these sentences with the word "TV" replacing "marching." That gives us this: "TV is the indispensable magic stroke performed in order to accustom the people to a mechanical, quasi-ritualistic activity until it becomes second nature." As a technique of immobilizing people, marching requires organization and, apart from the outlay costs involved, organized groups are a potential danger. They might march to a different drum or in the wrong direction . . . TV is more effective. It captures many more people than would ever fill the streets by marching-and without interfering with automobile traffic. It includes the very young and the very old, the sick and the insomniac. Above all, while marching brings people together, TV tends to separate them. Even if sitting together in front of the TV, the viewers take part in no cooperative activity. Entirely apart from the content of the messages transmitted, TV tends to fragment still further an already fragmented population. Its hypnotic effect accustoms "the people to a mechanical, quasi-ritualistic activity until it becomes second nature." And TV training may start as early as toilet training. Unlike marching, TV viewing can fill huge numbers of hours during both day and night. According to the Statistical Abstract, the average TV set in America is turned on, and viewed, for more than six hours a day, which amounts to over forty-two hours a week. This is much more than the average work week of less than thirty-six hours and still more than the hours anyone spends in school classrooms. Among women, blacks, and poor people generally, the average figure rises to over fifty-five hours a week. Televised sports events attract huge numbers of spectators. Widely touted educational programs for children help "hook" children at an early age, thereby legitimating their grooming to become passive viewers all their lives. But it should not be assumed that the more adult, educated, and privileged elements in the population are immune to TV narcosis. The extension of educational TV in general-like "public interest" or "alternative" radio-caters mainly to elite viewers. If this trend continues, even intellectuals and scientists, as pointed out to me by Oliver Gray, a former Hunter College student, may well be trapped into hours upon hours of viewing the cultural heritages of the past, both artistic and scientific. Many parts of the information complex also serve a custodial function that separates people from the rest of society. This is a form of immobilization that goes far beyond the march. The hypnotizing effect of TV, both mass and elite, can also be augmented by allied developments in modern information processing and dissemination. For example, the fuller use of cable and satellite technology could do much more than bring TV to areas outside the reach of ordinary broadcasting facilities. It could also provide for a much larger number of channels and a larger variety of programming. This could facilitate the kind of sophisticated, pluralistic programming which appeals to every group in the population. The danger is that an additional layer of "cultural ghettoization" might then be superimposed on residential ghettoization.
With extensive control "banks" of TV tapes that can be reached by home dialing and with widespread facilities for taping in the home, almost every individual would get a personalized sequence of information injections at any time of the day-or night. TV fixes people in front of the tube in their own houses, without a marginal cent of additional social overhead to cover the cost of special buildings. The young people who walk the streets with transistor radios in their hands, or even with earphones on their heads, are imprisoned in their own bodies. During the 1967-74 period of the Greek junta, the number of TV receivers and viewers in Greece steadily rose-much more rapidly than the number of people released from jails in recurring amnesties. By the time the junta was replaced by a conservative civilian government and all the political prisoners were let free, TV sets were already being installed in the bars of Athens and the coffee houses of village Greece. In America meanwhile TV sets have been installed, as a reinforcement of the custodial functions, not only in jails and hospitals but also in nursing homes for the aged. One of the reasons why nursing homes are an important growth industry for the 1980s is the fact that TV, radio, and tapes provide the "indispensable magic stroke" needed to accustom older people to acceptance of life in a segregated warehouse. According to Arthur R. Miller, TV teaching programs, entirely apart from their content, "anesthetize the sensitivity and awareness" of students, no matter what their age. This paraphrase of Arthur Miller's comment p259 Adolf Hitler "Through clever and constant application of propaganda, people can be made to see paradise as hell, and also the other way around to consider the most wretched sort of life as paradise. " p259 "You may fool all of the people some of the time; you can even fool some of the people all of the time," said Abraham Lincoln, "but you can't fool all of the people all of the time." Yet Lincoln's famous statement antedates the modern-day information complex and its potentialities for service to modern capitalism. Hitler's boast about what he could do with "the clever and constant application of propaganda" is also outdated -so too, his more quoted statements that big lies are more easily believed than small ones. Improvements in the art of Iying have kept up with advances in communication hardware. The mass-consumption economy of transnational capitalism requires the ingenious invention of impressively (sometimes even artistically) presented myths to disguise the realities of capitalist exploitation. In the misleading advertisements of consumers goods the arts of professional Iying are technically referred to as "puffery . . . the dramatic extension of a claim area." With the rapid extension of puffery to include all aspects of politics and institutional advertising, it is not too hard to visualize the faceless oligarchs as managing to fool most of the people (including some of themselves and more of their professional aides) most of the time. The size of lies varies immensely with the directness or indirectness of propaganda. Thus advertising in the mass media deals mainly with small lies projected into the minds of millions of viewers, listeners, and readers. The truly big lies are those that create the myths of what George Gerbner calls the "symbolic environment." 
These myths penetrate the innermost recesses of consciousness and affect the basic values, attitudes, and beliefs-and eventually volition and action themselves-of viewers, listeners, and readers. Herbert Schiller analyzes five of the myths, which in his judgment have represented the media's greatest manipulative triumphs of the past: (1) the myth of individualism and personal choice; (2) the myth that key social institutions are neutral instead of serving concentrated wealth and power; (3) the myth that human nature does not change, despite the mythmakers' successes in helping to change it; (4) the myth of the absence of serious social conflict; and (5) the myth of media pluralism. Of making myths there is no end. In an era of friendly fascist "triplespeak," the imagery of major myths must constantly be updated, and one obvious technique in both mass and elite media is to "take over the symbols of all opposition groups." Peace, equality, black power, women's rights, the Constitution, for example, may become prominent in the sloganry justifying increased armament, oligarchic wealth, institutionalized white and male supremacy, and the subversion of constitutional rights. The thin veneer of Charles Reich's Consciousness Three could become a useful facade to adorn the evolution of his Consciousness Two into a more highly developed technocratic ideology. Under friendly fascism, one could expect the shameless acceptance of a principle already cynically tolerated in advertising: "Exploit the most basic symbols of human needs, human kindness, and human feeling." For those hardened to such appeals, there would be a complementary principle: "Make plentiful use of scientific and technical jargon." Of course, not even the most skillful of media messengers can juggle their imagery so as to avoid all credibility gaps. In this sense, Lincoln was right: at least some of the people some of the time will be aware that someone is trying-very hard-to fool them. But it is wishful thinking to assume that these failures in mind management will necessarily have a positive outcome. Unfortunately even credibility gaps can be functional in the maintenance of a nondemocratic system. They may deepen the sense of cynicism, hopelessness, and alienation. A barrage of mythmaking can create a world of both passive acquiescence and of little real belief or trust. In such a world, serious opponents of friendly fascism would have but a slight chance of winning a hearing or keeping anyone's allegiance. p260 Aldous Huxley "Hitler's vast propaganda successes were accomplished with little more than the radio and loudspeaker, and without TV and tape and video recording . . . Today the art of mind control is in the process of becoming a science." p261 Fred Friendly, head of CBS News, ... pointed out that CBS was in business to make money and that informing the public was secondary to keeping on good terms with advertisers. p262 In George Orwell's 1984 Winston Smith and his fellow bureaucrats in the Ministry of Truth labored diligently to rewrite past history. Under friendly fascism, in contrast, skillful technicians and artists at scattered points in the information complex will create current history through highly selective and slanted reporting of current events. Like self-regulation of business, self-censorship is the first line of defense. "Prior restraint" is more effective when part of volition itself, rather than when imposed by courts or other outside agencies.
Under friendly fascism the biggest secrets would no longer be in the thriller-story areas of old-fashioned espionage, military technology, and battle plans. Nor would there be much, if any, censorship-even among America's more prudish partners in the dependent fascist regimes of Brazil, Chile, Pakistan or Indonesia-of visual or written portrayals of frontal nudity and sexual intercourse. The primary blackout would be on any frontal scrutiny of the faceless oligarchs themselves and their exploitative intercourse with the rest of the world. It would not be enough to divert attention toward celebrities, scandals, and exposés at lower and middle levels of power, or new theories exaggerating the influence of knowledge elites, technicians, labor unions, and other minor pressure groups. Neither scholars, reporters, congressional committees, nor government statisticians would be allowed access to the internal accounts of conglomerates and transnationals. Whenever such information would be compiled, it would be done on the basis of misleading definitions that underestimate wealth, profit, and all the intricate operations necessary for serious capital accumulation. As already indicated, "straight talk" must never be recorded in any form, and, if recorded, must be promptly destroyed. Recurring clampdowns by "plumbers' groups" would also enforce established procedures for official leaks to favorite reporters or scholars. At present, information on corporate corruption at the higher levels is played down in both the mass and elite media. Under friendly fascism, while the same activities would take place on a larger scale, they would be protected by double cover-on the one hand, their legalization by a more acquiescent and cooperative state, and, on the other hand, the suppression of news on any such operations that have not yet been legalized. The whole process would be facilitated by the integration of the media into the broader structure of big business. Thanks to the recurrent shakeups, quasi-independent newspapers and publishing houses would become parts of transnational conglomerates, a trend already well under way. To make a little more money by exposing how the system works, bringing its secrets to light, or criticizing basic policies (as in the case of this book's publication) would no longer be tolerated. Dissident commentators would be eased out, kicked upstairs, or channeled into harmless activities. "Prior restraint" would be exercised through the mutual adjustments among executives who know how to "go along and get along." Although "actualities" have thus far been used mainly in political campaigns, it seems likely that in the transition to a new corporate society they will become a standard means of making current history. Whenever necessary, moreover, residual use would be made of direct, old-fashioned censorship: some matters cannot be left to decentralized judgment. Thus, where official violence leads to shooting people down in jails, hospitals or factories, or on the street or campus, there would be a blackout on bloodshed. If a My Lai should occur in Muncie, Indiana, the news would simply not be transmitted by the media. A combination of legal restraints, justified by "national security" or "responsibility," would assure that the episode would simply be a nonevent. p263 Larry P. Gross "While the Constitution is what the judges say it is, a public issue is something that Walter Cronkite or John Chancellor recognizes as such.
The media by themselves do not make the decisions, but on behalf of themselves and larger interests they certify what is or is not on the nation's agenda." p263 A problem usually becomes a "public issue," as pointed out in an earlier chapter, when open disputes break out within the Establishment. But even then, there is a selection process. Many vital disputes-particularly those among financial groups-are never aired at all. Sometimes the airing is only in the elite media-business publications, academic journals, or the liberal or radical press. Those who seek to create a "public issue" must often first submit their petitions to the elite media, hoping that they may then break through to the mass media. Issues that are finally "certified" by a Walter Cronkite or John Chancellor are, in the words of Larry P. Gross, thereby placed on the "nation's agenda." But this privileged position cannot last any longer than a popular song on the "hit parade." Civil rights, busing, women's lib, pollution, energy shortages-such issues are quickly created and then unceremoniously even cast into the shadows of the elite media. Under such circumstances, the time available in the hit parade of vital issues is not enough for serious presentation, let alone sustained analysis, of alternative views. This kind of issue creation helps nourish the drift toward a new corporate society in which the range of public issues would be narrowed much more rigorously and the nation's agenda rendered much more remote from the real decision making behind the curtains of a more integrated establishment. In Don't Blame the People, a well-documented study of bias in the mass media, Robert Cirino shows in detail how "money buys and operates the media" and how this fact "works to the advantage of those with conservative viewpoints," namely, the radical right, the solid conservatives, and the moderate conservatives. The radical left and the solid liberals are outside the limits, thus leaving the moderate liberals to "compete alone against the combined mass media power of the conservative camp." But to have their petitions recognized by the mass media, the moderate liberals usually have to accept or operate within the unwritten rules of the game. Thus their tendency, I would argue, is increasingly to press upon moderate conservatives the kind of reforms which, although usually opposed by solid conservatives, are required to strengthen Establishment conservatism. Similarly, the tendency is among the solid liberals and the radical left to win some slight hearing for their own voices by accepting as a fact of life (what choice is there?) the agenda as certified by the media. The middle ground is moved still further to the right as conservative or moderate-liberal money subsidizes the radical left and the more militant liberals. Such shifts are supported by the growth of highly sophisticated conservatism, as illustrated by the National Review, Commentary, and The Public Interest. Within these elite circles the spirit of conservative controversy flourishes, both dominating the agendas of nonconservatives and giving the appearance of broader freedom. How much further a friendly fascist regime would go in narrowing still further the limits of elite opinion among solid liberals and the radical left is impossible to predict. The important point is that the basic trends in the information complex could render dissenting or critical opinions increasingly isolated and impotent. 
p267 Edmund Carpenter "The White House is now essentially a TV performance." p267 Fred W. Friendly, head of CBS News, said of the American presidency: "No mighty king, no ambitious emperor, no pope, or prophet ever dreamt of such an awesome pulpit, so potent a magic wand." p267 In capitalist countries the business of all the private mass media is making money from advertising revenue. Their product is the seeing, listening, or reading audience-or more specifically the opportunity to influence the audience. Although the members of the TV and radio audience seem to be getting something for nothing, in reality they pay for the nominally free service through the prices they pay for advertised products. The larger the estimated audience, the more money the media receive from advertisers. The biggest exception is the provision of free time-usually prime time-to the chief executive. In return, the media feel they maintain the goodwill of a government which has granted them without any substantial charge the highly profitable right to use the airwaves. This indirect cash nexus is customarily smothered in a thick gravy of rhetoric about "public service." But no equivalent services are provided for the chief executive's political opposition, or for lesser politicians. And in the United States, as distinct from some other capitalist countries, the media extort enormous fees from all candidates for political office, a practice that heightens the dependence of all elected officeholders (including the president) upon financial contributions from more or less the same corporations who give the media their advertising revenue. Friendly fascism in the United States would not need a charismatic, apparently all-powerful leader such as Mussolini or Hitler-so I have argued throughout this book. The chief executive, rather, becomes the nominal head of a network that not only serves as a linchpin to help hold the Establishment together but also provides it with a sanctimonious aura of legitimacy through the imagery of the presidential person, his family, his associates, and their doings. The chief executive is already a TV performer, and his official residence is indeed "an awesome pulpit" from which he and his entire production staff can wield a potent "magic wand." p303 Ronald Reagan when governor of California "If it takes a bloodbath ... let's get it over with." p329 Baron De Montesquieu, The Spirit of the Laws "The tyranny of a prince in an oligarchy is not so dangerous to public welfare as the apathy of a citizen in a democracy." ______________________________________________________________________ Impossibility: It Couldn't Happen excerpted from the book Friendly Fascism The New Face of Power in America http://www.thirdworldtraveler.com/Fascism/It_Couldn%27t_Happen_FF.html p331 Karl Popper "'It can't happen here' is always wrong: a dictatorship can happen anywhere." p331 IMPOSSIBILITY: IT COULDN'T HAPPEN The thought that some form of new fascism might possibly-or even probably-emerge in America is more than unpleasant. For many people in other countries, it is profoundly disturbing; for Americans, it is a source of stabbing anguish. For those who still see America as a source of inspiration or leadership, it would mean the destruction of the last best hope on earth. Even for those who regard America as the center of world reaction, it suggests that things can become still worse than they are. An immediate-and all too human-reaction among Americans, and friends of America, is to deny the possibility.
In other countries it might happen-but not here. In the Communist world, dictatorships of the proletariat or the Party . . . Military juntas in Argentina, Brazil, Chile, Nigeria, and many other places . . . Other dictatorial styles in India, Pakistan, Iran, Saudi Arabia, and the Philippines . . . But nothing like this in the prosperous, enlightened nations of Western civilization and the Judeo-Christian tradition. Above all, not in the United States of America, not in the land of the free and the home of the brave . . . But why not? Why is it impossible? Many of the arguments purporting to demonstrate impossibility actually demonstrate little more than an unwillingness to "think the unthinkable." Some people try to protect their sensibilities behind a tangle of terminological disputation. The word "fascism," they say, is an emotion-laden term of abuse, as though the brutal, inhuman realities behind other terms-whether "manipulatory authoritarianism," "bureaucratic collectivism," or "military junta"-do not also evoke deep human emotions. Some people argue that the future threat in America is socialist collectivism, not fascism, implying that those who detect a fascist danger are spreading leftist propaganda for the purpose of bringing on a different form of despotism. Others merely react to exaggerated claims that fascism is already here or is inevitable. Nonetheless, there are at least three serious arguments used by those who think that it could not happen here. One of the most subtle arguments is "American capitalism does not need fascism." On this point, let me quote from Corliss Lamont, who grew up as a member of one of the families most closely associated with the Morgans and other titans of American banking: The capitalist class in the United States does not need a fascist regime in order to maintain its dominance. The radical and revolutionary movements are weak and disunited. A large majority of the trade unions are conservative, and are actually part of the establishment . . . I do not see in the offing any constellation of forces that could put fascism across here. To buttress his case, Lamont points out that the threat to American civil liberties was much greater during the periods of the notorious Palmer raids after World War I and of McCarthyism after World War II. He also cites various judicial victories in recent civil liberties cases. Unfortunately, he does not deal directly with the structure of the "capitalist class" and the Establishment, nor with any of the domestic and international challenges to American capitalism. Moreover, his thesis on the weakness of "radical and revolutionary movements" and the conservatism of trade unions is a double-edged argument. True, these factors are no serious challenge to capitalist dominance. By the same token, they could not be regarded as serious obstacles to creeping fascism. On this matter, Lamont leaves himself an escape clause to the effect that he does not see the necessary constellation of forces "in the offing." A similar escape clause has been carved out by Theodore Draper. In a scholarly critique of an earlier article of mine on the subject, he added as an afterthought that he did not intend to give "assurances that we will not follow the German pattern of history into some form of fascism." And then he added that although the Republic is not "immediately in danger, if worse comes to worse, we may yet get some form of fascism. A more widespread argument is "American democracy is too strong." 
It is true, of course, that old-fashioned fascism never took root in a country with a solid tradition and history of constitutional democracy. The kind of democracy that grew up in both England and the United States was too much of a barrier to the Oswald Mosleys, the Huey Longs, and the Father Coughlins of a past generation. Even in France, the rise of the French fascists under Petain occurred only after military conquest by the Nazis. But this kind of argument boils down to nothing less than the identification of obstacles. It provides no evidence to suggest that these obstacles are immovable objects that cannot be overcome or circumvented in the future. In the early 1970s this argument took a more exhilarating-albeit occasionally flatulent-form. The democratic forces are becoming stronger. In The Greening of America, Charles Reich predicted a "revolution of the new generation." He saw in the counterculture of youth a movement that would break through the metal and plastic forms of the Corporate State (which he held was already here) and bring forth a new flowering of the human spirit. This optimistic spirit was repeated in global terms by Jean Francois Revel a year later. In Without Marx and Jesus, Revel pointed out that dissent has always thrived in America and that the new dissenters are building not merely a counterculture but a counter-society that rejects nationalism, inequality, racial and sexual discrimination, and all forms of authoritarianism. As the first and best hope of the world, America will soon produce "a homo novus, a new man very different from other men." I have never laughed at these salvationist predictions. They are based on an honest perception of many of the things that are not merely good, but wonderful, in my country. In fact, as I demonstrate in "The Democratic Logic in Action" (chapter 20), neither Reich nor Revel, nor other celebrants of America's potentialities have done sufficient justice to the variety of these hopeful currents. But they have tended to exaggerate their strength, perhaps on the theory that a strongly presented prophecy might be self-fulfilling. I think it imperative to articulate more fully hopeful visions and to ground them on the more hopeful parts of the present. But in doing so, it would be highly misleading to ignore the fact that the new democratic currents represent a threat to all those elements in the Establishment that look forward to a more integrated power structure. This means conflicts whose outcomes cannot be predicted. Revel himself writes that America is "composed of two antagonistic camps of equal size-the dissenters and the conservatives." Writing before the rise of the new Radical Right, he then hazarded the guess that "the odds are in favor of the dissenters." Nonetheless, he accepted the possibility of the authoritarian suppression, sidetracking, or co-opting of the dissenters. I think he would agree with me today that if this should happen there would be many subspecies of the new man-and new woman-faceless oligarchs, humanoid technocrats, and comatose addicts of loveless sex, drugs, madness, and cults. A third argument is that "While possible, a new form of fascism is too unlikely to be taken seriously." I see this view as a tribute that blindness pays to vision. It is merely a sophisticated way of conceding possibility while justifying inaction. The outside chance, after all, rarely deserves to be a focus of continuing attention. 
In terms of its implications, therefore, "unlikely" may be the equivalent of either "impossible" or "so what?" In daily life, of course, people and groups do take precautionary action to protect themselves or others against some unlikely events. This is the basis of the vast insurance industry in the capitalist world, which provides protection for some people against some of the monetary losses resulting from ill health, accidents, theft, fires, earthquakes, or floods. In all these cases of unlikely "bads," not insurance but prevention is the best protection. In the case of friendly fascism, it is the only protection. Yet prevention is always difficult and requires entry into many fields. The prevention of disease and the prolongation of life go far beyond mere medical services; they involve nutrition, exercise, housing, peace of mind, and the control of pollution. The prevention of theft and corruption goes far beyond anything that can be done by police, courts, and jailers; it involves employment opportunities, working conditions, the reduction of discrimination and alienation, and a cleaning of higher-level corruption. The record is also discouraging in the case of all the unlikely major calamities of the modern age: power blackouts, the disposal of radioactive wastes from nuclear power plants, the control of plutonium from fast-breeder reactors, the spread of nuclear weapons, and the escalation in ever-deadlier forms of nuclear, chemical, and bacteriological overkill. Here preventive action spreads into other fields, going far beyond anything that can be done by "fail-safe" mechanisms. It involves nothing less than alternative forms of energy, human as well as solar, and the destruction of the deadliest weapons, if not the elimination of war itself as a mode of resolving conflicts. There are two natural reactions in the face of the difficulties of prevention. One is to push the possibility into the background by mathematically based arguments that the statistical probability is very low. The other is to exaggerate both the horror and the probability of the calamities to be avoided, justifying such exaggeration on the grounds that it alone can move people to action. I cannot accept either. As in the following chapters, I prefer to deal with preventive action directly. I do so because in my considered judgment, the coming of some new form of fascism in the United States-and other First World countries-is not only more likely than the extreme catastrophe, but it would also contribute to conditions under which most of the others would become less unlikely. At times, I find myself saying that friendly fascism is a two-to-one probability well before the end of the century. Then I stop and remind myself that in diagnosing broad historical trends no quantitative calculus is really possible. A more balanced statement is that friendly-or even unfriendly-fascism is a truly significant, not an insignificant possibility. Perhaps it is even highly probable. INEVITABILITY: IT WILL HAPPEN When Herbert Marcuse writes about "incipient fascism," when Kenneth Lamott used "para-fascism" to describe California as the "distant warning system for the rest of the United States," when Michael Parenti talks about "creeping fascism," the main purpose is to identify present tendencies and future dangers. Similar use might be made of "proto-fascism" or-better yet-"pre-fascism." These are unwhispered words of warning, often engulfed by the vast silences on such subjects by the mass and elite media.
But the ambiguity of these words is often a weakness, one not to be overcome by stridency. They are wide open to anyone's interpretation that what creeps down the road will necessarily get to the road's end, that the latent must become full-blown. The "womb of history" metaphor used so vigorously by Marx tends to suggest that a little fascism is like a little pregnancy. With a strange innocence concerning the possibility of miscarriage or abortion, it can then be assumed that the pre- and the para- must eventually become the real thing itself. But even without the use of such words I have found that any strong argument on the possibility of neofascism in America leads many people to conclude that it is inevitable. For some, both the logical case and the empirical evidence in present-day tendencies appear overwhelming. The fact that friendly fascism may come in a variety of forms and circumstances-rather than in some single guise and scenario-strengthens the sense of high probability. For others, perhaps, the judgment of inevitability heightens whatever masochistic pleasure people may get from premonitions of doom, or provides justification for personal escapism from any form of political activism or commitment. For still others, I suspect, the sense of inevitability is intensified by disenchantment with liberalism, socialism, and communism. Many of the very people who in previous periods were attacked as agents of "creeping socialism" or "creeping communism" now feel that if either were to arrive in America-unlikely though this possibility may be-the result might not be too much different from the fruition of "creeping fascism." Indeed the possible convergence of neofascist state-supported capitalism and high-technology state socialism tends to give the impression that there are few alternatives to some form of repressive collectivism as the profile of man's fate by the end of this century. The power of modern determinism lies in its "if-then" formulation: "If one does A, then B will result." In truly scientific terms the "will result" is generally a probability statement. But in the real world of political or managerial control, there is always a strong tendency to let the probabilistic tone fade into the background and to exploit the propagandistic potentialities of a more deterministic mood. In the work of many self-styled Marxists, this has led to an interesting contradiction. On the one hand, the collapse of capitalism under the battering ram of a proletarian revolution is often seen as inevitable. On the other hand, the leaders of the working class must not merely ride the waves of an inevitable future. Rather, they must work strenuously to bring the inevitable into being. Expressing the essence of a long stream of philosophic thought from Kant through and past Hegel, Engels put this powerfully in his cryptic thesis that "freedom is the recognition of necessity." While anti-Marxists are always eager to attack the alleged determinism of Karl Marx, they are rarely loath to voice their own form of determinism. Thus Friedrich Hayek vigorously argues that (1) it was the socialist trends in Germany that led to German fascism, (2) a little bit of socialism leads inevitably to large-scale collectivism, and (3) socialism inevitably leads to fascism. In other words: "If s, then f." Finally, in modern science there is a large strain of hope and faith in the eventual discovery and elucidation of deterministic laws of social control. B. F.
Skinner has expressed this hope and faith more frankly than most of his colleagues in psychology and other disciplines. His critics have argued cogently that his views have a totalitarian bent-and I have already suggested how Skinnerian reinforcements could be used to help economize on terror and develop what Stephen Spender once called "fascism without tears." Another critical comment is in order, however. The very idea of deterministic control tends to spread inner feelings concerning the inevitability of some repressive form of collectivism-whether Skinner's type or some other. In turn, the sense of inevitability tends to undermine any serious efforts to develop alternatives or fight. The prediction that "It must happen"-particularly if the subjective feeling is more powerful than the rationalistic qualifications and "ifs" that most self-respecting intellectuals will automatically tack on to it-can contribute to a sense of hopelessness and the apathetic acceptance of the unfolding logic. It thus holds forth the potentiality of possibly-not inevitably-becoming a self-confirming prophecy. p337 IRREVERSIBILITY: ETERNAL SERVITUDE OR HOLOCAUST To shake people out of apathy toward some future danger, the self-destroying prophecy is often attempted. Its essence is the confident prediction of doom, either confined or unconfined. Thus the coming of neofascism to the United States may be seen as the maturation of an invincible oligarchy, or even as prelude to the global holocaust of all-out nuclear warfare. I am peculiarly sensitive to this temptation. When a few of my students argued a decade ago that fascism would shake Americans from torpor and prepare the way for a more humanist society, I countered one irrationality with another by arguing that the "improbability of any effective internal resistance" to neofascism would doom all hopes of a humanist future. I drew an exaggerated parallel with the past by pointing out that after all serious internal resistance had been liquidated by the German, Japanese, and Italian fascists, "the only effective anti-fascism was defeat by external powers." Since the "only war that could defeat a neofascist America would be a nuclear war, a holocaust from which no anti-fascist victors would emerge," I concluded with the prophecy: "Once neofascism arrives, the only choice would be fascist or dead." My phrasing at that time was an echo of Franklin D. Roosevelt's wartime rhetoric: "We, and all others who believe as deeply as we do, would rather die on our feet than live on our knees."-itself borrowed from the exhortation of the communist leader, Dolores Ibarruri ("La Pasionaria") in rallying the Loyalist forces against the Franco uprising in Spain. It was an effort to suggest "better dead than fascist." The aim in each case, of course, was to stress the urgency of vigorous and dedicated opposition to tyranny-indeed, to give up one's life, if necessary, to prevent the victory of tyranny. Today, while still agreeing with Roosevelt that there are things worth dying for, I would rephrase the ancient rhetoric this way: "Better alive and fighting tyranny in any form than dead and unable to fight." If neofascism should come to America, people may have to learn how to fight on their knees. The guiding rhetoric should be Churchill's statement that "We shall fight in the fields and in the streets; we shall fight in the hills; we shall never surrender." To paraphrase: "We shall face . . . p349 William H. Hastie "Democracy is a process, not a static condition.
It is becoming rather than being. It can easily be lost, but is never fully won. Its essence is eternal struggle." p351 "Sure, we'll have fascism, but it will come disguised as Americanism." This famous statement has been attributed in many forms to Senator Huey P. Long, the Louisiana populist with an affinity for the demagogues of classic European fascism. If he were alive today, I am positive he would add the words "and democracy." p356 Mary Parker Follett "We are not wholly patriotic when we are working with all our heart for America merely; we are truly patriotic only when we are working also that America may take her place worthily and helpfully in the world of nations . . . Interdependence is the keynote of the relations of nations as it is the keynote of the relations of individuals within nations." p359 James Fenimore Cooper "The vulgar charge that the tendency of democracies is to leveling, meaning to drag all down to the level of the lowest, is singularly untrue; its real tendency being to elevate the depressed to a condition not unworthy of their manhood." p359 Louis D. Brandeis "We can have democracy in this country or we can have great wealth in a few hands, but we can't have both." p382 Mahatma Ghandhi "For me patriotism is the same as humanity. I am patriotic because I am human and humane. It is not exclusive. I will not hurt England or Germany to serve India . . . My patriotism is inclusive and admits of no enmity or ill-will." p383 George Washington, Farewell Address "Guard against the impostures of pretended patriotism." p384 In his Militarism, USA, a sober critique based on years of experience in the U.S. Marine Corps, Colonel James A. Donovan: identifies the dangerous patriot: "the one who drifts into chauvinism and exhibits blind enthusiasm for military actions. He is a defender of militarism and its ideals of war and glory. Chauvinism is a proud and bellicose form of patriotism . . . which identifies numerous enemies who can only be dealt with through military power and which equates the national honor with military victory." p384 In The Reason for Democracy, published after his death in 1976, Kalman Silvert of New York University provided another pungent description of false patriots: "People who wrap themselves in the flag and proclaim the sanctity of the nation are usually racists, contemptuous of the poor and dedicated to keeping the community of 'ins' small and pure of blood, spirit and mind." p386 In Germany today the true patriots are those who, among other things, are trying to come to grips with the essence of past Nazi horrors. In the Soviet Union the true patriots are those who try to understand the nature and roots of Stalinism and the Stalinist legacy, rather than simply uttering some words about "the cult of personality" and running away from the subject. In America the true patriots are those who face the fact that Americans have always been both right and wrong and, instead of trying to squelch criticism, calmly take the position "My country right and wrong." They are those who defend the good, the true, and the beautiful in American life. They are willing to take risks in attacking what is wrong... 
______________________________________________________________________ Quotations from the book Friendly Fascism The New Face of Power in America http://www.thirdworldtraveler.com/Fascism/Quotations_FF.html pxiii economist Robert Lekachman "Ronald Reagan must be the nicest president who ever destroyed a union, tried to cut school lunch milk rations from six to four ounces, and compelled families in need of public help to first dispose of household goods in excess of $1,000...If there is an authoritarian regime in the American future, Ronald Reagan is tailored to the image of a friendly fascist." pxxiii Samuel Johnson "Power is always gradually stealing away from the many to the few, because the few are more vigilant and consistent." p32 Daniel R. Fusfeld As long as an economic system provides an acceptable degree of security, growing material wealth and opportunity for further increase for the next generation, the average American does not ask who is running things or what goals are being pursued. p43 James O'Connor "Both welfare spending and warfare spending have a two-fold nature: the welfare system not only politically contains the surplus population but also expands demand and domestic markets. And the warfare system not only keeps foreign rivals at bay and inhibits the development of world revolution (thus keeping labor power, raw materials and markets in the capitalist orbit) but also helps to stave off economic stagnation at home." p54 American Heritage Dictionary "Establishment: An exclusive group of powerful people who rule a government or society by means of private agreements or decisions." p62 Adam Smith "Wherever there is great property, there is great inequality. For one very rich man, there must be at least five hundred poor, and the affluence of the few supposes the indigence of the many." p63 C. Wright Mills No one can be truly powerful unless he has access to the command of major institutions, for it is over these institutional means of power that the truly powerful are, in the first instance, truly powerful . . . p63 Richard Barber Their [a few immense corporations] incredible absolute size and commanding market positions make them the most exceptional man-made creatures of the twentieth century.... In terms of the size of their constituency, volume of receipts and expenditures, effective power, and prestige, they are more akin to nation-states than business enterprises of the classic variety. p167 James Madison "I believe there are more instances of the abridgement of the freedom of the people by gradual and silent encroachments of those in power than by violent and sudden usurpations." p184 Amaury De Riencourt "Caesarism can come to America constitutionally without having to break down any existing institution." p195 William W. Turner "Leadership in the right has fallen to new organizations with lower profiles and better access to power . . . What is characteristic of this right is its closeness to government power and the ability this closeness gives to hide its political extremism under the cloak of respectability." p209 Daniel R. Fusfeld There is a subtle three-way trade-off between escalating unemployment together with other unresolved social problems, rising taxes, and inflation. In practice, the corporate state has bought all three. p210 Slogan of the Medici family "Money to get power, power to protect money."
p219 The major responsibility of corporate executives, so long as they are not constrained by enforced law, is to maximize their long-term accumulation of capital and power no matter what the cost may be to ... people or physical resources. p229 Murray B. Levin "No truly sophisticated proponent of repression would be stupid enough to shatter the facade of democratic institutions. " p229 Thomas R. Dye and Harmon Ziegler "It is the irony of democracy that the responsibility for the survival of liberal democratic values depends on elites, not masses." p239 Gary Wills "If a nation wishes, it can have both free elections and slavery." p239 President Richard M. Nixon "The average American is just like the child in the family." p251 Ferdinand Lundberg "If the new military elite is anything like the old one, it would, in any great crisis, tend to side with the Old Order and defend the status quo, if necessary, by force. In the words of the standard police bulletin known to all radio listeners, "These men are armed -and they may be dangerous." p251 Edward Luttwak "A coup consists of the infiltration of a small but critical segment of the state apparatus, which is then used to displace the government from its control of the remainder." p255 Jean-Jacques Rousseau - Emile "There is no subjugation so perfect as that which keeps the appearance of freedom, for in that way one captures volition itself." p256 Herbert Schiller "The content and forms of American communications-the myths and the means of transmitting them-are devoted to manipulation. When successfully employed, as they invariably are, the result is individual passivity, a state of inertia that precludes action. " p259 Adolf Hitler "Through clever and constant application of propaganda, people can be made to see paradise as hell, and also the other way around to consider the most wretched sort of life as paradise. " p260 Aldous Huxley "Hitler's vast propaganda successes were accomplished with little more than the radio and loudspeaker, and without TV and tape and video recording . . . Today the art of mind control is in the process of becoming a science." p261 Fred Friendly head of CBS news ... pointed out that CBS was in business to make money and that informing the public was secondary to keeping on good terms with advertisers. p263 Larry P. Gross "While the Constitution is what the judges say it is, a public issue is something that Walter Cronkite or John Chancellor recognizes as such. The media by themselves do not make the decisions, but on behalf of themselves and larger interests they certify what is or is not on the nation's agenda." p267 Edmund Carpenter "The White House is now essentially a TV performance. " p267 Fred W. Friendly head of CBS news said of the American presidency "No mighty king, no ambitious emperor, no pope, or prophet ever dreamt of such an awesome pulpit, so potent a magic wand. " p303 Ronald Reagan when governor of California "If it takes a bloodbath ... let's get it over with." p329 Baron De Montesquieu, The Spirit of the Laws "The tyranny of a prince in an oligarchy is not so dangerous to public welfare as the apathy of a citizen in a democracy." p331 Karl Popper ""It can't happen here" is always wrong: a dictatorship can happen anywhere." p349 William H. Hastie "Democracy is a process, not a static condition. It is becoming rather than being. It can easily be lost, but is never fully won. Its essence is eternal struggle." p351 "Sure, we'll have fascism, but it will come disguised as Americanism." 
This famous statement has been attributed in many forms to Senator Huey P. Long, the Louisiana populist with an affinity for the demagogues of classic European fascism. If he were alive today, I am positive he would add the words "and democracy." p356 Mary Parker Follett "We are not wholly patriotic when we are working with all our heart for America merely; we are truly patriotic only when we are working also that America may take her place worthily and helpfully in the world of nations . . . Interdependence is the keynote of the relations of nations as it is the keynote of the relations of individuals within nations." p359 James Fenimore Cooper "The vulgar charge that the tendency of democracies is to leveling, meaning to drag all down to the level of the lowest, is singularly untrue; its real tendency being to elevate the depressed to a condition not unworthy of their manhood." p359 Louis D. Brandeis "We can have democracy in this country or we can have great wealth in a few hands, but we can't have both." p382 Mahatma Ghandhi "For me patriotism is the sme as humanity. I am patriotic because I am human and humane. It is not exclusive. I will not hurt England or Germany to serve India . . . My patriotism is inclusive and admits of no enmity or ill-will." p383 George Washington, Farewell Address "Guard against the impostures of pretended patriotism." p384 In his Militarism, USA, a sober critique based on years of experience in the U.S. Marine Corps, Colonel James A. Donovan: identifies the dangerous patriot: "the one who drifts into chauvinism and exhibits blind enthusiasm for military actions. He is a defender of militarism and its ideals of war and glory. Chauvinism is a proud and bellicose form of patriotism . . . which identifies numerous enemies who can only be dealt with through military power and which equates the national honor with military victory." p384 In The Reason for Democracy, published after his death in 1976, Kalman Silvert of New York University provided another pungent description of false patriots: "People who wrap themselves in the flag and proclaim the sanctity of the nation are usually racists, contemptuous of the poor and dedicated to keeping the community of 'ins' small and pure of blood, spirit and mind." From checker at panix.com Sun Sep 25 20:02:38 2005 From: checker at panix.com (Premise Checker) Date: Sun, 25 Sep 2005 16:02:38 -0400 (EDT) Subject: [Paleopsych] CHE: Education Department Convenes New Commission to Develop 'National Strategy'for Higher Education Message-ID: Education Department Convenes New Commission to Develop 'National Strategy' for Higher Education News bulletin from the Chronicle of Higher Education, 5.9.20 http://chronicle.com/daily/2005/09/2005092001n.htm [So does this sort of national planning strike anyone as an example of at least the beginnings of friendly fascism?] By KELLY FIELD Washington The U.S. Education Department has created a commission to devise a "comprehensive national strategy" on higher education's future, Education Secretary Margaret Spellings announced on Monday in a speech at the University of North Carolina at Charlotte. The panel is to focus on rising enrollments, declining affordability, and colleges' role in America's global competitiveness. "Now is the time to have a national conversation on our goals for higher education," Ms. Spellings said. "I'm here to start that discussion." 
The federal government accounts for one-third of all spending on higher education, compared with just 10 percent of spending on elementary and secondary schools. But the federal government exercises more control over those schools than it does over colleges and universities, which have long enjoyed relative autonomy. In announcing the commission, the education secretary stressed that she was "not advocating for a bigger role for the federal government in higher education," but said that it is "time to examine how we can get the most out of our national investment." Critics said it's absurd to suggest that the "national strategy" will not increase federal intrusion into academe. "If they're going to have a national strategy, who is going to implement it other than the federal government?" asked Neal P. McCluskey, a policy analyst at the Center for Educational Freedom at the Cato Institute, a libertarian group. "My fear is that they're going to duplicate what they have in K-12 in higher education." Ms. Spellings provided few details on the commission in her speech, saying only that it would work to ensure that college is accessible to all Americans, and that students are prepared to compete in the global economy. Congress is tackling similar issues as part of its work to extend the Higher Education Act, but some say its efforts amount to little more than "tinkering." "We've just been rearranging the deck chairs on the Titanic," said David A. Longanecker, executive director of the Western Interstate Commission for Higher Education and an Education Department official under President Bill Clinton. "I think the secretary's effort provides an opportunity to look in front of the ship." Ms. Spellings, whose eldest daughter is a freshman in college, said the commission would also explore ways to ensure that parents and students get the information they need to compare colleges and universities. Speaking from personal experience, Ms. Spellings said she found "plenty" of information on dining-hall food, intramural sports, and campus architecture, but she had a harder time finding out "which courses to take, how long it takes the average student to graduate, and whether it's a better deal to graduate from a less-expensive state school in six years or a private school in four." "I learned just how confusing the college process can be for parents," she said. "And I'm the secretary of education!" The head of the 19-member commission is Charles Miller, a former chairman of the University of Texas System's Board of Regents. Ten other members have ties to higher education, including four presidents emeritus: James J. Duderstadt of the University of Michigan at Ann Arbor, Arthur J. Rothkopf of Lafayette College, Louis W. Sullivan of the Morehouse School of Medicine, and Charles M. Vest of the Massachusetts Institute of Technology. The commission also includes three faculty members: Arturo Madrid, a professor of humanities at Trinity University, in Texas; Richard Vedder, a professor of economics at Ohio State University and an adjunct scholar at the American Enterprise Institute; and Robert Zemsky, a professor of education and chairman and chief executive of the Learning Alliance for Higher Education at the University of Pennsylvania. But the panel does not include any leaders of teacher unions, an omission that troubles at least one union. 
In a letter sent to Secretary Spellings on Monday, the American Federation of Teachers asked that the commission be expanded to include "the perspective of people who work on the front lines with students, day in and day out." Other members of the commission are: * Carol Bartz, chairman of the board, Autodesk Inc. * Nicholas Donofrio, executive vice president for innovation and technolgy, International Business Machines Corporation. * Gerri Elliott, corporate vice president, worldwide public sector, Microsoft Corporation. * Jonathan Grayer, chairman and chief executive, Kaplan Inc. * Kati Haycock, director, the Education Trust. * James B. Hunt, Jr. chairman, Hunt Institute for Educational Policy and Leadership, and a former governor of North Carolina. * Robert Mendenhall, president, Western Governors University. * Charlene R. Nunley, president, Montgomery College. * Richard Stephens, senior vice president, human resources and administration, Boeing Company. * Sara Martinez Tucker, president and chief executive, Hispanic Scholarship Fund. * David Ward, president, American Council on Education. Mr. Miller said the commission would hold its first meeting in Washington in mid-October. That meeting will be followed by four others around the country. The panel must submit its final report to the secretary by August 1, 2006. _________________________________________________________________ Background articles from The Chronicle: * [71]Notable Provisions in the Senate Version of the Higher Education Act (9/23/2005) * [72]Senate Committee Approves a Higher-Education Bill Favorable to Students and Traditional Colleges (9/16/2005) * [73]House Committee Approves Bill to Extend Higher Education Act (8/5/2005) * [74]Plan to Track Students Steps Into Political Quicksand (5/6/2005) * [75]Accountability Panel Says Government Should Collect More Data on Students (3/18/2005) * [76]The Education Secretary's Knowledge Campaign (2/18/2005) Opinion: * [77]Colleges Must Get Used to Collaborating With Congress (7/15/2005) * [78]A Whining View of Higher Education (6/17/2005) * [79]Higher Education Isn't Meeting the Public's Needs (10/15/2004) References 71. http://chronicle.com/weekly/v52/i05/05a03301.htm 72. http://chronicle.com/weekly/v52/i04/04a03601.htm 73. http://chronicle.com/weekly/v51/i48/48a02102.htm 74. http://chronicle.com/weekly/v51/i35/35a00101.htm 75. http://chronicle.com/weekly/v51/i28/28a02501.htm 76. http://chronicle.com/weekly/v51/i24/24a02701.htm 77. http://chronicle.com/weekly/v51/i45/45b01601.htm 78. http://chronicle.com/weekly/v51/i41/41b01101.htm 79. http://chronicle.com/weekly/v51/i08/08b00601.htm E-mail me if you have problems getting the referenced articles. From checker at panix.com Sun Sep 25 20:03:47 2005 From: checker at panix.com (Premise Checker) Date: Sun, 25 Sep 2005 16:03:47 -0400 (EDT) Subject: [Paleopsych] Business Roundtable: Tapping America's Potential Message-ID: Actually, there are only five articles. Google <"polite totalitarianism"> and then for more. Business Roundtable: Tapping America's Potential: The Education for Innovation Initiative [This is much more explicitly fascistic than the statement of Education Secretary Spellings just sent. She did, however, endorse the goal of this report, to double the number of science, technology, engineering, and mathematics (STEM) graduates by 2015, as "doable." Of course, the degrees could simply be awarded like high school diplomas are to those who show up (which is 80% of life, according to Woody Allen). 
Or, free tuition and other expenses, plus a stipend, could be paid only to these majors. I had to look up the figures. There are 1.2 million undergraduate degrees awarded every year in the U.S. now. The report calls for a doubling of STEM graduates from 0.2 to 0.4 million. It would be quite a shift. [This report goes back to July, and I thought that that would be the end of it. I was wrong, as is seen from the new commission on higher education that Spellings created yesterday. Now I have vague memories of this Business Roundtable as a product of Kennedy's New Frontier, the subject of probably the best known of Ayn Rand's Ford Hall Forum lectures, "The Fascist New Frontier." They seem to have disappeared sometime during the Reagan years, at least from casual public view. Now they are back. [The whole problem with such national goals is that the economy is a system, a process. Products emerge from this process, to be sure, but it is the process that is the subject of law, not the products. The number of annual STEM graduates is what it is now because students look at job prospects, how much the jobs will pay, how rewarding they are, and what continued employment prospects will be. The salaries depend on how consumers value the products which STEM graduates help produce. [Capitalism means control, not by capitalists, but by consumers. If consumers value learning yoga techniques more than having better designed chairs, then yoga instructors will make more than ergonomic engineers. (This is not an exact statement, but you get the idea.) If the Business Roundtable's goal does come about, which would take some doing, then the salaries of STEM graduates would fall, since the annual supply of new entrants would double. Future students would shift to other majors. And STEM graduates would not be employed in STEM jobs. Some will wind up as yoga instructors. [The plan is fascistic--using the word to describe an economy with private ownership of the means of production but with government control--because of these very national goals. It is a national goal to be competitive with other countries. Yoga instruction is not exported and therefore does not contribute to any national goal. It has individual but not collective value. By contrast, ergonomic chairs can be exported and so have a collective value, as well as an individual one. [Both yoga and ergonomic chairs promote well-being. [It could be, however, that the rules governing the economy do not reward STEMs as well as they ought to. My uncle, George Forman, invented the reversible pitch propeller, an invention iirc that was used until the end of the propeller era. He got a bonus of $25. That's the agreement he had with his employer, you say. But these laws--process now, not product--can be modified. [I can expound on this but would prefer for now just to send an article that has influenced me enormously, Rutledge Vining, "On the problem of recognizing and diagnosing faultiness in the observed performance of an economic system," Journal of Law and Economics 5 (1962): 165-184, as a PDF only. If someone will scan it and turn it to txt, that would be most excellent! He was one of my great teachers. [Expect to see more of this as the new "civic" generation continues to roll into place. The generational change has little to do with Republicans or Democrats.] GOAL: To double the number of science, technology, engineering, and mathematics graduates by 2015. CONTENTS 1. A Letter to Leaders Who Care about America's Future 2. A Statement by...
5. From Rhetoric to Action 7. Why Education Reform Is Necessary but Insufficient 8. Recommendations 10. Conclusion 14. Endnotes 15. TAPPING AMERICA'S POTENTIAL The Education for Innovation Initiative July 2005 To Leaders Who Care about America's Future: Fifteen of our country's most prominent business organizations have joined together to express our deep concern about the United States' ability to sustain its scientific and technological superiority through this decade and beyond. To maintain our country's competitiveness in the 21st century, we must cultivate the skilled scientists and engineers needed to create tomorrow's innovations. Our goal is to double the number of science, technology, engineering and mathematics graduates with bachelor's degrees by 2015.1 The United States is in a fierce contest with other nations to remain the world's scientific leader. But other countries are demonstrating a greater commitment to building their brainpower. Consider these facts: Increasing international competition: By 2010, if current trends continue, more than 90 percent of all scientists and engineers in the world will be living in Asia.2 South Korea, with one-sixth of our population, graduates as many engineers as the United States.3 Increasing reliance on and reduced availability of foreign talent to work in the United States: More than 50 percent of all engineering doctoral degrees awarded by U.S. engineering colleges are to foreign nationals.4 However, security concerns in the United States are reducing the number of foreign students, while competition for this talent from other countries and the opportunity to return to their home countries to work is increasing. Alarming domestic trends: The number of engineering degrees awarded in the United States is down 20 percent from the peak year of 1985.5 Although U.S. fourth graders score well against international competition, they fall near the bottom or dead last by 12th grade in mathematics and science, respectively.6 Tapping America's Potential: The Education for Innovation Initiative Our organizations feel strongly that the United States must respond to this challenge as energetically as we did to the Soviet Union's launching of Sputnik in the 1950s. To remain the technological leader in the 21st century, we must establish and achieve an ambitious goal: We must double today's science, technology, engineering and mathematics graduates with bachelor's degrees by 2015. Current federal education reform programs, such as No Child Left Behind, and state efforts to redesign high schools provide a foundation that we can build on. However, to sustain American competitiveness in science and engineering, we need a focused, long-term, comprehensive initiative by the public and private sectors to: 1. Build public support for making improvement in science, technology, engineering and mathematics performance a national priority. 2. Motivate U.S. students and adults, using a variety of incentives, to study and enter science, technology, engineering and mathematics careers, with a special effort geared to those in currently underrepresented groups. 3. Upgrade K-12 mathematics and science teaching to foster higher student achievement, including differentiated pay scales for mathematics and science teachers. 4. Reform visa and immigration policies to enable the United States to attract and retain the best and brightest science, technology, math and engineering students from around the world to study for advanced degrees and stay to work in the United States. 
5. Boost and sustain funding for basic research, especially in the physical sciences and engineering. The recommendations above and the statement, "Tapping America's Potential: The Education for Innovation Initiative," that follows echo the alarm expressed by numerous prestigious public and private groups about the need to inspire, recruit and train a larger domestic pool of technical talent. This is so vital for the security and continued prosperity of our country that we can no longer delay action. We are calling on business leaders to unite with government officials at all levels--national, state and local--to create the momentum needed to achieve this goal. We are committed to providing the leadership and sustained effort needed to help the American people realize the dimensions of the problem and the urgent need for solutions. Sincerely, William T. Archey, President & CEO, AeA Brian K. Fitzgerald, Executive Director, Business-Higher Education Forum John J. Castellani, President, Business Roundtable Deborah L. Wince-Smith, President, Council on Competitiveness Bruce Mehlman, Executive Director, Computer Systems Policy Project Harris N. Miller, President, Information Technology Association of America Rhett Dawson, President, Information Technology Industry Council Roger Campos, President & CEO, Minority Business RoundTable John Engler, President, National Association of Manufacturers Lawrence P. Farrell, Jr., President & CEO, National Defense Industrial Association George M. Scalise, President, Semiconductor Industry Association Ken Wasch, President, Software & Information Industry Association Lezlee Westine, President & CEO, TechNet Matthew J. Flanigan, President, Telecommunications Industry Association Thomas J. Donohue, President & CEO, U.S. Chamber of Commerce A Statement by ... AeA, Business Roundtable, Business-Higher Education Forum, Computer Systems Policy Project, Council on Competitiveness, Information Technology Association of America, Information Technology Industry Council, Minority Business RoundTable, National Association of Manufacturers, National Defense Industrial Association, Semiconductor Industry Association, Software & Information Industry Association, TechNet, Telecommunications Industry Association, and the U.S. Chamber of Commerce Almost 50 years ago, the Soviet Union shocked Americans by launching Sputnik, the first Earth orbit satellite. The U.S. response was immediate and dramatic. Less than a year later, President Eisenhower signed into law the National Defense Education Act, a major part of the effort to restore America's scientific pre-eminence.7 Today, our nation faces a more serious, if less visible, challenge. One of the pillars of American economic prosperity--our scientific and technological superiority--is beginning to atrophy even as other nations are developing their own human capital. If we wait for a dramatic event--a 21st-century version of Sputnik--it will be too late. There may be no attack, no moment of epiphany, no catastrophe that will suddenly demonstrate the threat. Rather, there will be a slow withering, a gradual decline, a widening gap between a complacent America and countries with the drive, commitment and vision to take our place. History is replete with examples of world economies that once were dominant but declined because of myopic, self-determined choices. The United States is at such a critical point in our own history. 
Virtually every major respected organization representing business, research and education, as well as government science and statistics agencies and commissions,8 has extensively documented the critical situation in U.S. science, technology, engineering and mathematics. The indicators range from measurable declines in U.S. innovation, such as patents and scientific articles, to soaring numbers of students in Asia majoring in these fields, to U.S. students' lagging interest and measured performance in math and science. * Foreign competition: China not only graduates four times as many engineers as the United States,9 but it also offers lucrative tax breaks to attract companies to conduct research and development (R&D) in the country.10 * Interest in engineering: Out of the 1.1 million high school seniors in the United States who took a college entrance exam in 2002, just under 6 percent indicated plans to pursue a degree in engineering--nearly a 33 percent decrease in interest from the previous decade.11 * Student achievement: On a recent international assessment of 15-year-olds' math problem-solving skills, the United States had the smallest percentage of top performers and the largest percentage of low performers compared to the other participating developed countries.12 This is not surprising when nearly 70 percent of middle school students are assigned to teachers who have neither a major nor certification in mathematics.13 * Investment in basic research: In the United States, since 1970, funding for basic research in the physical sciences has declined by half (from 0.093 percent to 0.046 percent) as a percentage of the gross domestic product (GDP).14 For most of the 20th century, the American education system provided a substantial part of the talent and proficiency needed to sustain and improve our way of life. In addition, many foreign scientists were attracted to pursue research in the United States by the American scientific enterprise's top-notch facilities and financial support, and by their own desire to escape totalitarian regimes and live in a free society. Today, however, as the U.S. economy becomes even more reliant on workers with greater knowledge and technological expertise, the domestic supply of qualified workers is not keeping up with the skill demands. Employers are increasingly interested in hiring people who not only can execute well but also can create the next wave of innovation. One economist estimates that "trailing other developed countries on education measures may reduce U.S. economic growth by as much as a half percentage point a year."15 All projections suggest that the discrepancy between supply and demand of domestic talent will grow more pronounced. In the face of the declining interest and proficiency of Americans in science, math and engineering, American industry has become increasingly dependent--some would say overly dependent--on foreign nationals to fill the demand for talent in a variety of fields that require strong backgrounds in science, technology, engineering and mathematics. A number of developments--including heightened security after September 11, growing competition from other countries for the same foreign talent and the technological capacity for foreign talent to work in their home countries--have underscored the need for greater scientific and technological self-sufficiency in our country. The United States has always welcomed the best and brightest from other countries to study and work here, and we should continue to do so.
We cannot and should not, however, rely so heavily on foreign talent to fill critical positions in teaching, research and industry. From Rhetoric to Action A remarkable consensus emerges from the recommendations in recent reports and statements about what the United States must do to maintain its pre-eminence in science and engineering and to prepare its future workforce for the high-skilled jobs created by a growing U.S. economy. The CEOs, university presidents, members of Congress, Cabinet secretaries, governors, Nobel Laureates, scientists, mathematicians, researchers and educators on different prestigious commissions and panels all agree that the United States risks a declining standard of living if America postpones taking aggressive, strategic action. The sense of urgency among those who see the problem at home and increased competition from abroad provides a catalyst for action. Those who have studied or experienced this challenge must provide leadership to build a broader understanding of what is at stake, as well as provide support to undertake a corrective course. Although numerous policy initiatives and programs are under way, none matches the coordinated vision, concentrated energy, attention and investment that emerged from the shock Americans faced when the Soviet Union beat the United States into space with Sputnik in 1957. We need a 21st-century version of the post-Sputnik national commitment to strengthen science, technology, engineering and math education. We need a public/private partnership to promote, fund and execute a new National Education for Innovation Initiative. It must be broader than the 1958 National Defense Education Act because federal legislation is only one component of a larger, more comprehensive agenda. Tapping America's Potential: The Education of Innovation Initiative The federal government must play a critical role in this endeavor. We understand that states and local communities determine most of the funding and governance of our public education system. We know that the private sector can and must do more. Nevertheless, this is a national problem that demands national leadership and a sense of national purpose to create the impetus for crucial state, local, private and individual action. We firmly believe that the federal government can maintain fiscal discipline and restrain discretionary spending while also making "smart investments" to secure our nation's future. It will require making hard choices, but the resources can be found if the national interest drives decisions. We recognize that we will have to make our case to the American people to build the political support for moving this issue to the top of the national agenda. Why Education Reform Is Necessary but Insufficient The United States spends more than $455 billion annually for elementary and secondary education.16 There is disagreement over whether the amount is enough and whether it is well-spent, but there is no argument that resources and reform must work in tandem to produce acceptable results. Past national and state efforts to improve U.S. math and science achievement clearly demonstrate that they cannot be isolated from the need to improve the overall quality and results of the entire U.S. education system, pre-K through 16. 
That is why the business community supports high-quality early childhood education; implementation of the No Child Left Behind Act; the Action Agenda for Improving America's High Schools, adopted at the 2005 National Education Summit on High Schools;17 the moral and economic imperative to address the reality that close to a third of teenagers drop out before they graduate from high school;18 expansion of charter schools; and greater access to and completion of higher education. The current local, state and national focus that No Child Left Behind has brought to closing the achievement gap between majority and minority students was long overdue and is beginning to pay off.19 These education reform initiatives represent significant progress. However, they must be supplemented by the recommendations in this paper because of four unique challenges that science, technology, engineering and math improvement must address: 1. Depletion of the teacher talent pool by the private sector: College graduates who major in math and science can earn far more as private sector employees than as teachers.20 Higher-aptitude students also find performance- based compensation in the private sector more appealing than the traditional teacher salary schedule based on years of experience and degrees.21 2. Cyclical employment trends: Labor supply in these fields is particularly sensitive to changes in the economy. Growth and decline in the number of annual majors in science and engineering closely track with hiring and layoff cycles; the supply of graduates typically lags behind the pace of economic recovery. To counter the impact of these trends on students' choices of majors, high school and college students need better information about the wide range of opportunities that science, technology, engineering and math degrees open up to them.22 3. Government security needs: U.S. government agencies and firms that handle sensitive national security research and development must hire qualified American citizens, a requirement that presents a further demand for domestic talent. 4. Baby boom retirement: More than 50 percent of the current science and engineering workforce is approaching retirement. It must be replaced by a larger pool of new talent from a more diverse population. Tapping America's Potential: The Education for Innovation Initiative Recommendations From the U.S. Commission on National Security/21st Century's report in 2001 to the Business' Higher Education Forum's report in 2005, we identified a core set of recommendations in a dozen recent reports that we can begin to initiate, even in this tight budget year. The recommendations may need to begin incrementally. However, to reach our goal of doubling the number of science, technology, engineering and math graduates by 2015, we must focus as quickly as possible in the years ahead on five critical areas that affect the choices made by students now in the pipeline. (For each action proposed within the five areas, we identify in parentheses who has primary responsibility.) 1. Build public support for making science, technology, engineering and math improvement a national priority. Launch a campaign to help parents, students, employees and community leaders understand why math and science are so important to individual success and national prosperity. 
(Business) Expand the State Scholars Initiative to encourage students to take rigorous core academic courses in high school and provide role models and other real world examples of the work that engineers and scientists do.23 (Business) 2. Motivate U.S. students and adults to study and enter science, technology, engineering and mathematics careers, with a special effort geared to those in currently underrepresented groups. Create more scholarships and loan-forgiveness programs for students who pursue two-year, four-year and graduate degrees in science, technology, math and engineering (including students who plan to teach math and science, particularly in high-poverty schools). Build on existing programs such as Science, Mathematics and Research for Transformation (SMART) at the Department of Defense;24 the Science and Technology Scholarship Program (STSP) at NASA;25 Robert Noyce Scholarships at the National Science Foundation (NSF);26 and federal loan forgiveness programs that provide up to $17,500 for secondary math and science teachers. Supplement Pell Grants for eligible students who successfully complete core academic courses in high school.27 (Federal, State, Business) July 2005 Increase the retention rate of undergraduates in science, technology, engineering and math majors by expanding programs such as NSF's Science, Technology, Engineering and Mathematics Talent Expansion Program (STEP Tech Talent)28 and by offering programs such as the Professional Science Masters that encourage college graduates to pursue fields outside of academia that combine science and/or math with industry needs.29 Encourage private sector involvement in consortia of industries and universities that establish clear metrics to increase the number of graduates. (Higher Education, Business, Federal, State) Eliminate the security clearance backlog that discourages many talented U.S. citizens--graduating students and adults--from entering key national security science, technology, engineering and math careers by providing an expedited clearance process. (Federal) Establish prestigious fellowships for exceptional recent college graduates or those at mid-career that lead to certification and a five-year commitment to teach math or science in schools with high-poverty populations.30 (Federal, State, Business) Create opportunities for high-achieving math and science students, such as advanced courses, math or science immersion experiences, corporate internships, charter schools, local magnet programs and regional/state magnet schools. (State, Business) Adopt curricula that include rigorous content as well as real world engineering and science experiences so that students learn what it means to do this work, what it takes to get there, and how exciting these fields are. (District, Business) 3. Upgrade K-12 math and science teaching to foster higher student achievement. Promote market- and performance-based compensation and incentive packages to attract and retain effective math and science teachers. Provide the flexibility for high school teachers, retirees and other qualified professionals to teach these subjects part time.31 Resources in No Child Left Behind that can be used to develop alternative teacher compensation systems and the proposed federal teacher incentive program are particularly crucial for helping to address shortages of math and science teachers. 
(Business, District, State, Federal) Tapping America's Potential: The Education for Innovation Initiative Support cost-effective professional development and other technical assistance to fill gaps in teachers' content knowledge and prepare them to teach the content effectively. Promote and strengthen use of existing resources in federal education laboratories, regional technical assistance centers, No Child Left Behind, and focused Math and Science Partnerships (MSP) to support best practices, with a priority on those who teach math in schools that are not making "adequate yearly progress" (AYP). (State, District, Higher Education, Federal, Business) Include incentives in the Higher Education Act and in state policies for colleges and universities to produce more math, science and engineering majors and to strengthen preparation programs for prospective math and science teachers. (Federal, State, Higher Education) Strengthen and enforce the highly qualified teacher provisions in No Child Left Behind for math and science teachers to ensure that they have the requisite knowledge in the subjects they are assigned to teach. (Federal, State) Launch a "Math Next" initiative as a logical next step to the U.S. Department of Education's focus on Reading First. (Federal, State) Provide high-quality online alternatives and postsecondary options for students in any middle school or high school that does not offer advanced math and science courses. (State) 4. Reform visa and immigration policies to enable the United States to attract and retain the best and brightest science, technology, math and engineering students from around the world to study for advanced degrees and stay to work in the United States. Provide an expedited process to obtain permanent residence for foreign students who receive advanced degrees in these fields at U.S. universities. (Federal) Ensure a timely process for foreign students who want to study science, technology, engineering and math fields at U.S. universities to obtain the necessary visas by clearing Department of Homeland Security requirements. (Federal) 5. Boost and sustain funding for basic research, especially in the physical sciences and engineering. Reverse declines in the federal share of total R&D spending, particularly for basic research in the physical sciences and engineering at the NSF, National Institute of Standards and Technology (NIST), U.S. Department of Defense basic research programs,32 and U.S. Department of Energy Office of Science, by adding a minimum of 7 percent per year to enable research to keep up with growth and inflation.33 (Federal) As a first step, all of the federal Cabinet secretaries with a stake in this issue--Defense, Education, Homeland Security, Commerce, Labor and Energy--should convene to map out how they can best mobilize to address the problem. To succeed, a strategic approach to the reauthorizations of relevant federal programs, a governmentwide focus across federal and state agencies, dynamic public-private partnerships, the frequent use of the bully pulpit, and vigorous private sector leadership and investment will be required. All of these efforts should be driven by a commitment to inspire and educate a new generation of mathematically and scientifically adept Americans. Conclusion This statement focuses on actions that can be initiated this year. Is this enough to solve the problem? Absolutely not. 
Clearly, a successful national Education for Innovation Initiative will need a comprehensive, long-term plan developed in partnership with the states. However, we must begin moving forward now. Business leaders are united around this agenda. We will work with the administration, members of Congress, governors, educators, colleges and universities, and member companies to identify specific legislative, regulatory, programmatic and corporate philanthropic vehicles to adopt these recommendations. We will provide the leadership needed to help the American public realize the dimensions of the problem and the urgent need to implement solutions. We must not disregard our history nor forget who we are. We are the people who pioneered in the air, built the first mass production assembly line, discovered vaccines for polio, harnessed the power of the atom, first set foot on the moon, and developed the best private and public biomedical research enterprise in the world. We are still that same people, still equal to the challenge if only we resolve to meet it. As World War II was drawing to a close, Congress approved the GI Bill, which provided billions of dollars in education and training benefits to nearly 10 million veterans between 1944 and 1956. Perhaps no greater investment in human capital has been made in American history. The return to American taxpayers on that investment has been incalculable. This generation now faces an entirely new challenge, both at home and abroad. Any number of countries in Asia and Europe are educating and training their citizens and competing with--and, in several cases, beginning to surpass--the United States for talent to develop new technologies, new cures, new frontiers. If we take our scientific and technological supremacy for granted, we risk losing it. What we are lacking at the moment is not so much the wherewithal to meet the challenge, but the will. Together, we must ensure that U.S. students and workers have the grounding in math and science that they need to succeed and that mathematicians, scientists and engineers do not become an endangered species in the United States. Endnotes 1. The baseline for the goal is taken from the most recent data (2001) in National Science Board's Science and Engineering Indicators, 2004: 2001 bachelor's degrees earned by U.S. citizens/permanent residents: * 14,048 in physical sciences * 4,001 in earth, atmospheric and ocean sciences * 63,528 in biological sciences * 11,256 in math * 34,502 in computer sciences * 17,986 in agricultural sciences * 55,003 in engineering TOTAL: 200,324 Therefore, the goal is 400,000 bachelor's degrees earned by U.S. citizens/permanent residents by 2015. 2. Prediction by Richard E. Smalley, Gene and Norman Hackerman Professor of Chemistry and Professor of Physics & Astronomy, Rice University, in a PowerPoint presentation, "Nanotechnology, the S&T Workforce, Energy, and Prosperity," to the President's Council of Advisors on Science and Technology (PCAST), March 3, 2003. Available at http://cohesion.rice.edu/NaturalSciences/Smalley/emplibrary/PCAST%20March%203,%202003.ppt#432,8,Slide8. 3. National Science Board, Science and Engineering Indicators, 2004. Volume 2, Appendix Table 2-34. 4. Ibid. Appendix Table 2-28. 5. Ibid. Appendix Table 2-22. 6. U.S. Department of Education, National Center for Education Statistics, Trends in International Mathematics and Science Study. Fourth- and eighth-grade results are available at http://nces.ed.gov/pubs2005/2005005.pdf.
Twelfth-grade results are available at http://nces.ed.gov/ pubs98/98049.pdf. 7. Enacted in 1958 and funded initially for $115,300,000, the National Defense Education Act (NDEA) provided support to all levels of education, public and private, in the United States. Its primary focus was on the advancement of student knowledge in mathematics, science and modern foreign languages. Institutions of higher education were provided with 90 percent of capital funds to use for low-interest loans to students. K-12 teachers educated with NDEA support were later able to get part of their loan forgiven for each year of teaching (5-7 years, forgiveness for amounts of 50-100 percent). NDEA also gave general support for improvements to elementary and secondary education, with statutory prohibitions against federal control or influence over curriculum, pedagogy, administration or personnel at any educational institution. Many individuals in the STEM workforce--those in their 50s and 60s today--cite NDEA as a major source of support for their postsecondary degrees. 8. A partial listing includes: Business-Higher Education Forum, A Commitment to America's Future: Responding to the Crisis in Mathematics and Science Education, February 2005; AEA, Losing the Competitive Advantage? The Challenge for Science and Technology in the United States, February 2005; Task Force on the Future of American Innovation, The Knowledge Economy: Is the United States Losing Its Competitive Edge? February 16, 2005; Council on Competitiveness, Innovate America, National Innovation Initiative Report: Thriving in a World of Challenge and Change, December 2004; Learning for the Future: Changing the Culture of Math and Science Education to Ensure a Competitive Workforce, Statement by the Research and Policy Committee of the Committee for Economic Development, 2003; President's Council of Advisors on Science and Technology (PCAST), Assessing the U.S. R&D Investment, 2002; Building Engineering & Science Tapping America's Potential: The Education for Innovation Initiative Talent, The Quiet Crisis: Falling Short in Producing American Scientific and Technical Talent, September 2002; Phase III Report of the U.S. Commission National Security/21st Century (The Hart-Rudman Commission), Road Map for National Security: Imperative for Change, March 15, 2001; National Commission on Mathematics and Science Teaching for the 21st Century (Glenn Commission), Before It's Too Late: A Report to the Nation from the The National Commission on Mathematics and Science Teaching for the 21st Century (Glenn Commission), September 27, 2000. 9. National Science Board, Science and Engineering Indicators, 2004. Appendix Table 2-34. 10. Matthew Kazmierczak, Losing the Competitive Advantage? The Challenge for Science and Technology in the United States (Washington, DC: AEA, 2005). 11. Richard J. Noeth et al., Maintaining a Strong Engineering Workforce: ACT Policy Report (Iowa City: ACT, Inc., 2003). Available at http://www.act.org/path /policy/pdf/engineer.pdf. 12. U.S. Department of Education, National Center for Education Statistics, International Outcomes of Learning in Mathematics Literacy and Problem Solving: 2003 PISA Results from the U.S. Perspective (Washington, DC: U.S. Department of Education, 2004). 13. Ibid, Qualifications of the Public School Teacher Workforce: Prevalence of Out-of-Field Teaching 1987-88 to 1999-2000--Statistical Analysis Report. Table 1. 14. 
American Association for the Advancement of Science, Report XXX: Research and Development FY 06, Chapter Two, "Historical Trends in Federal R&D." Available at http://www.aaas.org/spp/rd/ 06pch2.htm. 15. June Kronholz, "Economic Time Bomb: U.S. Teens Are Among the Worst at Math," The Wall Street Journal, December 7, 2004. 16. U.S. Department of Education, National Center for Education Statistics, Revenues and Expenditures for Public Elementary and Secondary Education: School Year 2002-03 (Washington, DC: U.S. Department of Education, May 2005). Available at http://nces.ed.gov/pubs2005/ 2005353.pdf. 17. The Agenda for Action released at the 2005 National Education Summit on High Schools calls on governors and business and education leaders to develop a comprehensive plan for their states to restore value to the high school diploma to ensure graduates are college- and work-ready, redesign the American high school, give high school students the excellent teachers and principals they need, hold high schools and colleges accountable for student success, and streamline educational governance. Available at http://www.achieve.org/achieve.nsf/ 2005Summit?OpenForm and http://www.nga.org. 18. Jay P. Greene and Marcus A. Winters, Public High School Graduation and College Readiness Rates: 1991-2002 (New York: Manhattan Institute for Policy Research, February 2005); Christopher B. Swanson, Who Graduates? Who Doesn't? A Statistical Portrait of Public High School Graduation, Class of 2001 (Washington, DC: Urban Institute, 2004); Andrew Sum, Paul Harrington et al., The Hidden Crisis in the High School Dropout Problems of Young Adults in the U.S.: Recent Trends in Overall School Dropout Rates and Gender Differences in Dropout Behavior (Washington, DC: Business Roundtable, February 2003). Available at http://www. businessroundtable.org. 19. The National Assessment of Educational Progress (NAEP) long-term trend assessment scores released on July 14, 2005, show gains among 9year-olds in reading, as well as a closing of the achievement gap in reading for African American and Hispanic students. The NAEP data also show significant improvement and a closing of the achievement gap in mathematics among 9- and 13-year-olds. 20. National Council on Teacher Quality (NCTQ), Higher Pay for Math, Science and Other Shortage Subjects (Washington, DC: NCTQ). 21. Carolyn Hoxby, "Changing the Profession," Education Next, Hoover Institution, 2001. Available at http://www.educationnext.org/2001sp/57.html. 22. For example, there is a high economic return for an engineering degree even if a graduate works in a non- engineering field. From Neeta P. Fogg, Paul E. Harrington and Thomas F. Harrington, College Majors Handbook with Real Career Paths and Payoffs: The Actual Jobs, Earnings, and Trends for Graduates of Sixty College Majors, 2nd ed. (Indianapolis: JIST Publishing, 2004). 23. The State Scholars Initiative is a business-led effort that focuses on preparing high school students for college and careers through rigorous coursework. The Initiative is currently offered in 14 states. Available at http://www. centerforstatescholars.org. 24. The Department of Defense Science, Mathematics and Research for Transformation (SMART) Scholarship provides financial assistance to students pursuing degrees in science, math and engineering fields in return for a commitment to work for the Defense Department. Available at http://www.asee.org/resources/ fellowships/smart/. 25. 
The Science and Technology Scholarship Program (STSP) is currently being developed by NASA. The scholarship-for-service program will provide scholarship and internship opportunities to undergraduate students pursuing degrees in engineering, mathematics, computer science and physical/life sciences. Students will compete for scholarship awards of up to $20,000 per year in exchange for a commitment to work full time at a NASA Center or one of its affiliates upon graduation. Available at http://education.nasa.gov/divisions/higher/overview/F_pathfinder_scholarship.html. 26. The Robert Noyce Scholarship Program at NSF provides funds to institutions of higher education to support scholarships, stipends and programs for talented science, technology, engineering and mathematics majors and professionals to become K-12 math and science teachers in high-need K-12 schools. Available at http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5733&org=NSF. 27. Signed into law on October 30, 2004, by President Bush, the Taxpayer-Teacher Protection Act (P.L. 108-409) authorizes up to $17,500 in loan forgiveness to eligible, highly qualified teachers in special education, secondary math or secondary science. Available at http://www.ifap.ed.gov/dpcletters/GEN0414.html. 28. The goal of the Science, Technology, Engineering, and Mathematics Talent Expansion Program (STEP), created by the Tech Talent legislation, is to increase the number of students--U.S. citizens or permanent residents--receiving associate's or bachelor's degrees in science, technology, engineering and mathematics. Available at http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5488. 29. The Professional Science Master's is a degree in science or mathematics for students interested in a wider variety of career options than provided by current graduate programs in the two subjects. Available at http://www.sciencemasters.com/. 30. The National Commission on Mathematics and Science report, Before It's Too Late: A Report to the Nation from The National Commission on Mathematics and Science Teaching for the 21st Century, identifies goals for improving mathematics and science teaching. Available at http://www.ed.gov/inits/Math/glenn/report.pdf. 31. The Teaching Commission's report, Teaching at Risk: A Call to Action, identifies the need to differentiate compensation and develop incentives to recruit and retain teachers in shortage fields. Available at http://www.theteachingcommission.org/publications/FINAL_Report.pdf. 32. The specific Department of Defense programs are 6.1 (basic research) and 6.2 (applied research). 33. The federal effort in research must keep pace with the overall growth of the economy, not fall, as it has outside of biomedical research. The 7 percent figure is the sum of 3 percent (real GDP growth) and 4 percent (the NIH and higher education price index). The Business Roundtable 1717 Rhode Island Avenue, NW, Suite 800 Washington, DC 20036-5610 Telephone 202.872.1260 Facsimile 202.466.3509 Website businessroundtable.org From checker at panix.com Sun Sep 25 20:06:07 2005 From: checker at panix.com (Premise Checker) Date: Sun, 25 Sep 2005 16:06:07 -0400 (EDT) Subject: [Paleopsych] NS: Dark-matter basketballs could explain a lot Message-ID: There were six articles on fascism, after all. Here's something entirely different. Dark-matter basketballs could explain a lot http://www.newscientist.com/article.ns?id=mg18725174.800&eedId=space_atom03 * 20 September 2005 * Marcus Chown THE universe's invisible matter may not be made of exotic unknown particles after all.
Instead, "dark" matter could be clumps of the ordinary stuff trapped in a previously unsuspected state of the vacuum of space. The dark-matter balls envisaged by Colin Froggatt of the University of Glasgow, UK, and Holger Nielsen of the Niels Bohr Institute in Copenhagen, Denmark, are relics of a vacuum state which theory suggests could have been widespread in the first second after the big bang (www.arxiv.org/abs/astro-ph/0508513). Each ball would not be much bigger than a basketball and atomic nuclei would have formed inside them just as they do everywhere else in the universe, only bound by a stronger nuclear force. The balls would be much denser than ordinary matter, with each one weighing 100 million tonnes. To account for the known density of dark matter in the cosmos, there would have to be just one such ball drifting through every volume of space about the size of our solar system. The new theory makes one important prediction - that there should be five times as much dark matter as ordinary matter. "That's exactly what is observed," says Froggatt. Ben Allanach of CERN, the European centre for particle physics near Geneva, Switzerland, admits the idea is wildly speculative. "But I can't think of anything specifically to rule it out," he says. From issue 2517 of New Scientist magazine, 20 September 2005, page 10 From thrst4knw at aol.com Mon Sep 26 17:22:15 2005 From: thrst4knw at aol.com (Todd I. Stark) Date: Mon, 26 Sep 2005 13:22:15 -0400 Subject: [Paleopsych] Global Gulag: Steven Hassan: Political Propaganda is Cult Brainwashing In-Reply-To: References: Message-ID: <43382E47.1020209@aol.com> I'll say that Hassan has always seemed to me to be pretty consistent with the mainstream theoretical stance in social psychology, although his focus on destructive groups and on dealing with their victims gives his work a slant of its own. There is also a thread in psychology that is very critical of the "brainwashing" model because it takes the concept of influence so far with so few distinctions. Making the transition from 'suggestion' to constructing a new ego sense is a big leap both in theory and in practice, and the Lewin-Lifton-Singer-Hassan style models of influence take that leap easily because of their emphasis on clinical observation rather than experimental work. Most of the suggestion research is experimental. The clinical traditions around suggestion and influence have always been controversial in some regards. The mainstream tradition of hypnosis and suggestibility research is not nearly as well known as the clinical horror stories. Hassan's work is built partly on both. Todd Premise Checker wrote on 9/24/2005, 9:55 PM: > Steven Hassan: Political Propaganda is Cult Brainwashing > [Thanks to Laird for this. There are hundreds of theories about > persuasion. > Apparently this is just one man's theory. I don't know how his > theories compare > with other theories, how compatible they are with each other, and so on.] > > A successful induction by a destructive cult displaces a person's former > identity and replaces it with a new one. That new identity may not be > one that > the person would have freely chosen under her own volition.
> > -Steven Hassan From checker at panix.com Mon Sep 26 23:53:39 2005 From: checker at panix.com (Premise Checker) Date: Mon, 26 Sep 2005 19:53:39 -0400 (EDT) Subject: [Paleopsych] NYT: Fossils Offer Support for Meteor's Role in Dinosaur Extinction Message-ID: Fossils Offer Support for Meteor's Role in Dinosaur Extinction http://www.nytimes.com/2005/09/20/science/space/20mete.html By WILLIAM J. BROAD No guns materialized. Even so, the scientists kept a low profile while digging, eager to avoid security forces from the nearby air base - an important military site that helped provoke the Cuban missile crisis. The diggers had no permit and no interest in being asked to explain their presence. In the end, they found rare fossils that are shedding new light on what wiped out the dinosaurs at the end of the Cretaceous period 65 million years ago. For more than a decade, the standard view has envisioned a speeding object from space that crashed into the earth and kicked up enough dust and rock around the globe to blot out the sun. The smoking gun seemed to be the discovery beneath the Yucatán peninsula of Mexico of a 110-mile-wide crater called Chicxulub, after a nearby town. But lately, doubters have argued that Chicxulub formed 300,000 years before the mass extinction - too early to have played a role in the demise of the dinosaurs and hundreds of other plant and animal species that vanished at the end of the Cretaceous. The team of scientists zeroed in on Cuba as an ideal place to seek clues, having heard from Cuban colleagues of a possible trove of fossils of the right age. The Cuban zone was 600 miles from the Mexican crater. Now, in the September issue of Geology, the scientists, from Spain, Cuba and Mexico, report that they have discovered a highly disturbed bed of fossils that bears numerous signatures of Chicxulub's mayhem. The date of the disturbance, 65 million years ago, is exactly at the end of the Cretaceous. "It's basic" to resolving the debate, Laia Alegret, a team geologist at the University of Zaragoza in Spain, said in an interview. "But it was difficult. The site is located opposite a military base. So it's almost impossible to get a work permit." The discovery was outside Santa Clara, a city in central Cuba whose nearby air base drew scrutiny in 1962 when American spy planes spotted Soviet jets and antiaircraft missiles. It turned out that the base held Soviet bombers and a half-dozen atom bombs. "It was definitely a hot spot," said Timothy Naftali, a cold war historian at the University of Virginia. Starting around 2000, Dr. Alegret and her European colleagues repeatedly sought work permits for a nearby hill but always met with stultifying delays, if not outright rejections. Finally, they slipped into the site with their Cuban colleagues, going in late 2000, 2002 and 2003. At other times, the Cubans went in alone. A rocky outcrop on the hill showed an exposed bed of sedimentary rock made up of broken bits of minerals and fossils. It was more than 30 feet thick. The team took 66 samples. Examination with microscopes showed numerous signs of cosmic violence, including quartz deformed by high temperatures and pressures, as well as tiny spheres of glass, both clearly debris from a spectacular fireball. Microscopic study also revealed the presence of thousands of tiny fossil creatures, most especially foraminifera. Those one-celled animals have a bewildering array of minuscule shells.
Forams, as they are known, evolve so fast that geologists, paleontologists and oil companies use their shifting appearance as reliable guides to geologic dating. "They told the age of the sediments," Dr. Alegret said. "So we've definitely confirmed the age of these deposits." At the end of the Cretaceous, the rocky bed now in Cuba formed on the ocean bottom at a depth of perhaps 3,300 feet, over a few days or weeks as tons of debris rained down from the sky and huge waves generated by the Chicxulub event washed land out to sea. "It was geologically instantaneous," Dr. Alegret said of the deposit's formation. Earth movements over the ages turned that part of the seabed into land. Dr. Alegret's co-authors include Ignacio Arenillas, José A. Arz, Alfonso Meléndez, Eustoquio Molina and Ana R. Soria of the University of Zaragoza; Consuelo Díaz of the Institute of Geology and Paleontology in Havana; José M. Grajales-Nishimura of the Mexican Institute of Petroleum in Mexico City; and Reinaldo Rojas of the National Museum of Natural History in Havana. Dr. Alegret said that because of the site's importance, her Cuban colleagues were talking with the government to have it protected from rain and erosion. The aim is to save the outcrop for scientific study. From checker at panix.com Mon Sep 26 23:53:51 2005 From: checker at panix.com (Premise Checker) Date: Mon, 26 Sep 2005 19:53:51 -0400 (EDT) Subject: [Paleopsych] National Journal: Jonathan Rauch: The Loss of New Orleans Wasn't Just A Tragedy. It Was a Plan. Message-ID: Jonathan Rauch: The Loss of New Orleans Wasn't Just A Tragedy. It Was a Plan. The National Journal September 17, 2005 First, the summary from the "Magazine and Journal Reader" feature of the daily bulletin from the Chronicle of Higher Education, 5.9.21 http://chronicle.com/daily/2005/09/2005092101j.htm A glance at the current issue of the National Journal: The battle never fought in New Orleans In the wake of Hurricane Katrina, what is most astonishing is that the plan for a powerful hurricane in New Orleans was "to lose the city," writes Jonathan Rauch, a scholar of governance studies at the Brookings Institution. "In other words," he says, "if a severe hurricane struck, the city's flooding and abandonment was not what would happen if the plan failed. It was the plan." For years, it was well known that the city's levees could not defend against a hurricane of Katrina's strength, says Mr. Rauch. Last year, for example, The Philadelphia Inquirer wrote that, in the case of such a storm, "evacuation is the only way to protect New Orleanians." Despite those warnings, lawmakers continued to sidestep levee improvement, leaving New Orleans extremely vulnerable to major storms. Officials figured a storm like Katrina happened only once every two or three centuries, says Mr. Rauch. The risk of inaction, he writes, seemed to outweigh the billions of dollars it would cost to improve the levees adequately. Some estimates put the amount of money saved because of this decision at as much as $33-billion, he writes. Characterizations of New Orleans's vulnerability as "tantamount to negligence" appear justified, he writes. "A far larger flood-prevention program should have been under way." Mr. Rauch calls for a review of America's disaster-preparedness strategy "no less sweeping than the post-9/11 revision of America's security strategy." Congress, he says, should establish an independent disaster-review board to assess vulnerable locations and set priorities for spending on disaster prevention.
"If there is another New Orleans out there, the public should know about it," he writes. "Katrina should change American habits of mind forever." --Jason Breslow _________________________________________________________________ Background articles from The Chronicle: * [55]Disaster Could Have Been Far Worse, Says Sociologist Who Thinks New Orleans 'Lucked Out' (9/19/2005) * [56]Opinion: New Orleans and the Probability Blues (9/14/2005) Special Report: * [57]Links to all of The Chronicle's coverage of Hurricane Katrina and its effect on colleges References 55. http://chronicle.com/daily/2005/09/2005091904n.htm 56. http://chronicle.com/daily/2005/09/2005091402n.htm 57. http://chronicle.com/indepth/katrina/ E-mail me if you have problems getting the referenced articles. ------------------- Even in foresight, it seems justified to call New Orleans's vulnerability 'tantamount to negligence.' -------------- The evacuation plans were inadequate and then bungled. The rescue was slow, confused, often nonexistent. Yet the most striking fact of the New Orleans catastrophe has received less notice than it deserves: The plan for New Orleans in case of a hit from a very powerful hurricane was to lose the city. In other words, if a severe hurricane struck, the city's flooding and abandonment was not what would happen if the plan failed. It was the plan. New Orleans is built between a lake, a river, and the Gulf of Mexico, and it is lower than the surrounding waters. It was kept dry by an extensive system of levees and pumps. That system was itself contributing to the slow subsidence of the city. The levee system was largely designed in the early 1960s. By the standards of their day, the levees were built conservatively, but within certain constraints. In particular, they were built to withstand a Category 3 hurricane. Hurricanes come in two jumbo sizes: Category 4 and, most severe but rarest, Category 5. A storm of either magnitude could deliver a surge that would overtop or breach the levees. The city would be flooded, to depths as great as 20 feet. It would become a lake. Much of it would be destroyed, and many people would die. All of this was well known. Press accounts and public officials have been quite open about it for years. "Evacuation is the only way to protect New Orleanians," reported the Philadelphia Inquirer last year. It quoted Terry C. Tullier, the New Orleans director of emergency preparedness, as saying, "It's only a matter of time." Col. Peter Rowan, the commander of the New Orleans District of the Army Corps of Engineers, told the Inquirer that the city was "at the mercy of chance for the foreseeable future." Media coverage was rife with such warnings. What could be done? "It's possible to protect New Orleans from a Category 5 hurricane," Al Naomi, a senior project manager with the Corps, told the Inquirer. "To do nothing is tantamount to negligence." In that interview, he estimated that hurricane-proofing the levees and building floodgates at the mouth of Lake Pontchartrain might cost $1 billion and take 20 years. In other interviews, Naomi estimated the cost at $2 billion to $2.5 billion and said the project could be completed in three to five years. "The point is to eliminate that storm-surge threat with one of these plans," Naomi told Riverside, a Corps of Engineers magazine. "The philosophy of what we do during a hurricane would change. We could spend more time protecting our homes and less time trying to get out of the city in these desperate evacuations." 
In 1999, reports the Chicago Tribune, Congress authorized the Army Corps to conduct a $12 million study to determine the cost of protecting New Orleans. But the study was not set to get under way until 2006, and it has so far received funding of only $100,000 to $200,000. "It was not clear why the study has taken so long to begin," the Tribune reported. Meanwhile, Congress and the White House consistently and sharply cut requests for levee-improvement funds. Katrina came ashore as a Category 4 storm. The levees failed and the city, only partially evacuated, was swamped. "The intensity of this storm simply exceeded the design capacity of this levee," Lt. Gen. Carl Strock, the commander of the Corps of Engineers, told reporters on September 2. Told so barely, the tale suggests shocking imprudence. But hindsight is 20/20. Remember, the odds of a Category 4 or 5 hurricane hitting New Orleans any given year were small. Strock told reporters, "We figured we had a 200- or 300-year level of protection. That means that an event that we were protecting from might be exceeded every 200 or 300 years. So we had an assurance that, 99.5 percent, this would be OK. We, unfortunately, have had that 0.5 percent activity here." Remember, too, that reinforcing the levees was a multibillion-dollar project. An ancillary project to restore the protective marshes of the Mississippi Delta, which would have reduced the force of storm surges reaching the city, would cost something like $14 billion over three decades. For that kind of money, there are always competing priorities, some of them urgent. The question, then, is not whether the failure to improve New Orleans's flood protection was a mistake in hindsight -- obviously, it was -- but whether it was a reasonable choice in foresight, based on the probable odds and costs as they appeared at the time. Weighing low-probability, high-cost events is, as it happens, something economists and engineers know a bit about. W. Kip Viscusi, an economist at Harvard Law School and the editor of the Journal of Risk and Uncertainty, points out that the Corps of Engineers was among the first to develop and apply what has become a common cost-benefit template. Using the more cautious of Strock's figures, assume the odds are that a storm surge would overtop or breach the existing New Orleans levees once every 200 years. This seems, if anything, optimistic, given that Category 4 storms hit the city in 1915 and 1947; that a Category 5 storm (Camille) narrowly missed in 1969; and that the devastating Katrina itself was not a direct hit. Still, assume it. Assume also that officials could reasonably expect the city's inundation, abandonment, and partial destruction to cost, ballpark, $200 billion in direct and indirect economic losses. In any given year, then, figure that the expected economic cost of the swamping of New Orleans is $1 billion (divide the $200 billion cost over 200 years). A $2 billion levee project could be expected to pay for itself, probabilistically speaking, in two years; a $14 billion Delta restoration project, in 14 years. But wait. New Orleans's 200-year flood might take place a century from now instead of right away (remember, this analysis is from a pre-Katrina standpoint), and money lost in the future matters less to us than money lost today. At an interest rate of 3 percent, Viscusi says, the present value of averting $1 billion in expected annual damage forever is $33 billion; at 5 percent, $20 billion; at 10 percent, $10 billion. 
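Rauch's arithmetic is easy to verify. Below is a minimal sketch in Python (written for this posting, not part of Rauch's article) that plugs the article's own figures into the standard perpetuity formula, present value = annual loss / discount rate; only the variable names are invented here.

```python
# Back-of-the-envelope check of the Rauch/Viscusi cost-benefit figures quoted above.
# Assumptions taken from the article: a levee-topping storm once every 200 years,
# roughly $200 billion in direct and indirect losses if it happens.

return_period_years = 200        # "once every 200 years"
loss_if_flooded = 200e9          # ~$200 billion in losses

# Expected (probabilistic) loss in any given year
expected_annual_loss = loss_if_flooded / return_period_years
print(f"Expected annual loss: ${expected_annual_loss / 1e9:.0f} billion")  # ~$1 billion

# Present value of averting that annual loss forever, treated as a perpetuity:
# PV = annual amount / discount rate
for rate in (0.03, 0.05, 0.10):
    pv = expected_annual_loss / rate
    print(f"Discount rate {rate:.0%}: present value ${pv / 1e9:.0f} billion")

# Output: roughly $33, $20 and $10 billion -- matching Viscusi's figures, and all
# well above the $2-2.5 billion levee upgrade; most also exceed the ~$14 billion
# Delta restoration estimate.
```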
Any of those numbers is higher than the estimated cost of hurricane-proofing the levees, and all but the smallest are higher than restoring the Delta. Now, recall that those calculations reflect only tangible monetary cost. They do not account for inconvenience, pain and trauma, lives uprooted, and, above all, lives lost. Even a superbly organized evacuation would leave thousands of people behind. Moving nursing home patients, emptying hospitals, and losing control of the streets are dangerous at best. To all of which, add the psychic and cultural blow of leaving one of the country's most historic cities an empty ruin. Strock told reporters that decisions about the levees were based on "whether it's worth the cost to the benefit, and then striking the right level of protection." Unless one uses very optimistic assessments of hurricane odds and economic costs, and also places a low value on human costs, New Orleans did not strike the right level of protection. Even in foresight, Naomi's characterization of New Orleans's vulnerability as "tantamount to negligence" appears justified. A far larger flood-prevention program should have been under way. "This was not a close call," Viscusi says. "It's a no-brainer that you do this." The immediate problem is to identify and bury the dead, tend to the refugees, and decide whether and how to rebuild. ("Whatever rebuilding is done in New Orleans, nothing very fancy should go there," says Richard A. Posner, a federal appeals court judge and the author of last year's book Catastrophe: Risk and Response.) After that should come a revision of America's disaster strategy no less sweeping than the post-9/11 revision of America's security strategy. For example, Congress should create an independent Disaster Review Board to perform and publish an annual inventory of catastrophic vulnerabilities, highlighting in red all the places where, as in New Orleans, more prevention or mitigation makes sense. The board should prioritize spending and send an overall disaster budget to Congress every year for an up-or-down vote, forcing politicians to confront the issue. If population centers lie over the San Andreas Fault, in the shadow of Mount Rainier (an active volcano that could devastate the Seattle area), or on the floodplains of the Mississippi, the disaster board should be able to propose protecting them, requiring them to protect themselves, or encouraging them to move. If there is another New Orleans out there, the public should know about it and should have to think about it. Katrina should change American habits of mind forever. From checker at panix.com Mon Sep 26 23:54:00 2005 From: checker at panix.com (Premise Checker) Date: Mon, 26 Sep 2005 19:54:00 -0400 (EDT) Subject: [Paleopsych] CHE: We're All Machiavellians Message-ID: We're All Machiavellians The Chronicle of Higher Education, 5.9.25 http://chronicle.com/weekly/v52/i05/05b01001.htm By FRANS B.M. DE WAAL Given the obvious "will to power" (as Friedrich Nietzsche called it) of the human race, the enormous energy put into its expression, the early emergence of hierarchies among children, and the childlike devastation of grown men who tumble from the top, I'm puzzled by the taboo with which our society surrounds this issue. Most psychology textbooks do not even mention power and dominance, except in relation to abusive relationships. Everyone seems in denial. In one study on the power motive, corporate managers were asked about their relationship with power. 
They did acknowledge the existence of a lust for power, but never applied it to themselves. They enjoyed responsibility, prestige, and authority. The power grabbers were other men. Political candidates are equally reluctant. They sell themselves as public servants, only in it to fix the economy or improve education. Have you ever heard a candidate admit he wants power? Obviously, the word "servant" is doublespeak: Does anyone believe that it's only for our sake that they join the mudslinging of modern democracy? Do the candidates themselves believe this? What an unusual sacrifice that would be. It's refreshing to work with chimpanzees: They are the honest politicians we all long for. When the political philosopher Thomas Hobbes postulated an insuppressible power drive, he was right on target for both humans and apes. Observing how blatantly chimpanzees jockey for position, one will look in vain for ulterior motives and expedient promises. I was not prepared for this when, as a young student, I began to follow the dramas among the Arnhem Zoo chimpanzees from an observation window overlooking their island. In those days students were supposed to be antiestablishment, and my shoulder-long hair proved it. We considered power evil and ambition ridiculous. Yet my observations of the apes forced me to open my mind to seeing power relations not as something bad but as something ingrained. Perhaps inequality was not to be dismissed as simply the product of capitalism. It seemed to go deeper than that. Nowadays this may seem banal, but in the 1970s human behavior was seen as totally flexible; not natural but cultural. If we really wanted to, people believed, we could rid ourselves of archaic tendencies like sexual jealousy, gender roles, material ownership, and yes, the desire to dominate. Unaware of this revolutionary call, my chimpanzees demonstrated the same archaic tendencies, but without a trace of cognitive dissonance. They were jealous, sexist, and possessive, plain and simple. I didn't know then that I'd be working with them for the rest of my life, or that I would never again have the luxury of sitting on a wooden stool and watching them for thousands of hours. It was the most revelatory time of my life. I became so engrossed that I began trying to imagine what made my apes decide on this or that action. I started dreaming of them at night, and, most significant, I started seeing the people around me in a different light. I am a born observer. My wife, who does not always tell me what she buys, has learned to live with the fact that I can walk into a room and within seconds pick out anything new or changed, regardless of how small. It could be just a new book inserted between other books, or a new jar in the refrigerator. I do so without any conscious intent. Similarly, I like to pay attention to human behavior. When picking a seat in a restaurant I want to face as many tables as possible. I enjoy following the social dynamics -- love, tension, boredom, antipathy -- around me based on body language, which I consider more informative than the spoken word. Since keeping track of others is something I do automatically, becoming a fly on the wall of an ape colony came naturally to me. My observations helped me see human behavior in an evolutionary light. By this, I mean not just the Darwinian light one hears so much about, but also the apelike way we scratch our heads if conflicted, or the dejected look we get if a friend pays too much attention to someone else. 
At the same time, I began to question what I'd been taught about animals: They just follow instinct; they have no inkling of the future; everything they do is selfish. I couldn't square this with what I was seeing. I lost the ability to generalize about "the chimpanzee" in the same way that no one ever speaks about "the human." The more I watched, the more my judgments began to resemble those we make about other people, such as this person is kind and friendly, and that one is self-centered. No two chimpanzees are the same. It's impossible to follow what's going on in a chimp community without distinguishing between the actors and trying to understand their goals. Chimpanzee politics, like human politics, is a matter of individual strategies clashing to see who comes out ahead. The literature of biology proved of no help in understanding the social maneuvering because of its aversion to the language of motives. Biologists don't talk about intentions and emotions. So I turned to Niccolò Machiavelli. During quiet moments of observation, I read from a book published four centuries earlier. The Prince put me in the right frame of mind to interpret what I was seeing on the island, though I'm pretty sure that the philosopher himself never envisioned this particular application of his work. Among chimpanzees, hierarchy permeates everything. When we bring two females inside the building -- as we often do for testing -- and have them work on the same task, one will be ready to go while the other hangs back. The second female barely dares to take rewards, and won't touch the puzzle box, computer, or whatever else we're using in the experiment. She may be just as eager as the other, but defers to her "superior." There is no tension or hostility, and out in the group they may be the best of friends. One female simply dominates the other. In the Arnhem colony, the alpha female, Mama, did occasionally underline her position with fierce attacks on other females, but she was generally respected without contest. Mama's best friend, Kuif, shared her power, but this was nothing like a male coalition. Females rise to the top because everyone recognizes them as leader, which means there is little to fight over. Inasmuch as status is largely an issue of personality and age, Mama did not need Kuif. Kuif shared in, but did not contribute to, Mama's power. Among the males, in contrast, power is always up for grabs. It's not conferred on the basis of age or any other trait, but has to be fought for and jealously defended in the face of contenders. If males form coalitions, it's because they need each other. Status is determined by who can beat whom, not just on an individual basis but in the group as a whole. It does not do a male any good if he can physically defeat his rival, if each time he tries to do so the whole group jumps on top of him. In order to rule, a male needs both physical strength and buddies who will help him out when a fight gets too hot. When Nikkie was alpha, Yeroen's assistance was crucial. Not only did Nikkie need the old male's help to keep a powerful third male in check, but he was also unpopular with the females. It was not unusual for females to band together against him. Yeroen, being highly respected, could stop such mass discontent by positioning himself between Nikkie and the screaming females.
But with complex strategies come miscalculations, as Nikkie showed several years later, when he became so intolerant toward his partner, Yeroen, that he lost his support and immediately dropped in rank. Nikkie had underestimated his dependence on the old fox. This is why we speak of political "skills": It's not so much who you are, but what you do. We are exquisitely attuned to power, responding quickly to any new configuration. If a businessman tries to get a contract with a large corporation, he will be in meeting after meeting with all sorts of people from which a picture emerges of rivalries, loyalties, and jealousies within the corporation he is visiting, such as who wants whose position, who feels excluded by whom, and who is on his way down or out. This picture is at least as valuable as the organizational chart of the company. We simply could not survive without our sensitivity to power dynamics. Power is all around us, continuously confirmed and contested, and perceived with great accuracy. But social scientists, politicians, and even laypeople treat it like a hot potato. We prefer to cover up underlying motives. Anyone who, like Machiavelli, breaks the spell by calling it like it is, risks his reputation. No one wants to be called "Machiavellian," even though most of us are. Frans B.M. de Waal is a professor of psychology and director of the Living Links Center, part of the Yerkes National Primate Research Center, at Emory University. This article is excerpted from Our Inner Ape: A Leading Primatologist Explains Why We Are Who We Are, to be published next month by Riverhead Books. From checker at panix.com Mon Sep 26 23:54:06 2005 From: checker at panix.com (Premise Checker) Date: Mon, 26 Sep 2005 19:54:06 -0400 (EDT) Subject: [Paleopsych] CHE: Jürgen Habermas and Post-Secular Societies Message-ID: Jürgen Habermas and Post-Secular Societies The Chronicle of Higher Education, 5.9.25 http://chronicle.com/weekly/v52/i05/05b01601.htm By RICHARD WOLIN Among 19th-century thinkers it was an uncontestable commonplace that religion's cultural centrality was a thing of the past. For Georg Hegel, following in the footsteps of the Enlightenment, religion had been surpassed by reason's superior conceptual precision. In The Essence of Christianity (1841), Ludwig Feuerbach depicted the relationship between man and divinity as a zero-sum game. In his view, the stress on godliness merely detracted from the sublimity of human ends. In one of his youthful writings, Karl Marx, Feuerbach's most influential disciple, famously dismissed religion as "the opium of the people." Its abolition, Marx believed, was a sine qua non for human betterment. Friedrich Nietzsche got to the heart of the matter by having his literary alter ego, the brooding prophet Zarathustra, brusquely declaim, "God is dead," thereby pithily summarizing what many educated Europeans were thinking but few had the courage actually to say. And who can forget Nietzsche's searing characterization of Christianity as a "slave morality," a plebeian belief system appropriate for timorous conformists but unsuited to the creation of a future race of domineering Übermenschen? True to character, the only representatives of Christianity Nietzsche saw fit to praise were those who could revel in a good auto-da-fé -- Inquisition stalwarts like Ignatius Loyola. Twentieth-century characterizations of belief were hardly more generous.
Here, one need look no further than the title of Freud's 1927 treatise on religion: The Future of an Illusion. Today, however, there are omnipresent signs of a radical change in mentality. In recent years, in both the United States and the developing world, varieties of religious fundamentalism have had a major political impact. As Democratic presidential hopefuls Howard Dean and John Kerry learned the hard way, politicians who are perceived as faithless risk losing touch with broad strata of the electorate. Are contemporary philosophers up to the challenge of explaining and conceptualizing these striking recent developments? After all, what Freud, faithfully reflecting the values of the scientific age, cursorily dismissed as illusory seems to have made an unexpected and assertive comeback -- one that shows few signs of abating anytime soon. Jürgen Habermas may be the living philosopher most likely to succeed where angels, and their detractors, fear to tread. Following Jacques Derrida's death last October, it would seem that Habermas has justly inherited the title of the world's leading philosopher. Last year he won the prestigious Kyoto Prize for Arts and Philosophy (previous recipients include Karl Popper and Paul Ricoeur), capping an eventful career replete with honors as well as a number of high-profile public debates. The centerpiece of Habermas's moral philosophy is "discourse ethics," which takes its inspiration from Immanuel Kant's categorical imperative. For Kant, to count as moral, actions must pass the test of universality: The actor must be able to will that anyone in a similar situation should act in the same way. According to Kant, lying and stealing are immoral insofar as they fall beneath the universalization threshold; only at the price of grave self-contradiction could one will that lying and stealing become universal laws. Certainly, we can envisage a number of exceptional situations where we could conceivably justify lying or stealing. In Kant's example, at your door is a man intent on murdering your loved one and inquiring as to her whereabouts. Or what if you were too poor to purchase the medicine needed to save your spouse's life? In the first case you might well think it would be permissible to lie; and in the second case, to steal. Yet on both counts Kant is immovable. An appeal to circumstances might well complicate our decision making. It might even elicit considerable public sympathy for otherwise objectionable conduct. But it can in no way render an immoral action moral. It is with good reason that Kant calls his imperative a categorical one, for an imperative that admits of exceptions is really no imperative at all. Habermas's approach to moral philosophy is Kantian, although he takes exception to the solipsistic, egological framework Kant employs. Habermas believes that, in order to be convincing, moral reasoning needs a broader, public basis. Discourse ethics seeks to offset the limitations of the Kantian approach. For Habermas, the give and take of argumentation, as a learning process, is indispensable. Through communicative reason we strive for mutual understanding and learn to assume the standpoint of the other. Thereby we also come to appreciate the narrowness of our own individual perspective. Discourse ethics proposes that those actions are moral that could be justified in an open-ended and genuine public dialogue.
Its formula suggests that "only those norms can claim to be valid that meet (or could meet) with the approval of all affected in their capacity as participants in a practical discourse." Until recently Habermas was known as a resolutely secular thinker. On occasion his writings touched upon religious subjects or themes. But these confluences were exceptions that proved the rule. Yet a few years ago the tonality of his work began to change ever so subtly. In fall 2001 Habermas was awarded the prestigious Peace Prize of the German Publishers and Booksellers Association. The title of his acceptance speech, "Faith and Knowledge," had a palpably theological ring. The remarks, delivered shortly after the September 11 terrorist attacks, stressed the importance of mutual toleration between secular and religious approaches to life. Last year Habermas engaged in a high-profile public dialogue with Cardinal Joseph Ratzinger -- who, on April 19, was named as Pope John Paul II's successor -- at the cardinal's behest. A number of the philosopher's left-wing friends and followers were taken aback by his willingness to have a dialogue with one of Europe's most conservative prelates. In 2002 Habermas had published In Defense of Humanity, an impassioned critique of the risks of biological engineering and human cloning. It was this text in particular, in which the philosopher provided an eloquent defense of the right to a unique human identity -- a right that cloning clearly imperils -- that seems to have piqued the cardinal's curiosity and interest. Yet if one examines the trajectory of Habermas's intellectual development, the Ratzinger exchange seems relatively unexceptional. Glance back at Habermas's philosophical chef d'oeuvre, the two-volume Theory of Communicative Action (1981), and you'll find that one of his key ideas is the "linguistification of the sacred" (Versprachlichung des Sakralen). By this admittedly cumbersome term, Habermas asserts that modern notions of equality and fairness are secular distillations of time-honored Judeo-Christian precepts. The "contract theory" of politics, from which our modern conception of "government by consent of the governed" derives, would be difficult to conceive apart from the Old Testament covenants. Similarly, our idea of the intrinsic worth of all persons, which underlies human rights, stems directly from the Christian ideal of the equality of all men and women in the eyes of God. Were these invaluable religious sources of morality and justice to atrophy entirely, it is doubtful whether modern societies would be able to sustain this ideal on their own. In a recent interview Habermas aptly summarized those insights: "For the normative self-understanding of modernity, Christianity has functioned as more than just a precursor or a catalyst. Universalistic egalitarianism, from which sprang the ideals of freedom and a collective life in solidarity, the autonomous conduct of life and emancipation, the individual morality of conscience, human rights, and democracy, is the direct legacy of the Judaic ethic of justice and the Christian ethic of love." Three years ago the MIT Press published Religion and Rationality: Essays on Reason, God, and Modernity, an illuminating collection of Habermas's writings on religious themes.
Edited and introduced by the philosopher Eduardo Mendieta, of the State University of New York at Stony Brook, the anthology concludes with a fascinating interview in which the philosopher systematically clarifies his views on a variety of religious areas. (A companion volume, The Frankfurt School on Religion: Key Writings by the Major Thinkers, also edited by Mendieta, was published in 2004 by Routledge.) On the one hand, religion's return -- Habermas, perhaps with the American situation foremost in mind, goes so far as to speak of the emergence of "post-secular societies" -- presents us with undeniable dangers and risks. While theodicy has traditionally provided men and women with consolation for the harsh injustices of fate, it has also frequently taught them to remain passively content with their lot. It devalues worldly success and entices believers with the promise of eternal bliss in the hereafter. Here the risk is that religion may encourage an attitude of social passivity, thereby contravening democracy's need for an active and engaged citizenry. To wit, the biblical myth of the fall perceives secular history as a story of decline or perdition from which little intrinsic good may emerge. On the other hand, laissez-faire's success as a universally revered economic model means that, today, global capitalism's triumphal march encounters few genuine oppositional tendencies. In that regard, religion, as a repository of transcendence, has an important role to play. It prevents the denizens of the modern secular societies from being overwhelmed by the all-encompassing demands of vocational life and worldly success. It offers a much-needed dimension of otherness: The religious values of love, community, and godliness help to offset the global dominance of competitiveness, acquisitiveness, and manipulation that predominate in the vocational sphere. Religious convictions encourage people to treat each other as ends in themselves rather than as mere means. One of Habermas's mentors, the Frankfurt School philosopher Max Horkheimer, once observed that "to salvage an unconditional meaning" -- one that stood out as an unqualified Good -- "without God is a futile undertaking." As a stalwart of the Enlightenment, Habermas himself would be unlikely to go that far. But he might consider Horkheimer's adage a timely reminder of the risks and temptations of all-embracing secularism. Habermas stressed in a recent public lecture "the force of religious traditions to articulate moral intuitions with regard to communal forms of a dignified human life." As forceful and persuasive as our secular philosophical precepts might be -- the idea of human rights, for example -- from time to time they benefit from renewed contact with the nimbus of their sacral origins. Last April Habermas presented a more systematic perspective on religion's role in contemporary society at an international conference on "Philosophy and Religion" at Poland's Lodz University. One of the novelties of Habermas's Lodz presentation, "Religion in the Public Sphere," was the commendable idea that "toleration" -- the bedrock of modern democratic culture -- is always a two-way street. Not only must believers tolerate others' beliefs, including the credos and convictions of nonbelievers; it falls due to disbelieving secularists, similarly, to appreciate the convictions of religiously motivated fellow citizens. 
From the standpoint of Habermas's "theory of communicative action," this stipulation suggests that we assume the standpoint of the other. It would be unrealistic and prejudicial to expect that religiously oriented citizens wholly abandon their most deeply held convictions upon entering the public sphere where, as a rule and justifiably, secular reasoning has become our default discursive mode. If we think back, for instance, to the religious idealism that infused the civil-rights movement of the 1950s and 1960s, we find an admirable example of the way in which a biblical sense of justice can be fruitfully brought to bear on contemporary social problems. The philosopher who addressed these issues most directly and fruitfully in recent years was John Rawls. In a spirit of collegial solidarity, Habermas, in his Lodz paper, made ample allusion to Rawlsian ideals. Perhaps Rawls's most important gloss on religion's role in modern politics is his caveat or "proviso" that, to gain a reasonable chance of public acceptance, religious reasons must ultimately be capable of being translated into secular forms of argumentation. In the case of public officials -- politicians and the judiciary, for example -- Rawls raises the secular bar still higher. He believes that, in their political language, there is little room for an open and direct appeal to nonsecular reasons, which, in light of the manifest diversity of religious beliefs, would prove extremely divisive. As Habermas affirms, echoing Rawls: "This stringent demand can only be laid at the door of politicians, who within state institutions are subject to the obligation to remain neutral in the face of competing worldviews." But if that stringent demand is on the politician, Habermas argues, "every citizen must know that only secular reasons count beyond the institutional threshold that divides the informal public sphere from parliaments, courts, ministries, and administrations." With his broad-minded acknowledgment of religion's special niche in the spectrum of public political debate, Habermas has made an indispensable stride toward defining an ethos of multicultural tolerance. Without such a perspective, prospects for equitable global democracy would seem exceedingly dim. The criterion for religious belief systems that wish to have their moral recommendations felt and acknowledged is the capacity to take the standpoint of the other. Only those religions that retain the capacity to bracket or suspend the temptations of theological narcissism -- the conviction that my religion alone provides the path to salvation -- are suitable players in our rapidly changing, post-secular moral and political universe. Richard Wolin is a professor of history, comparative literature, and political science at the Graduate Center of the City University of New York. His books include The Seduction of Unreason: The Intellectual Romance With Fascism From Nietzsche to Postmodernism (Princeton University Press, 2004). From checker at panix.com Mon Sep 26 23:54:17 2005 From: checker at panix.com (Premise Checker) Date: Mon, 26 Sep 2005 19:54:17 -0400 (EDT) Subject: [Paleopsych] Live Journal: The Grim Meathook Future Message-ID: The Grim Meathook Future http://www.livejournal.com/users/jwz/543348.html [Just some ideas on a new term being batted about. Links removed.] 
jwz wrote on September 21st, 2005 at 04:30 pm The Grim Meathook Future People have been asking about the origin of the phrase "Grim Meathook Future" that I've been tagging posts with for a while now; it is the creation of one Joshua Ellis, who wrote the following. This isn't on his site, and I like it, so I might as well paste it here for posterity: Feeding poor people is useful tech, but it's not very sexy and it won't get you on the cover of Wired. Talk about it too much and you sound like an earnest hippie. So nobody wants to do that. They want to make cell phones that can scan your personal measurements and send them real-time to potential sex partners. Because, you know, the fucking Japanese teenagers love it, and Japanese teenagers are clearly the smartest people on the planet. The upshot of all of this is that the Future gets divided; the cute, insulated future that Joi Ito and Cory Doctorow and you and I inhabit, and the grim meathook future that most of the world is facing, in which they watch their squats and under-developed fields get turned into a giant game of Counterstrike between crazy faith-ridden jihadist motherfuckers and crazy faith-ridden American redneck motherfuckers, each doing their best to turn the entire world into one type of fascist nightmare or another. Of course, nobody really wants to talk about that future, because it's depressing and not fun and doesn't have Fischerspooner doing the soundtrack. So everybody pretends they don't know what the future holds, when the unfortunate fact is that -- unless we start paying very serious attention -- it holds what the past holds: a great deal of extreme boredom punctuated by occasional horror and the odd moment of grace. Tags: grim meathook future music: Fluke -- Goodnight Lover _________________________________________________________________ From: final_girl Date: September 22nd, 2005 - 06:32 am nicely encapsulated. concise. elegant. like a shiv. thanks. :) From: bq_mackintosh Date: September 22nd, 2005 - 06:40 am [Holds up lighter in rock-concert tribute] From: bq_mackintosh Date: September 22nd, 2005 - 07:14 am On cue, from the NYTimes: WASHINGTON, Sept. 21 - The House of Representatives rushed through a $6.1 billion package of hurricane-related tax breaks today, sending the bill to the Senate, whose members also planned to pass it quickly. [...] One cut being considered is a delay in the start of the new Medicare prescription drug coverage for one year to save $31 billion and eliminating $25 billion in projects from the newly enacted transportation measure. The list also proposes...ending support for the Corporation for Public Broadcasting. Because a great way to make it up to the poor folk who got their clocks cleaned by the hurricane is to cut Medicaid. And if they hurry up with the elimination of public broadcasting, maybe there won't be as much bad news following Rita. From: wsxyz Date: September 22nd, 2005 - 08:19 pm One cut being considered is a delay in the start of the new MediCARE prescription drug coverage for one year Because a great way to make it up to the poor folk who got their clocks cleaned by the hurricane is to cut MediCAID. Hopefully the adult reading programs will be fully funded. From: bq_mackintosh Date: September 22nd, 2005 - 09:16 pm Five of one, half a dozen of the other. From: darkengobot Date: September 22nd, 2005 - 09:56 pm Actually, the Medicare prescription drug program isn't the same thing as Medicaid at all.
Medicaid is for the poor--Medicare prescription drug is for the richest per-capita section of the population, but who also vote regularly and reliably. Vote-buying is vote-buying, and IMO it's more heinous to cater to a large voting block with government goodies so that you can maintain your power than it is to give tax breaks to corporations. Corporations at least have payrolls. Old farts just clog up Denny's and wield their AARP cards like shurikens. From: bq_mackintosh Date: September 22nd, 2005 - 11:02 pm Actually, the Medicare prescription drug program isn't the same thing as Medicaid at all. Yeah, I realized my typo and was apparently too subtle in my self deprecation -- 'specially since I didn't do [DEL: poor :DEL] old folks. Anyway, base point still stands: the Big Help sent by the neo-cons is actually a tax-break bonanza and opportunity grab at program killing. It's arguably good to give something to small businesses to spark job growth in the area, but to me it just smacks of more of the same failed trickle-down program. (Reply) (Parent) From: autodidactic Date: September 22nd, 2005 - 06:40 am my unasked-for opinion "Grim meathook future" sounds a lot like growing up in Anacostia, circa 1979-1982. And, Fischerspooner sounds like they sniff Giorgio Moroder's jockstrap for inspiration. This doesn't mean I don't like them, somewhat, but... L. From: relaxing Date: September 22nd, 2005 - 08:26 am Re: my unasked-for opinion since Giogio isn't making records anymore I welcome any record smelling remotely like his jockstrap. From: king_mob Date: September 22nd, 2005 - 09:13 am Re: my unasked-for opinion I believe the point was more "How futuristic is retro disco, exactly?" Not that I don't kinda like Fischerspooner too, the little I've heard, but I'm really, really not Mr. Dance Music Guy. My own tangential observation would be that punk rock clearly failed and I have lived far too long. From: relaxing Date: September 22nd, 2005 - 09:25 am Re: my unasked-for opinion The point of the article was that at this moment you, I, and Cory Doctorow (or some reasonable caricature of that trio) want Fischerspooner playing in our future, rather than whatever grim meathook folk music undoubtedly awaits. My point was that being "retro" does not in itself make music good or bad, but anyway fuck it I've got to go do some more cocaine before oil prices make it too expensive to import. From: 7leaguebootdisk Date: September 22nd, 2005 - 07:04 am I'd throw in that Hunter S. Thompson used the phrase "grim meat hook reality" in Fear And Loathing In Las Vegas, this feels like a reasonable extrapolation of the phrase. Nice little essay. From: saltdog Date: September 22nd, 2005 - 07:09 am Holiday in Mogadishu Very well put. While the rest of the world is distracted by all this middle-east hoo-ha, The battle lines for the real opening batttlefields of the 'grim meathook future', eastern africa, are falling quietly into place. From: jcterminal Date: September 22nd, 2005 - 07:12 am this is excellent. thank you. From: ultranurd Date: September 22nd, 2005 - 07:20 am Thanks! I googled it, but was confused by hits for different people named Ellis. TSOR failed me, but your explanation is great. From: jwz Date: September 22nd, 2005 - 07:30 am Turkish Society of Rochester? From: ultranurd Date: September 22nd, 2005 - 10:24 am TSOR = thirty seconds of research. Also known as "glance at the summaries on the first page returned by Google". Or possibly a Wikipedia search. 
Forgot where I was posting; to my knowledge, the abbreviation was coined by a member of a mailing list I'm on. From: pavel_lishin Date: September 22nd, 2005 - 07:56 am It's scary that I recognize the authors and the band. Makes me feel... like... i'm in tune with some sort of culture, God Forbid. From: dr_wrebagzhoe Date: September 22nd, 2005 - 08:13 am It's... wow.. I mean.. um.. wow. From: ultranurd Date: September 22nd, 2005 - 10:25 am That icon captures the best scene in the movie. From: taffer Date: September 22nd, 2005 - 11:24 pm Currently in my browser that icon is punching to the beat of Howard Jones - What is Love? and making me snicker. From: wetzel Date: September 22nd, 2005 - 09:28 am it's pathetic, but every time i see that phrase i think of this meathook. From: cyn_goth_prog Date: September 22nd, 2005 - 11:21 am Doomed So what's the difference between "grim meathook future" and "doomed"? [j] From: jwz Date: September 22nd, 2005 - 01:29 pm Re: Doomed Well, the "grim meathook future" is a very specific, peak-oil-and-fascism sort of doomed. "Doomed" covers all sorts of upsetting eventualities, like man-made black holes, killdozers, and our new robot overlords. I don't pretend that my categories are the most concise possible. From: transgress Date: September 22nd, 2005 - 12:02 pm Of course, nobody really wants to talk about that future, because it's depressing and not fun and doesn't have Fischerspooner doing the soundtrack. So everybody pretends they don't know what the future holds, when the unfortunate fact is that -- unless we start paying very serious attention -- it holds what the past holds: a great deal of extreme boredom punctuated by occasional horror and the odd moment of grace. I only wish I could write a response that was as graceful, honest, and insightful. From: valacosa Date: September 22nd, 2005 - 01:26 pm "Feeding poor people is useful tech, but it's not very sexy and it won't get you on the cover of Wired. Talk about it too much and you sound like an earnest hippie. So nobody wants to do that." I would say it's very difficult to make money off of poor people, but very easy to make money off of horny Japanese teenagers. Very well written piece. From: king_mob Date: September 22nd, 2005 - 08:18 pm I would say it's very difficult to make money off of poor people, Citibank does all right. From: raemus Date: September 22nd, 2005 - 09:56 pm One word bum-vertising From: darkengobot Date: September 22nd, 2005 - 10:09 pm Grameen Bank. Microloans in the Third World can be quite profitable. From: ciphergoth Date: September 22nd, 2005 - 02:24 pm Also, of course, poor people do not primarily go hungry through lack of technology. From: cananian Date: September 22nd, 2005 - 10:01 pm Bullshit What the fuck is that supposed to mean? The third-world is a technological wasteland. There are *millions* of subsistence farmers who can't grow enough to feed their family, but could if they had access to, say, modern agricultural techniques (which is certainly a form of technology). Millions more die because they don't have clean water -- and wells and purification are sure as hell technology. Browse through ThinkCycle if you need a ton more examples of simple technology which the underprivileged desperately need. Of course, you may not think that low cost eyeglasses, better sand filters, cheap IV drip controls for cholera treatments, or passive incubators are sexy technology, but technology they are, and poor people definitely need it. People don't want to invest in it. 
But there is plenty of technology which could help. Even if you mean "email and computers" by your use of technology, I can assure you that these are used and useful in the third world as well. Small farmers can make more money (thus feed their families better) if they know the prices of their crops in surrounding communities. They can't afford to take a week off from tending their crops to walk around and survey the prices. One project here is trying to supply extremely low-cost solar-powered email stations to connect villages so that farmers can know their markets better. These also help get medical aid, etc, to the rural communities when it is needed. When your mother is sick, walking three days to the nearest hospital and then walking three days back with the medicines for your mother is no fun, let me assure you. Most of these projects rely on a sneaker-net/pony-express style of message transfer, since there is no network infrastructure. But store-and-forward email works just fine as long as there are some people making regular rounds of the villages. It's easy to say, "oh, technology can't help" but it *can*. As a higher-level example, wireless technologies are a godsend to the third world, because they can't afford to install wired infrastructure. (Heck, even MIT can't afford wired infrastructure: they are giving up on trying to upgrade their wires and moving their entire dormitory network to wireless.) The third world has really impressive cell-phone penetration, because the national telcos are corrupt, incompetent, unreliable, and habitually ignore poor areas (because they can't afford to bribe their local bureaucrats). Similarly, email is much more reliable than the national postal services (although it doesn't work so well for packages). From: mister_borogove Date: September 22nd, 2005 - 11:00 pm Re: Bullshit I assume ciphergoth's point is that there's already enough food calories being produced in the world, it's just that no one in power has any particular incentive to distribute the food evenly. Why are you responding so angrily? From: cananian Date: September 22nd, 2005 - 11:06 pm Re: Bullshit Because it seemed a cheap "well I can't do anything about it" cop-out, since I would guess that most of the readers of this blog are technologists, not politicians. Even if we can't fix the food distribution problem (global politics), we can certainly *do something*, and we can even use the *technology* we have/know. If the comment was meant as just a cynical aside, then my anger stands. If it was making some other point entirely (or was just ill-informed) then I'll apologize. From: zenmonkeykstop Date: September 22nd, 2005 - 11:41 pm Re: Bullshit It's kindof wallpapering over the cracks though, innit. If you have a social problem (unfair distribution of calories, say) then chances are a technological solution is just a profitable way to postpone or ignore the social solution. It's like trying to solve inner-city smog by proposing that SUVs be required to use hybrid engines, rather than by restricting downtown traffic. It's a great solution from the point of view of the car manufacturers, coz they get to make people buy new cars. But it doesn't solve the problem. Traffic just keeps scaling up. The whole point of the meathook is that technology is not the answer, it's the distraction from the question. Obey the meathook. From: mark242 Date: September 23rd, 2005 - 12:08 am That response shows why you're (we're) part of the Cory Doctorow side of the grim meathook equation. 
I'm sorry, but coding up novel new uses for consuming Flickr photos doesn't mean jack shit to the refugee in Darfur. And I'm pretty sure that the woman locked in her attic in New Orleans doesn't really care about the DRM in her DVD player. As for "we can certainly *do something*" -- yeah, we can, absolutely. We give money to organizations that know that the answer is getting feet on the ground and meals on (makeshift) tables. From: darkengobot Date: September 23rd, 2005 - 12:25 am Dunno how much we can do. Giving money to organizations is a lot like an appeal to sorcery. "Well, I tossed *my* virgin into the volcano--if it erupts, it sure won't be *my* fault." Whether those organizations can actually do anything or not is often dependent on the country in question not being run by corrupt sociopath dictators. Getting rid of corrupt sociopath dictators requires either A) a long, uncertain process dependent on international diplomatic pressure; or B) a short, uncertain process dependent on military might. Well, I guess there's always C) God turns said sociopath dictator into a donkey, but that hasn't happened since Nebuchadnezzar. From: cananian Date: September 23rd, 2005 - 12:43 am Um, I certainly didn't mention Flickr in my post. Wrt DRM -- I would say that wikipedia (for instance) is a heck of a lot more useful to poor folk than a DRM'ed Brittanica. If you want to prevent pervasive DRM from eventually swallowing up wikipedia, more power to you. But the refugees in Darfur (or New Orleans, or Galveston) *would* like an application to help them locate and contact lost family members. That can be done with code, and that does help real people. I refuse to lump that together java sex-locators on 3G phones. From: saltdog Date: September 23rd, 2005 - 12:00 am Re: Bullshit The technology that powers my six hundred and sixteen foot ship, the technology that helps us navigate from Galveston or Lake Charles or where ever we load our 41,054 short tons of cargo to Africa certianly helps distribute the calories. And I'm going to generalize here, and most people won't understand unless they have seen the way they live through tradition over there, but it's rare that folks there want to work, even for subsistance. They would take your non-sexy technology and find some way they could make a quick buck off of it instead of using it as intended. But that's just my thoughts on the matter. I'm just a simple seaman. From: lovingboth Date: September 22nd, 2005 - 02:58 pm There are other crazies mostly in bits of the world that the US isn't interested in, but yeah. I always thought Goodnight Lover was Fluke's pitch for doing a Bond soundtrack. From: baconmonkey Date: September 22nd, 2005 - 06:12 pm wrong song right song: The Cure - "Meathook" from "Three Imaginary Boys", 1979 From: lohphat Date: September 22nd, 2005 - 10:45 pm Yeah, but... ...where can you get one of those phones? From: susano_otter Date: September 23rd, 2005 - 12:57 am That's It? This "grim meathook future" is just shorthand for what has consistently been the human experience, in all its ups and downs and all its successes and failures, since the dawn of time? Sounds like a pretty unrealistic (and pessimistic) view of a process that has brought more polio vaccines to more people with every passing decade. And a greater percentage of people today are part of the wealthy elite, by historical standards, than in the middle ages, or in slave-powered ancient Egypt? Oh noes! Teh grim meathooke future si upon us!!etc. 
From checker at panix.com Mon Sep 26 23:54:30 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 26 Sep 2005 19:54:30 -0400 (EDT)
Subject: [Paleopsych] Original of Grim Meathook Future, ext'd
Message-ID:
Full text of the Grim Meathook Future thing
http://www.zenarchery.com/full-text-of-the-grim-meathook-future-thing
[Thanks to Eugen for this.]
So a lot of people seem to be wanting to track this down. This was originally posted on a private message board as a response to a post by Matt "Blackbelt" Jones. This is what I originally wrote:
"What are the fictions of 'the long now'" (what is the relationship of the long now and the long tail - does abundance and narrowcast-culture lead to the feeling of a thousand million divergent futures - the future got wide, again).
I think the problem is that the future, maybe for the first time since WWII, lies on the far side of an event horizon for us, because there are so many futures possible. There's the wetware future, the hardware future, the transhumanist future, the post-rationalist (aka fundamentalist) future. And then there's the future where everything just sort of keeps going on the way it has, with incremental changes, and technology is no longer the deciding factor in things. You don't need high tech to change the world; you need Semtex and guns that were designed by a Russian soldier fifty-odd years ago.
Meanwhile, most of the people with any genuine opportunity or ability to effect global change are too busy patting each other on the back at conventions and blue-skying goofy social networking tools that are essentially useless to 95% of the world's population, who live within fifteen feet of everyone they've ever known and have no need to track their fuck buddies with GPS systems. (This, by the way, includes most Americans, quite honestly.) You can't blame them for this, because it's fun and it's a great way to travel and get paid, but it doesn't actually help solve any real problems, except the problem of media theory grad students, which the rest of the world isn't really interested in solving.
Feeding poor people is useful tech, but it's not very sexy and it won't get you on the cover of Wired. Talk about it too much and you sound like an earnest hippie. So nobody wants to do that. They want to make cell phones that can scan your personal measurements and send them real-time to potential sex partners. Because, you know, the fucking Japanese teenagers love it, and Japanese teenagers are clearly the smartest people on the planet.
The upshot of all of this is that the Future gets divided; the cute, insulated future that Joi Ito and Cory Doctorow and you and I inhabit, and the grim meathook future that most of the world is facing, in which they watch their squats and under-developed fields get turned into a giant game of Counterstrike between crazy faith-ridden jihadist motherfuckers and crazy faith-ridden American redneck motherfuckers, each doing their best to turn the entire world into one type of fascist nightmare or another.
Of course, nobody really wants to talk about that future, because it's depressing and not fun and doesn't have Fischerspooner doing the soundtrack. So everybody pretends they don't know what the future holds, when the unfortunate fact is that - unless we start paying very serious attention - it holds what the past holds: a great deal of extreme boredom punctuated by occasional horror and the odd moment of grace.
I'm also working on a long essay expanding these ideas. Keep your eyes open for it.
September 22nd, 2005 at 1:41 pm Posted in General
From checker at panix.com Tue Sep 27 00:06:53 2005
From: checker at panix.com (Premise Checker)
Date: Mon, 26 Sep 2005 20:06:53 -0400 (EDT)
Subject: [Paleopsych] Mancur Olson: How Bright are the Northern Lights? Some Questions about Sweden
Message-ID:
Here's a long and good one. Enough for today.
Mancur Olson: How Bright are the Northern Lights? Some Questions about Sweden
Institute of Economic Research, Lund University, Sweden
http://www.mobergpublications.se/other/olson.htm
[Click this on to get the original PDF, the graphs, and better tables.]
Contents
Introduction
Preface
Chapter 1: Why Isn't Sweden Worse Off?
  Is the Standard Answer to the First Question Wrong?
  Are Time Lags the Answer?
  Is Swedish Culture the Explanation?
  Government Size and Economic Growth in the West
  The Route to an Answer to the Second Question
Chapter 2: International Trade, Competitive Markets, and Economic Growth
  Some Surprisingly Strong Statistical Relationships
  The Historical Relationship Is Also Strong
  Wider Evidence on Free Trade, Competitive Markets, and Growth
  Why Does Protectionism Hurt Growth More than Welfare Does?
Chapter 3: Explicit, Implicit, and Efficient Redistribution
  "Implicit" and "Explicit" Redistributions
  The Criteria for Redistributions Generate the Social Costs
  Slower Innovation as a Deadweight Loss
  Conditions on Explicit Redistributions to the Poor
  The Theory of Efficient Redistribution
  We are not Done Yet
Chapter 4: "Rational Ignorance" and the Bias of Collective Action
  The Difficulties of Collective Action
  The Inegalitarian Bias of Collective Action
Chapter 5: Why Implicit and Inefficient Redistribution is Commonplace
  Will Coalitions Seek Unconditional Cash Transfers?
  Rational Ignorance Makes Implicit Redistributions Possible
  The Implicit Redistributions that Rational Ignorance Permits are Almost Never Efficient Redistributions
  Aggregate Evidence on the Inefficiency of Implicit Redistributions
  The Salience of the Evidence in Chapter 2
  Sudden Increases in the Size of the Market and the Polity that Determines Trade Policy
Chapter 6: The Lower Costs and Ultimate Limits of Explicit Redistribution
  What Limits the Amount of Implicit Redistribution in Sweden?
  Encompassing Organizations
  Factors Lowering the Costs of Explicit Redistributions
  A Recapitulation
  Too Much of A Good Thing is Bad: Nonlinearities and Lags
  How Bright Are the Northern Lights?
Introduction
The Swedish welfare state has for several decades been discussed as the "middle way" between capitalism and communism. The recent collapse of the communist economies of Eastern Europe has brought new fuel to the debate on the future of the Swedish Model. Mancur Olson, professor of economics at the University of Maryland, is one of the pioneers in the development of public choice theory. The argument presented in this volume has been developed from his 1986 Crafoord lecture in Lund. Starting from concepts proposed in earlier works - "The Logic of Collective Action" (1965) and "The Rise and Decline of Nations" (1982)1 - Mancur Olson develops a framework for analysing the distinctive set of policies and institutions that together form the Swedish welfare state. He shows that different types of redistributions may have quite different effects on economic performance.
Less transparent "implicit" redistributions, aiming to protect certain industries, seem to have more damaging effects on performance than "explicit" cash transfers to low income groups. The social cost of income redistributions depends on the conditions attached to them - i.e. on their effects on incentives. Mancur Olson's analysis provides a number of concepts, distinctions and questions that together may help us better understand the long term dynamics of the Swedish Model.
Allan T Malm
Director, Institute of Economic Research
1 Also published in Swedish, "Nationers uppgång och fall", Ratio 1978.
Preface
This little book is not intended to argue either for or against the "Swedish model" of public policies and institutions. The purpose is rather to look at the somewhat distinctive set of public policies and institutions in Sweden from a new angle. When one looks at the Swedish situation from this unusual perspective, some new questions and options for public policy emerge. I asked many experts in Sweden and elsewhere to criticize an earlier draft of this brief book, and found that there was about as much sympathy for my argument on the one side of the political spectrum as on the other. I had to ask for so much help because I am not any kind of expert on Sweden. The argument in this essay emerged as much out of observation of other countries and from theoretical reflection as it did from what little knowledge I have of the Swedish scene. In part because of the fear that my limited knowledge would lead me into error, I have also given myself a lot of time for second thoughts. As it happened, the argument here began when Professor Ingemar Ståhl kindly asked me to present the Holger Crafoord lecture at the University of Lund in the Fall of 1986. Had those whose help I have sought not been so generous with their time I would not have felt able to provide a written version of that talk that Professors Allan Malm and Ingemar Ståhl asked of me. Thus any value this book has is due in large part to the help of my critics. Thick as my file of criticisms is, it is possible that some comment has been misplaced. But, at a minimum, I am greatly indebted for helpful criticisms to each of the following: Peter Bohm, Karl-Olof Faxén, Robert J. Flanagan, Anna Hedborg, Jörgen Holmquist, Sten Johansson, Jan Karlsson, Peter J. Katzenstein, Walter Korpi, Jan-Erik Lane, Assar Lindbeck, Carl Johan Ljungberg, Per-Martin Meyerson, R. M. Mitra, Victor A. Pestoff, Olof Ruin, Göran Therborn, Claudio Vedovato, and Carl-Johan Westholm. Chris Bartlett, Brendan Kennelly, Adele Krokes, Richard Lewis, Venka Macintyre, and Young Park provided me indispensable help in doing the research for this manuscript or in getting it prepared for publication. All of the shortcomings of this manuscript are nonetheless entirely my responsibility.
Postscript of 1 November, 1990: This essay was entirely written before - and much of it long before - the economic emergency of October and the Swedish government's crisis package of reductions in public expenditure announced on October 26. The argument of this book (particularly in the closing chapter) leads one to expect crises such as the one that has just occurred - and to fear that there may be others as well. Nonetheless, I see no purpose in adding a discussion of the short-term problem arising from fears of a devaluation of the Swedish crown to the book.
This book is about the more fundamental long-run characteristics of the Swedish economy, not about the emergencies or the successes of the season. In a long term perspective, the crisis of October, 1990, does not, taken by itself, appear to be so momentous: the possibility of a single devaluation would not raise fears about the long-run future of the Swedish economy if there were not uneasiness about a series of devaluations in the past and fears that a further devaluation would not be the last. Thus I believe that the focus should be on the fundamental structural issues with which this book deals - when they are understood, the way out of the short run difficulties will also be reasonably clear. Chapter 1: Why Isn't Sweden Worse Off? For more than half a century, Sweden has been known the whole world over for a distinctive set of policies and institutions - for the "middle way" between communism and capitalism or the "Swedish model".1 This Swedish system includes an unusually generous welfare state, but the country also has some other distinctive arrangements that, I shall argue, are no less significant. Are the distinctive Swedish arrangements really northern lights that nations can use to get a rough sense of direction when they choose economic and social policies? We cannot say until we know how well the distinctive Swedish arrangements are working in comparison with the different arrangements in other countries. Thus to answer our question - "How bright are the northern lights?" - we must first ask, "How is Sweden doing?" Sweden has, however, several distinctive institutions and policies, each of which has a different impact on the country's performance. Some may be lifting Sweden up at the same time that others are dragging it down, so that the overall performance may not reveal the actual value of any one of these distinctive arrangements. The inadequacy of an undifferentiated or monocausal approach becomes clear simply from breaking the familiar question about how Sweden is doing into two separate questions: 1) Why isn't the Swedish economy performing better than it is? 2) Why isn't the Swedish economy performing worse than it is? Although these questions are obviously parallel, their answers are not. There is a standard, straightforward answer to the first question, but no familiar or obvious answer to the second. 1 Erik Lundberg, "The Rise and Fall of the Swedish Model," Journal of Economic Literature 23, 1985. 2 The answer to the first question is so familiar that I shall merely evoke it here: income transfers and the public sector have often been larger in Sweden than in any other noncommunist country, with the government spending and transferring at times more than three fifths of the nation's Gross Domestic Product; tax rates have accordingly been exceptionally high, with large proportions of the population having paid half, or even two-thirds or four-fifths of marginal income in taxes. On top of this, an extraordinarily powerful union, the LO, has often reduced wage differentials, thereby attenuating incentives and discouraging investment in skills. 
Robert Flanagan, a leading labor economist contributing to the Brookings Institution study of the Swedish economy, said, "When an outside economist first views the Swedish labor market, with its compressed wage differentials, comparatively high marginal tax rates, and numerous government-financed alternatives to work, the first reaction tends to be amazement that the labor market works at all."2 Thus the standard answer to my first question is not only well known, but emphatic as well. This makes the second question - why Sweden is not worse off - all the more puzzling. Amazed as he was, Robert Flanagan concluded that Sweden "works." In an important sense, this is surely true. According to the best available measures, Sweden is one of the richer countries in the world, with a real per capita income that is as high as (or a trifle higher) than that of most nations in Western Europe. Why does it not have a lower per-capita income than countries with smaller public sectors, lower tax rates, and no comprehensive union engaged in economy-wide wage leveling? Why, for example, is per capita income in Sweden about double the level in Ireland? In the countries to which both Irish and Swedes have migrated, their incomes are similar, but there is no question that the Swedish economy is vastly more productive than the Irish. Similarly, why are per capita incomes now higher in Sweden than in Great Britain? They were for a long time much higher in Britain; in 1870 Sweden was a relatively poor country, but Britain then apparently had the highest per capita income in the world.3 Yet the situation is now reversed, even though, for a very long time, the public sector has been larger, marginal tax rates higher, and union wage-leveling incomparably stronger, in Sweden than in Great Britain. There is another country that used to have a per capita income substantially in excess of Sweden's but is now way behind. That country is Argentina. Now that Argentina lags so far behind the nations of Western Europe, it is no longer usually compared with them, even though its population is almost entirely of European descent. Yet, in the early decades of our century, Argentina apparently had one of the ten highest per-capita incomes in the entire world, and for a considerable period surely had higher standards of living than Sweden. In the period during which Argentina has fallen behind, 2 "Efficiency and Equality in Swedish Labor Markets," in Barry Bosworth and Alice Rivlin, eds., The Swedish Economy (Washington, DC: The Brookings Institution, 1987), p. 172. 3 With the exception of resource rich and recently settled Australia. 3 its redistributions of income to low-income people and its welfare state have been incomparably smaller than Sweden's; it is even questionable whether Argentina (protectionist and interventionist as its government is) should be described as having a welfare state at all. Many people around the world would say that the Swedish economy has not only outperformed the three economies that have just been discussed, but also been one of the most successful economies anywhere. That is a much stronger conclusion than is required for my argument. To be on the safe side, I take the lowest possible estimate of Sweden's economic achievements. Even then, we must ask why the Swedish economy works or survives at all, why Sweden incontestably remains one of the developed economies, and why it has unquestionably outperformed the three aforementioned comparison economies with less egalitarian policies. 
Is the Standard Answer to the First Question Wrong? Some people may suppose that the answer to the second question (why isn't Sweden worse off?) is simply that the standard answer to the first question (why isn't Sweden doing better?) is wrong: the fact that Sweden's per capita income is better than that of some countries with a smaller public sector, lower marginal tax rates, and less egalitarian unionism merely shows that individuals are not very sensitive to incentives and that the distortions in resource allocations that agitate critics of the Swedish welfare state are not, in fact, a significant problem. I will argue in this essay that the familiar answer to the first question is to a large degree correct, but that limiting and offsetting factors have not been understood. These countervailing considerations not only help to explain why Sweden has a high level of per capita income, but also suggest that a society, with the right policies, can afford a better provision for its least fortunate citizens than those who understand the importance of incentives have realized. The countervailing considerations that are the main concern of this essay can only be understood if one first appreciates the large element of truth in the familiar argument that Sweden's economic growth is slowed by its high taxes and subsidies and its often egalitarian wage policies. This familiar argument is not merely ideological rhetoric; it is generally accepted by the leading economists on the left and the right alike. The professional consensus on this point is obscured in part because many economists understandably point out that some welfare state expenditures can increase a society's income. Subsidized education and training for low-income people who (because of capital-rationing or other market failures) would not obtain it by themselves can increase a nation's income. This point may be especially pertinent in countries such as Japan and the United States, where the welfare state is relatively smaller than in Sweden but where unusually large proportions of the population nonetheless receive 4 subsidized higher education. But it does not appear, at least at first glance, that any extra spending on the development of education and skills in Sweden is the main explanation of Sweden's relatively larger welfare state or of its economic achievements. The professional consensus that very high levels of welfare state transfers reduce a country's income is also obscured because many economists favor income redistributions to the poor even though they believe such redistributions reduce income. There is no inconsistency in this position. A transfer to low-income people may increase total welfare or utility even though it reduces the measured income of the society. This is because, as is now known, the marginal utility of consumption systematically decreases with consumption4 and people with higher incomes tend to get less utility or satisfaction from marginal consumption than poorer people.5 Thus, if it is not pushed to the point where the adverse effect on incentives makes the incomes of even the poor lower than they would otherwise be, a redistribution of income can still increase human welfare even if it reduces measured per capita income. There is also a technical complexity involving "income effects" that leads most observers to underestimate the loss of social efficiency arising from high taxes. Consider a worker who is taxed to pay for transfers to those who cannot work. 
The tax reduces the reward to the worker from an additional hour of work, but it also lowers his disposable income. Just as the first or "substitution effect" gives the worker a reason to work less, so the latter or income effect implies that he can afford less leisure and this makes him work more. The latter effect at least partially offsets the former effect, and this is one reason why the number of hours individuals work does not usually change dramatically with changes in tax rates.6 Unfortunately, high tax rates are harmful to economic efficiency even in those cases where the income effect completely offsets the substitution effect so that the hours worked are unchanged. This last point is perhaps counterintuitive and even some skilled economists need to be reminded of it. The reason why there is a loss of social efficiency from taxes even when they do not change hours of work at all is, in essence, because the taxed individual, though mindful that taxes mean he can afford less leisure, takes no 4 Martin J. Bailey, Mancur Olson, and Paul Wonnacott, "The Marginal Utility of Income Does not Increase: Borrowing, Lending, and Friedman-Savage Gambles," American Economic Review, vol. 7 (3) (June 1980), pp. 372-79. 5 See my "Why Some Welfare-State Redistribution to the Poor Is a Great Idea," in Public Choice and Liberty, Essays in Honour of Gordon Tullock, ed. Charles K. Rowley (Oxford: Basil Blackwell, 1986). 6 The impact of a tax increase on hours work also depends on other things, such as the uses to which the tax proceeds are put. If the tax receipts are mainly given as transfers to the same people who pay them, or if they are used to make leisure activities more attractive, this can lead to a far greater reduction in work than otherwise. See, for example, Assar Lindbeck, 'Tax Effects vs. Budget Effects on Labor Supply," Economic Inquiry vol. XX No.4, October 1982, pp. 473-89, James Gwartney and Richard Stroup, "Labor Supply and Tax Rates: A Correction of the Record," American Economic Review, June 1983, 73, 446-51, Cecil E. Bohanon and T. Norman Van Cott, "Labor Supply and Tax Rates: 'Comment"', and Firouz Gahvari, "Comment," in American Economic Review, Vol. 76, No. l, March 1986, pp. 277-283. 5 account of the gain to those who benefit from the extra taxes he would pay if he worked more (as is shown in the footnote).7 Since the loss of efficiency from higher taxes, even when large, is not directly observable from the total number of hours of work done, it is often under-estimated. In short, the second question of why Sweden is not doing worse than it is cannot be dismissed simply by claiming that the standard answer to the first question is wrong. The large Swedish welfare system may be morally appropriate and on balance desirable, but that does not change the fact that this system, together with wage-leveling, is making Sweden's measured economic performance less good than it would otherwise be. If we leave aside some interesting technical questions that are of no special pertinence here, this conclusion does not require much more than the finding that individuals and firms respond to changes in incentives in the ways that economic theory predicts.8 Sweden's relatively high per capita income, and its far faster growth during the period it has followed the "middle way" than less egalitarian countries like Argentina, Britain, and Ireland, is an anomaly that cries out for explanation. 
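The point that a tax can impose an efficiency loss even when hours of work do not change at all is easy to miss, so here is a small numerical sketch in Python (mine, not Olson's; the Cobb-Douglas utility function and all parameter values are illustrative assumptions). With these preferences the income and substitution effects of a proportional wage tax cancel exactly, so hours worked are the same at every tax rate, yet the worker ends up with lower utility than under a lump-sum tax that raises the same revenue, which is the deadweight loss described above and in footnote 7 below.

# Illustrative sketch (not from Olson's text): with Cobb-Douglas preferences
# over consumption and leisure, a proportional wage tax leaves hours worked
# unchanged (income and substitution effects cancel), yet utility is lower
# than under a lump-sum tax raising the same revenue, i.e. there is a
# deadweight loss even though labour supply looks unresponsive.
# All parameter values below are assumptions chosen only for illustration.

ALPHA = 0.5   # weight on consumption in U = c**ALPHA * leisure**(1 - ALPHA)
W = 1.0       # pre-tax wage
T = 100.0     # time endowment (hours)

def utility(c, leisure):
    return c**ALPHA * leisure**(1 - ALPHA)

def proportional_tax(t):
    """Optimal choice under a proportional wage tax t (closed form)."""
    leisure = (1 - ALPHA) * T          # independent of t: the effects cancel
    hours = T - leisure
    c = W * (1 - t) * hours
    revenue = t * W * hours
    return hours, utility(c, leisure), revenue

def lump_sum_tax(lump):
    """Optimal choice when the same revenue is taken as a lump sum."""
    full_income = W * T - lump
    leisure = (1 - ALPHA) * full_income / W
    hours = T - leisure
    c = ALPHA * full_income
    return hours, utility(c, leisure)

for t in (0.3, 0.5, 0.7):
    hours_p, u_p, revenue = proportional_tax(t)
    hours_l, u_l = lump_sum_tax(revenue)
    print(f"t={t:.1f}: hours {hours_p:.0f} (unchanged), "
          f"U(proportional)={u_p:.2f} < U(lump-sum, same revenue)={u_l:.2f}, "
          f"hours under lump sum {hours_l:.0f}")

Running the sketch shows hours worked pinned at the same level for every tax rate under the proportional tax, while utility falls short of the equal-revenue lump-sum benchmark, which is exactly why the loss of efficiency is under-estimated when one looks only at hours worked.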
7 Suppose an individual, with preferences between money income and leisure given by the indifference curves below, is confronted with a (proportional) income tax such that his post-tax wage rate falls from W1 to W2, and that none of the proceeds of the tax receipts are devoted to the consumption of the individual who pays the taxes. In the case depicted the reduction in the individual's post-tax income reduces the amount of leisure he chooses to consume by just enough to offset the reduction in the amount of work he chooses to do because the reward to him of an extra hour of work is decreased. But the adverse effect on economic efficiency is still there. When we take into account the value to others of the extra taxes this individual would pay if he worked more, we see that the total social value of his work can be depicted as the slope of line AB, parallel to Wl, so that, if the individual had an incentive to take account of the value of his tax payments to others, he would now take only Q1 rather than Q2 hours of leisure. 8 The important qualifications in the theory of the "second best," for example, will be ignored here. 6 Are Time Lags the Answer? A comparison with the United States in recent years suggests a possible explanation. Although the American standard of consumption has remained high since 1980, it is by no means an adequate basis for judging the country's economic performance in this period. As is well known, the United States since the beginning of the Reagan administration has run large deficits in its government budget and in the current account of its balance of payments. When the debt to overseas borrowers is repaid, consumption will have to be lower than it would otherwise be. Therefore, the level of consumption leads to an overstatement of the current performance of the American economy. Surely the Swedes have not yet received the full bill for their extravagances either. As Swedish critics of the country's welfare state have pointed out, habits of behavior do not change overnight, so that when incentives change their full impacts come with lags, which are a pervasive feature of economic life. Thus there can be no doubt that lags are relevant in answering our second question. A subsidiary theme of this book will be that the shape or functional form of the relationship between tax and subsidy levels and economic performance makes the "lag" explanation more significant for Sweden than it would otherwise be. Yet, even if Sweden is overcome by an economic catastrophe before the ink is dry on this little book, we should still be puzzled about why the lags are so long. The impacts of changes in macroeconomic policies, in investment incentives, in exchange rates, and in tariff levels normally show up within a few quarters or a few years. Sweden has been famous for its "middle way" for more than half a century. Most of the lags that are evident in economics are not even a tenth as long as that. When we are comparing economic performance of different countries, it is the relative size of the public sector that is most relevant. 
If we take the period since 1951 (when national income statistics first came to be generally available) as a whole, both government consumption and total government outlays as a percentage of national income have been larger on average in Sweden than in any other country in the Organization for Economic Cooperation and Development.9 No doubt some of the effects of a general change in the pattern will show up only after generations have passed, but the main effects of most changes in economic policy show up within a few years, so we still have a scientific puzzle. 9 Total government outlays are defined as government consumption + social security transfers + subsidies + interest on public debt + gross capital formation + purchases of land and intangibles. Over much of the 1951-87 period, social security transfers have been less in Sweden than in several other Western countries. 7 Is Swedish Culture the Explanation? There is a tendency to "explain" each country's performance by referring to allegedly unique traits of its people. Every culture and every people have some obviously distinctive characteristics, so it is always easy to claim that these characteristics account for a country's performance. But after sustained examination these claims usually turn out to be pseudo-explanations. Although references to national character are as common on one side of the political spectrum as the other, my experience suggests that, where our second question is concerned, this tendency is likely to take what might be called a "conservative-chauvinistic" form. Sweden's economic performance is as good as it is, some say, because the disincentives of the uniquely large welfare state and the egalitarian LO are countervailed by Swedish or Nordic cultural tendencies to work harder, to save more, and to be more enterprising than most other peoples would be if confronted with similarly oppressive incentives. Being of Nordic descent myself, I understand that this type of argument can have an emotional appeal. But it cannot survive careful examination. How can favorable cultural traits, which surely do not change quickly, also explain the relatively low incomes in Sweden and in Scandinavia in the mid-nineteenth century and for some time before that? Those who oppose a large welfare state also need to explain how a superior national character could be consistent with what they regard as uniquely bad national decisions on this matter. And how can enduring national characteristics explain why so many countries - such as Argentina, Britain, Germany, and Japan - had extraordinarily good economic performance in some periods and extremely poor economic performance in others? All that we know about relative economic performance indicates that ad hoc cultural and racial explanations normally do not prove useful in the long run. We can be confident that we have found valid explanations only when we have parsimonious and general theories that explain performance in a wide variety of settings.10 Government Size and Economic Growth in the West The same puzzle of why, in spite of the distortion of incentives from high taxes and welfare state subsidies, Sweden still has a high per-capita income is evident, in a less extreme form, in most of the other countries with relatively large welfare states. The data on the developed democracies in general show that, in the years since World War 10 See Chapter 1, on "The Standards Satisfactory Answers Must Meet," in my Rise and Decline of Nations (New Haven, Conn. 
and London: Yale University Press, 1982), or in the Swedish translation published by Ratio in 1984. 8 II, the countries with relatively large welfare states have tended to grow about as rapidly as those with less egalitarian policies. To be sure, as later parts of this book will demonstrate, different types of government expenditures can have very different effects, so we cannot solve the puzzle simply by looking at the relative sizes of government outlays and rates of economic growth in different countries. In addition, definitive empirical findings about the relation between the size of the public sector and the rate of economic growth would require not only more data than now exist, but also a full scale econometric study rather than the merely illustrative and descriptive display of data that is offered here. Still, it is instructive to examine the data displayed in Figure 1 and in Table 1. These data reveal that, when we take all the years since 1951 to 1987 together, there is at least no clear association between the percentage of a nation's resources spent or transferred by government and its rate of economic growth. Any association there may be is not strong enough be conspicuous. In the 1950s, as Figure 2 and Table 2 reveal, there was, if anything, a faint tendency for the countries with larger welfare states to grow faster. In this decade, Sweden, though well above average in total government disbursements as a percentage of GDP, was by no means the leader. The most notable country was West Germany, which was then undergoing an economic miracle and had, at the same time, one of the very highest levels of social security transfers and also one of the highest percentages of total government disbursements. These facts must surely alert us to the likelihood that something besides the disincentives of the welfare state has a big impact on rates of economic growth. The ideological arguments that focus exclusively on the size of welfare state redistributions to the poor are, at best, incomplete. The data also reveal that there is definitely no general or lasting tendency for nations with larger public sectors to grow any faster than other countries. Indeed, the one country that has all along devoted the smallest proportion of its resources to government - Japan - grew almost as rapidly as West Germany during the 1950s, and much more rapidly than any other developed country over the postwar period as a whole. If we compare historical periods in which the roles of government were quite different, we get the same mixed picture. In the last half of the nineteenth century, almost all countries came closer to having laissez-faire policies than at any time before or since. This was also a time of unprecedented economic progress, so this period clearly supports the view that laissez-faire improves economic performance. The period from 1945 to 1970 offers quite a contrast. During these years all of the developed democracies greatly expanded their welfare states and the public sectors became far larger than ever before. Yet this was also a time of extraordinary economic growth, surpassing even the glorious last half of the nineteenth century. This period is, accordingly, evidence supporting the view that big government favors economic growth. But the pattern changed again in the 1970s and 80s: the welfare state grew still larger and at the same time economic performance turned sour. 
Thus the historical record for the developed democracies as a whole, like the comparison across the developed democracies since World War II, produces no regular pattern: in some countries and in some periods a relatively large public sector is associated with poor economic performance, but in other countries and periods we observe exactly the opposite.
Figure 1 [graph omitted; see the original PDF]
Unfortunately, the ideological and partisan debates have so obscured judgment that some scholars have nonetheless been tempted to draw unjustified conclusions from the experience of individual countries, or from data such as that presented in Figures 1 or 2, or from one or the other of the historical periods mentioned above. This type of mistake is made on both the left and the right. Some scholars of the former persuasion conclude from data or information such as I have set out that a government that aggressively intervenes in the market is favorable to, or at least not harmful to, economic growth. Some advocates on the right can find in the same sources allegedly clear evidence of the pernicious effect of a large welfare state on economic growth. Statistical results are in these situations sensitive to variations in specification and to the inclusion or exclusion of particular countries. Japan is particularly important; it is such an outlier that it can reverse the results in a regression equation. Although the data on Japan clearly favor the rightist view - that a smaller government yields faster growth - this one country's rapid postwar growth obviously could be due to other reasons, so regressions that are not statistically significant unless Japan is included are unreliable.
Table 1: Average Government Size and GDP Growth, 1951-87 (percent)
Country | Annual GDP Growth | Government consumption | Social Security Transfers | Government Expenditure | Current Disbursement | Total outlays (1960-87)
Australia | 3.96a | 14.2 | 6.6b | 20.8 | 23.8b | 29.8
Austria | 3.91 | 15.2 | 15.3 | 30.5 | 34.5 | 44.0
Belgium | 3.20d | 14.5e | 15.3e | 29.7e | 36.5e | 42.2
Canada | 4.32 | 17.1 | 8.4 | 25.5 | 31.1 | 37.1
Denmark | 3.03 | 19.4 | 11.1 | 30.5 | 35.6 | 44.3
Finland | 4.01 | 15.3 | 7.5 | 22.7 | 27.9 | 34.3
France | 4.00 | 16.3 | 17.9 | 34.2 | 37.0 | 42.7
Ireland | 2.92 | 15.3 | 9.2b | 24.5 | 33.8b | 41.8c
Italy | 4.10f | 13.4 | 12.7 | 26.1 | 33.0 | 39.8
Japan | 6.69e | 9.1f | 6.4f | 15.5f | 18.4f | 25.3g
Netherlands | 3.44 | 15.5 | 18.6h | 34.1 | 39.4 | 48.9
Norway | 3.85f | 16.4 | 11.0 | 27.5 | 35.5 | 42.5
Sweden | 3.00 | 21.5 | 12.0 | 33.5 | 39.8 | 49.3
Switzerland | 2.92 | 11.5 | 9.1 | 20.7 | 23.7 | 25.0
United Kingdom | 2.40 | 18.8 | 8.8b | 27.6 | 34.9b | 41.0c
United States | 3.24 | 18.0 | 7.5 | 25.5 | 28.7 | 32.0
West Germany | 3.84 | 16.8 | 14.2 | 30.9 | 35.2 | 42.6
Note: Government consumption = government final consumption expenditure for goods and services. Social security = social security benefits for sickness, old age, family allowances, etc. + social assistance grants and unfunded employee welfare benefits paid by general government. Government expenditure = government consumption + social security transfers. Current disbursement = government expenditure + interest on public debt + subsidies. Total outlays = current disbursement + gross capital formation + purchases of land and intangibles. All data calculated and rounded. For average annual GDP growth rates, geometric averages are used.
a. 1956-87. b. 1951-86. c. 1960-86. d. 1954-87. e. 1953-87. f. 1952-87. g. Average of 1968-87. h. Average of 1951-59 and 1968-87.
Sources: 1. OECD, OECD National Accounts Statistics, 1950-68. 2. OECD, Economic Outlook, Historical Statistics, 1989. 3.
United Nations, Yearbook of National Accounts Statistics, 1957, 1964.
Figure 2 [graph omitted; see the original PDF]
I do not, however, want to go so far as to conclude that there is no causal relationship between the role of government and economic growth. My point is rather that the foregoing data and historical facts do not, by themselves, allow us to come to any definitive conclusion yet about the size or even the direction of the influence of the welfare state on economic growth. Any statistical analysis that considers only the variables in Figures 1 and 2 and thus omits many other factors besides the role of government that affect the rate of economic growth, can do little more than raise questions. This problem of "omitted variables" and other difficulties of "specification" indicate that the results of statistical tests on only the foregoing data are likely to be spurious. To underline the point that the variables that have been presented here are not sufficient to allow a properly specified statistical test, I do not even report the results of the regressions that I have run on the foregoing data - that might obscure the mainly heuristic purpose of the foregoing tables.
Table 2: Average Government Size and GDP Growth, 1951-60 (percent)
Country | Annual GDP Growth | Government consumption | Social Security Transfers | Government Expenditure | Current disbursement
Australia | 3.85a | 10.0 | 5.2 | 15.2 | 15.8
Austria | 5.66 | 12.7 | 9.9 | 22.6 | 25.0
Belgium | 2.99b | 12.0c | 9.7c | 21.7c | 25.4c
Canada | 3.95 | 14.2 | 6.5 | 20.7 | 23.8
Denmark | 3.28 | 12.6 | 6.9 | 19.5 | 21.4
Finland | 4.94 | 12.0 | 5.5 | 17.4 | 21.2
France | 4.77 | 14.1 | 12.7 | 26.8 | 30.4
Ireland | 1.75 | 12.5 | 6.2 | 18.7 | 24.7
Italy | 5.54d | 11.9 | 9.3 | 21.2 | 24.4
Japan | 8.23c | 10.0d | 3.6d | 13.6d | 14.7d
Netherlands | 4.66 | 14.1 | 8.4 | 22.5 | 26.5
Norway | 3.25d | 13.1 | 6.6 | 19.7 | 25.3
Sweden | 3.42 | 16.5 | 7.2 | 23.6 | 26.3
Switzerland | 4.3 | 11.1 | 5.8 | 16.8 | 19.4
United Kingdom | 2.78 | 17.4 | 5.6 | 23.1 | 29.5
United States | 3.23 | 18.2 | 4.1 | 22.3 | 24.3
West Germany | 8.47 | 13.7 | 12.2 | 25.9 | 28.4
Note: For definitions and sources, see Table 1. a. 1956-60. b. 1954-60. c. 1953-60. d. 1952-60.
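To make concrete the claim that these figures show no clear association - and that Japan alone can tip a regression - here is a small illustrative sketch in Python (mine, not Olson's). It computes the simple correlation between the "Total outlays" and "Annual GDP Growth" columns transcribed from Table 1 above, with and without Japan; on these rounded figures the correlation comes out at roughly -0.4 with Japan included and close to zero once Japan is dropped.

# Illustrative sketch (not from Olson's text): correlation between total
# government outlays and GDP growth from Table 1, with and without Japan.
# Figures are the rounded published averages; footnote letters dropped.
from statistics import correlation  # Python 3.10+

table1 = {
    "Australia": (3.96, 29.8), "Austria": (3.91, 44.0), "Belgium": (3.20, 42.2),
    "Canada": (4.32, 37.1), "Denmark": (3.03, 44.3), "Finland": (4.01, 34.3),
    "France": (4.00, 42.7), "Ireland": (2.92, 41.8), "Italy": (4.10, 39.8),
    "Japan": (6.69, 25.3), "Netherlands": (3.44, 48.9), "Norway": (3.85, 42.5),
    "Sweden": (3.00, 49.3), "Switzerland": (2.92, 25.0),
    "United Kingdom": (2.40, 41.0), "United States": (3.24, 32.0),
    "West Germany": (3.84, 42.6),
}

def corr(countries):
    """Pearson correlation of total outlays against annual GDP growth."""
    growth = [table1[c][0] for c in countries]
    outlays = [table1[c][1] for c in countries]
    return correlation(outlays, growth)

everyone = list(table1)
print("with Japan:   ", round(corr(everyone), 2))
print("without Japan:", round(corr([c for c in everyone if c != "Japan"]), 2))

Whatever the exact values, the qualitative point of the sketch is the one made in the text: any association in these aggregates is weak and driven largely by a single outlier, so it cannot settle the ideological question either way.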
When different specifications lead to exactly opposing results, we may suspect that there is really no compelling pattern in the data. That is also what we find from a glance at the data I have displayed in Figure 3, which compare government size and growth for 121 developing and developed non-communist countries. The data in the three Figures in this chapter, the conflicting conclusions of the prior studies of government size and economic growth, and the broad historical facts that have been set out here are nonetheless suggestive. Since we know from the observation of the behavior of individuals that high levels of welfare spending and taxation have adverse effects on incentives, why doesn't the size of the welfare state show up in comparisons a cross countries and historical periods? This question underlines the need for an answer to my second question about why Sweden, in spite of the large distortions of incentives due to the huge welfare state and to LO wage-leveling policies, is not worse off; the puzzle about Sweden is also evident, in a less extreme form, in the comparisons of different countries and historical periods. Similarly, we know that all the Soviet-type economies - the ones where, in principle, the state runs everything - have failed miserably. Though it is wildly wrong to suppose that large welfare states are inherently semi-Soviet, the collapse of the Soviet-type societies nonetheless does raise our second question in a more casual and perhaps intuitive way. Why isn't the Sweden of the "middle way" - the society with the relatively largest public sector in the noncommunist world - also "mid-way" in per capita income between the Soviet-type 11 "Government Expenditure and Economic Growth: A Cross-country Study," Southern Economic Journal (1983), pp. 783-92. 12 "Dependency, Government Revenue, and Economic Growth, 1955-70," Studies in Comparative Institutional Development, 12 (Summer 1977), pp. 3-28. 13 "Government Size and Economic Growth: A New Framework and Some Evidence from Cross- Section and Time-Series Data," American Economic Review 76 (1986), pp. 191-203. 14 countries and the developed economies with the relatively smallest public sectors? Figure 3 I shall try in later chapters to provide a conceptual framework that helps resolve these puzzles. The foregoing data and historical facts have helped us see that we must search for some factors that both of the familiar ideologies omit. They have also shown us that the effects of a larger welfare state on economic growth, important as they must be, are by no means sufficient to overwhelm other factors. The Route to an Answer to the Second Question The next task is to contrast the inconclusive aggregate data about the welfare state and economic growth with some other salient facts about economic performance, and especially about international trade in manufactures by small countries such as Sweden. As we shall see, the other facts form a strong, clear pattern. This striking pattern suggests that competitive markets open to international competition are the main source of economic dynamism and that protectionism and most other forms of government regulation in such markets have monstrously harmful impacts on economic growth. The striking pattern we shall find in the next chapter stands in sharp contrast to the aggregate facts on the size of government expenditures and transfers and economic growth that we have seen in this chapter, both for Sweden and for the rest of the world. 
15 The contrast between these dramatic results and the ambiguous aggregate data on the size of the welfare state and economic growth will, as we shall see, help us discover some new ideas. With the aid of these ideas, we will be able to find an answer to our second question, about why Sweden, given the undoubted distortions in incentives from its uniquely large welfare state, is not performing worse than it is. Chapter 2: International Trade, Competitive Markets, and Economic Growth Serendipitously, the unfolding of history in recent years has produced a surprising number of unintended or "natural" experiments that have generated the same kind of solid information about how the world works that we have been accustomed to getting from controlled experiments in the physical sciences. These inadvertent experiments have, for some reason, been brought to light only recently. The truths they reveal will (in conjunction with well-established results from more familiar sources) take us much of the way toward an answer to the question of why Sweden is as well off as it is. The natural experiments involve international trade in manufactures by smaller countries and major alterations in the size of the countries or other jurisdictions that determine trade policies. I shall consider the lucky natural experiments carefully and then go on to discuss briefly how the findings from these experiments fit in with the more familiar sources of insight. To appreciate these experiments, we must first note that data on international trade in manufactured goods can yield insights that do not emerge so clearly from the study of trade in primary products and services. Data on trade in manufactures are especially instructive because manufacturing is usually less dependent on natural resource endowments, and therefore a bit more sensitive to economic institutions and policies, than extractive industry is. Saudi Arabia and Iran export a great deal of oil, but this does not tell us very much about what policies or institutions these countries have nor offer a sound basis for judging the efficiency of resource allocation in them; the oil exports of these countries tell us more about their geology than about their economic and political systems. In the extractive industries generally, and even to a considerable extent in agriculture, the pattern of production and international trade is quite sensitive to the natural resource or climatic endowments of a country. The raw materials needed for manufactures can, by contrast, 17 be imported. Naturally, this often entails extra costs, but since transportation costs decrease as technology advances, the influence of endowments of natural resources on manufacturing is becoming smaller over time. Although the location of service industries is often even less restricted by natural resources than manufacturing, the statistics and other information on services are poorer for services than for manufacturing. The pattern of exports of manufactures can accordingly often tell us more about what types of economic arrangements or systems are effective for economic development and growth than can other types of trade. A thought-experiment will make it clear why data on the patterns of trade and the levels of protectionism of smaller countries is especially illuminating. Imagine a country so large that it was all the world except for Luxembourg. Suppose that this colossal country bad prohibitive tariffs on trade with the rest of the world, namely Luxembourg. 
Obviously, this hypothetical country could not be affected that much by its tariffs against Luxembourg, because even without any protectionism most of what it purchased would in any event have been purchased internally. The protection, in other words, would have affected only a relatively small number of markets to a minor degree and thus could not have had any great consequence for our gigantic hypothetical country. I therefore conclude that in looking at the effects of protection, it is essential to consider the size of the jurisdiction that has the protection and to note that the biggest economies, like the United States and Japan, are not affected by protection nearly as much as smaller countries are. This consideration also suggests that previous studies of protectionism have given relatively too much attention to the height of tariffs and other forms of protection and too little attention to their mileage or length - to the extent to which they confine trade. In other words, too little attention has been given to the size of the jurisdictions with protection; if there are many small jurisdictions with protection, the total length and impact of protective barriers will be very great, but if the only protective barriers are those that surround a few huge countries or common markets, the impact of protection will be limited. A focus on manufacturing in smaller countries will not only generate some special insights, but also tell us something about certain recent developments in the theory of international trade. These recent developments appear to qualify the presumption from economic theory in favor of free trade, and to suggest that, when there is imperfect competition and decreasing costs, a country can sometimes increase its welfare with tariff protection. It is in manufacturing industries in smaller countries that decreasing costs and imperfect competition will usually be most striking, so our results should also help us test the practical applicability of recent developments in international trade. If one goes outside the economics profession, there is, of course, often no presumption in favor of free trade at all. Many people suppose that successful manufacturing usually requires protection. It is often argued, for example, that the protection of infant industries will in due course give a country a comparative advantage in manufacturing that it would not otherwise have had, and that the country may profit 18 significantly from taking advantage of the new pattern of comparative advantage that protection has given it. Certainly, comparative advantage is not something that is given and static, but something that is made or achieved. One way to develop a competitive manufacturing industry, it is often said, is to protect this sector so that there will be learning-by-doing which will eventually enable the industry to become competitive. The data presented in this book will make it possible to obtain a powerful test of these familiar ideas. Some Surprisingly Strong Statistical Relationships In a previous paper, I was able to present calculations,1 which I shall re-use here, of the percentages of the manufacturing output of various smaller countries that are exported. In other words, for all those small and medium-sized countries on which we found the needed data, I had the gross value of each country's manufacturing exports divided by the gross value of its manufacturing output. 
(I would have liked to divide the value added in manufactured exports by the value added in manufacturing in a country, but we did not find the data needed to do this.) So what is presented in Table 3 is the gross value of manufacturing exports divided by the gross value of manufacturing output. This provides, of course, the percentage of a country's manufacturing that it succeeds in exporting. To ensure that the water is not muddied by countries so large that most of their trade would be internal trade even without any protection, I have excluded Italy and all larger developed industrial countries, and above all Japan and the United States. If countries are undeveloped and thus small in industrial terms, they are included no matter how large their populations might be. For most years, I do not have all of the needed data on the less developed countries. Fortunately, Bela Balassa and his associates at the World Bank made the needed estimates and calculations for various less developed countries for 1973. Thus Table 3 includes almost all small or medium-sized developed democratic countries and those developing countries for which we have the Balassa data for 1973. Table 4 contains the indexes of levels of tariffs on manufactured goods in the developed democracies that I had previously published in The Rise and Decline of Nations.

There is a striking pattern: if the countries have high levels of protection on manufacturing, they export very little of their manufactures. Argentina, a country that is extraordinarily protectionist (and whose economic performance during the last half century, as we have noted, makes the growth of the Swedish economy look awesomely good by comparison) exports only about 2.5 percent of its manufactures. (I have rounded all numbers to the nearest 2.5 percent to underline the shortcomings of the data and the approximate character of the calculations.)

1. Done for me by Kim Chohan, Alfred Forline, Michael Kendix, and Young Park.

Table 3: Comparisons of Export Percentages for Small, Medium-sized and Developing Countries, 1973

                 Manufactured exports/   Exports of "true"        Exports of manufactures and
                 manufactured output     manufactures(b)/         processed primary products(c)/
                                         total exports            total exports
Argentina              2.5*                   17                        66
Australia(a)           7.5*                   11                        57
Austria               32.5                    53                        97
Brazil                 5.0*                   16                        54
Canada                20.0                    36                        72
Chile                  2.5*                    1                        86
Colombia               7.5*                   12                        31
Denmark               42.5                    42                        90
Finland               27.5                    30                        97
Greece                12.5                    22                        71
India                  7.5                    44                        62
Ireland               37.5                    36                        83
Israel                15.0*                   27                        47
Korea                 40.0*                   64                        93
Mexico                 5.0*                   30                        64
Netherlands           45.0                    33                        85
New Zealand            5.0                    14                        80
Norway                35.0                    40                        91
Portugal              27.5                    48                        89
Singapore             42.5                    37                        76
Spain                 16.0                    43                        85
Sweden                37.5                    52                        95
Taiwan                50.0*                   na                        85
Turkey                 2.5*                   13                        34
Yugoslavia            17.5*                   47                        91

(a) Average of 1972, 1973 and 1974.
(b) Manufactured exports include International Standard Industrial Classification sub-categories 32 (textiles), 38 (metal manufactures) and 39 (other manufactures).
(c) Manufactured exports include all processed primary products that are classified as manufactures in United Nations statistics.
Sources: For column 1, United Nations statistics, except for the asterisked figures, which were obtained from Bela Balassa of The World Bank, Washington, D.C.; for column 2, Yearbook of International Trade Statistics, United Nations, New York, for 1978 and 1979; and for column 3, the same two United Nations yearbooks plus Economic Daily News, Taipei, and Economic Yearbook of the Republic of China 1980, Taipei.

So it was with other highly protectionist countries in 1973.
Chile exported only 2.5 percent of its manufactures; Colombia only 7.5 percent; Greece only 12.5 percent; India only 7.5 percent; Mexico only 5 percent; Turkey only 2.5 percent. Brazil (a questionable inclusion because of its large size) exported only 5 percent of its manufactures. Table 4 indicates that, of the developed democratic countries, New Zealand is the most protectionist; it exported only 5 percent of its manufacturing production. Australia is the second most protectionist on manufactures of the developed democracies, and it exported only about 7.5 percent of its manufactured output. Many countries allocate a great deal of labor and other resources to manufacturing, but can sell only a small percentage of their manufactured output in the competitive world market.

Let us now look at countries of similar industrial size with relatively open policies. Austria, which is a member of the European Free Trade Association and has relatively low tariffs, exports about a third of its manufactures. Denmark, a country singularly lacking in natural resources for manufacturing, nonetheless exports 42.5 percent of its manufactured output. Korea, with relatively open policies on manufacturing by the standards of developing countries, exports 40 percent of its production of manufactures. Similarly, the other countries with relatively little industrial protection export a large part of their manufactures: the Netherlands, 45 percent; Norway, 35 percent; Portugal, 27.5 percent; Singapore, 42.5 percent; Taiwan, 50 percent. As might be expected from its relatively low level of protection of manufactures, Sweden exported a healthy 37.5 percent of its manufactures in 1973. Most significantly, for the small and medium-sized countries on which I have succeeded in obtaining data, there is not a single exception to the rule that the countries that protect manufactures least export manufactures most. Note how this strong finding contrasts with the lack of any clear pattern in any direction in the data on the size of the welfare state and economic growth.

At this point, a specialist in international economics, or any economist who remembers to think of the tendency toward general equilibrium of the economy as a whole, may say there is an obvious explanation of the foregoing results. The country that has high levels of protection for manufactures may export little of its manufactures simply because protection that reduces imports also reduces the amount of that country's currency that is supplied to buy foreign exchange, so the protection tends to raise the value of the country's currency and thus reduce its exports. Over the long run the imports and exports of a country tend to balance, so countries that don't import much also won't export much. If protection of all kinds and exports of all kinds were at issue, this argument could explain the foregoing results. But it is only the protection and trade of industrial products that has been considered. Countries such as Austria, Switzerland, the Common Market nations, and the Scandinavian countries - Sweden included - are exceptionally protectionist where agriculture is concerned. Yet, relatively speaking, they are not very protectionist in manufacturing.

Table 4: Average Levels of Industrial Tariffs

Column groups (each given as "1976 average / final(f) average"):
  I.   No trade weighting: a simple average(a)
  II.  Own-country import weighting(b)
  III. World weights(c): import weights on BTN aggregates(d)
  IV.  World weights(c): import weights on each BTN commodity(e)

                               I              II             III            IV
Australia     Dutiable(g)   28.8 / 28.0    29.1 / 28.1    27.8 / 26.7    26.4 / 25.2
              Total(h)      16.9 / 16.5    15.4 / 15.1    13.3 / 12.8    13.0 / 12.6
New Zealand   Dutiable      31.4 / 28.3    28.6 / 25.5    33.0 / 30.4    30.2 / 27.5
              Total         24.3 / 21.9    19.7 / 17.6    20.5 / 18.7    18.0 / 16.3
EEC           Dutiable       8.8 /  6.0     9.8 /  7.2     9.5 /  7.0     9.6 /  7.1
              Total          8.0 /  5.5     6.3 /  4.6     7.0 /  5.2     6.9 /  5.1
United States Dutiable      15.6 /  9.2     8.3 /  5.7     9.2 /  5.5     7.6 /  4.8
              Total         14.8 /  8.8     6.2 /  4.3     7.1 /  4.1     5.6 /  3.5
Japan(i)      Dutiable       8.1 /  6.2     6.9 /  4.9     8.0 /  5.7     7.9 /  5.5
              Total          7.3 /  5.6     3.2 /  2.3     6.1 /  4.4     5.8 /  4.1
Canada        Dutiable      13.7 /  7.8    13.1 /  8.9    12.0 /  7.3    12.9 /  8.3
              Total         12.0 /  6.8    10.1 /  6.8     8.9 /  5.5     9.4 /  6.1
Austria       Dutiable      14.2 /  9.8    18.8 / 14.5    15.9 / 12.0    17.0 / 13.3
              Total         11.6 /  8.1    14.5 / 11.2    10.5 /  7.9    10.9 /  8.5
Finland       Dutiable      17.0 / 14.6    11.6 /  9.2    11.2 /  9.0    11.5 /  9.1
              Total         14.3 / 12.3     8.2 /  6.5     6.7 /  5.3     6.7 /  5.3
Norway        Dutiable      11.1 /  8.2    10.5 /  8.0    10.2 /  7.4    10.0 /  7.5
              Total          8.5 /  6.3     6.4 /  4.9     5.8 /  4.3     5.8 /  4.4
Sweden        Dutiable       7.8 /  6.1     7.7 /  5.9     7.4 /  5.3     7.1 /  5.2
              Total          6.2 /  4.9     6.3 /  4.8     4.6 /  3.3     4.5 /  3.3
Switzerland   Dutiable       3.7 /  2.7     4.1 /  3.3     4.2 /  3.1     4.0 /  3.1
              Total          3.7 /  2.7     4.0 /  3.2     3.3 /  2.4     3.2 /  2.4

Notes: (a) An average of tariff levels on the assumption that all commodities are of equal significance; (b) the relative weight attributed to each tariff is given by the imports of that commodity by that country; (c) the significance of each tariff is determined by world imports of the commodity, or aggregate of commodities, to which the tariff applies; world imports are the imports of the countries listed and the EEC. For notes (d) through (i) and the sources, see The Rise and Decline of Nations.

The high protection of agriculture in these countries raises the value of their currencies and reduces the extent of their exports of manufactures. They nonetheless export a large percentage of their manufactures. Similarly, many of the countries with extraordinarily high levels of industrial protection, such as Argentina, nonetheless export a fair amount of primary products.

To obtain a more general test of whether something besides exchange-rate or "general equilibrium" effects is operating, I turn now to some calculations done partly for the Holger Crafoord lecture in Lund out of which this essay grows. These calculations are in columns 2 and 3 of Table 3. These columns provide alternative measures of the proportion of a country's exports that are manufactured or "processed" products. Though any statistical segregation of manufactures here is arbitrary, the middle column is probably the better measure of "true" manufactures. (Fortunately, the results are probably not very sensitive to the definition of manufactures as the two columns are positively correlated.) These data show that there is a distinct (though not a very strong) tendency for the countries with high levels of protection of manufactures to have a relatively low percentage of exports that are manufactures. This suggests that the protection of manufactures may well discourage efficiency disproportionately in the manufacturing sector.2

There is still further evidence that the failure of those small and medium-sized countries that lavishly protect manufactures to develop profitable manufacturing export industries is not due only to exchange rate effects. A country that changes from relatively open policies to high protection of manufactures may actually reduce the rate of growth of manufacturing output for domestic as well as international use.
For example, after 1930 and especially under the regime of Juan Peron, Argentina increased its protection of manufactures to a colossal level and systematically exploited its agricultural export industries. As an authoritative study concludes, "the most ironic lesson of postwar Argentine experience is that if there had been less discrimination against exports, manufacturing expansion would have been greater. Indeed, the annual growth rate of manufacturing during 1900-29 (5.6 percent) was higher than during 1929-65 (3.7 percent)."3

2. One other factor probably helps to explain the limited proportion of exports that are manufactures in countries that are very protective of manufacturing. The supply curves of many primary product industries may be relatively inelastic, so some of the enterprises in these industries will be able to produce some output at modest costs even when the country's institutions are not efficient. Countries with some exceptionally good mines or oil wells may be expected to export some of the yield of their natural resources even if the whole economy is badly organized. The Soviet Union fails to sell much in the way of manufactured goods in free foreign markets and has even lost the large agricultural exports it had in czarist times; it does nonetheless export relatively large amounts of the production of its mines and wells and thus may illustrate this point. (I am thankful to Christopher Clague for calling this point to my attention.)

3. Carlos Diaz Alejandro, Essays on the Economic History of the Argentine Republic (New Haven & London: Yale University Press, 1970), p. 138; see also pp. 126, 139-40, 252, 259-60, and 271-72.

The Historical Relationship Is Also Strong

I have argued above that protection has a much greater impact in smaller countries than large ones, and we found in the data on smaller countries a strong relationship between relative openness to imports of manufactures and success in manufacturing. We can corroborate or refute the foregoing results by looking at the historical evidence on the consequences of great increases in the size of countries or jurisdictions with trade barriers. If protectionism has a much greater impact on smaller jurisdictions, and if this impact is overwhelmingly harmful, then we should expect that dramatic increases in the size of a protectionist jurisdiction would greatly reduce the damage done by the protection. This is a question that I examined in The Rise and Decline of Nations, which offers theoretical reasons why a sudden and substantial increase in the size of a protectionist jurisdiction should stimulate rapid economic development. To facilitate a comparison of the historical evidence on the impact of protectionism with the cross-country evidence that has just been presented, I casually summarize here my evidence from Rise and Decline on whether the great periods of freeing of trade - by the method of increasing the size of the jurisdiction with protection - have in fact been associated with rapid economic development.

Though the explicit efforts to reduce tariffs and quotas have received more attention, it appears that the quantitatively most important freeing up of trade has, in fact, occurred when larger jurisdictions have been created and the mileage or length of protection thereby reduced. The most notable reduction in the length of tariff barriers in recent times was brought about by the creation of the Common Market in Europe through the Treaty of Rome in 1957.
What happened through the Common Market has happened many times in history, usually through national unification that, often inadvertently, freed trade by creating a far larger market in which, even if there was high protection around the newly unified country, there were no barriers to internal trade. In the 1830s in Germany, for example, a Zollverein or customs union was created, and gradually extended and deepened, until it culminated in the German Reich that was completed in 1871. It is interesting that most of the German-speaking areas of Europe were relatively poor in the period before the Zollverein and the German Reich were created. In the eighteenth and early nineteenth centuries, Germany was far poorer than Britain and the Netherlands, and probably also had a distinctly lower income than France. Nonetheless, in the second half of the nineteenth century and in the years up to World War I, the German economy grew at an extraordinary rate, so that by World War I Germany was undoubtedly one of the greatest industrial powers of the world. I call phenomena such as the creation of the Common Market and German unification examples of "jurisdictional integration": such integration occurs whenever a much bigger jurisdiction is created that has internal free trade.

Japan offers another example of jurisdictional integration. Before the Meiji Restoration of 1867-68, Japan was divided into nearly three hundred separate feudal domains, each under its own feudal lord or "daimyo." Normally each of these domains had high levels of protection, limiting trade from that jurisdiction to other parts of the Japanese Archipelago. To the extent the Shogunate had some control over the whole of Japan, it used that control in part to make Japan as a whole virtually autarchic with respect to the rest of the world, limiting trade and factor mobility with the outside world to a negligible level; even travel abroad was punishable by death. The Meiji Restoration (or revolution) of 1867-68 created a free trade area within Japan. It eliminated the separate feudal jurisdictions and thus also the trade restrictions that went with them. At about the same time, a group of Western powers forced on Japan the "humiliating treaties." These treaties are described as "humiliating" because the Japanese were too weak to prevent their imposition. One of these treaties prohibited Japan from having any protective tariffs; for fifty years the country could have nothing more than tariffs for revenue only at rates of 5 percent or less. Because of the jurisdictional integration plus the "humiliating treaties," Japan experienced an increase in freedom of trade. Japan was a poor and underdeveloped country before this process occurred. Some Western observers believed that the Japanese would never be able to manage modern economic life. Yet, not long after the Meiji Restoration, Japan began to grow very rapidly. One symptom of that growth, besides the evidence from the statistics, is that by 1904-05 Japan was already powerful enough to defeat Russia in a war.

At the end of the eighteenth century there was another example of jurisdictional integration - the United States. At the time of the Declaration of Independence in 1776, and for several years after, the thirteen ex-colonies were virtually independent countries. The U.S.
government was created only in 1789, when the U.S. Constitution went into effect. The Constitution outlawed the tariffs that some states, such as New York, had imposed against imports from other states. So the United States then became a substantial market in which, internally, there has been free trade to this day. To be sure, through most of the nineteenth century and until the 1930s, the United States was a highly protectionist country. Nonetheless, because of the absence of tariffs by states and the great growth of the U.S. over the nineteenth century, the U.S. has enjoyed a large and growing unrestricted internal market.

We see much the same phenomenon when we go back to Holland in the seventeenth century. When the United Provinces rebelled against Spain, they created an area in which, generally speaking, there was internal free trade. Although the Netherlands was not large by the standards of countries today, by the feudal standards of the time it was reasonably substantial. Moreover, its location and flat topography - much of it below sea level - meant that it was uniquely suited to canals, so that it had an exceptionally large area that was accessible to water-borne transportation. Before long, Holland entered its "Golden Age" and became the world's leader in economic development.

If we go back still further to the end of the Middle Ages, we find that the first country in Europe to establish true unification was England, or more precisely, England and Wales. By the sixteenth century, the parochial feudal system had been all but abolished in England. Some time later Scotland was conquered and all of Great Britain was essentially one free market. The semiautonomous towns and feudal fiefs with their separate trade restrictions were made part of a unified Britain. Though the textbooks call this a mercantilistic period and correctly emphasize that there were high national tariffs, there was nevertheless a great freeing of trade because of the increase in the size of the jurisdictions that restricted trade. This was also the period of the Commercial Revolution and of substantial economic progress, which was soon to be interrupted by the English civil wars of the seventeenth century. After this unstable and revolutionary period ended, Britain became the location for the epochal economic progress of the Industrial Revolution.

Thus in several periods of history protection has been dramatically reduced simply because a big market replaced many small protected markets. Even though the big markets were sometimes highly protected, there was a great freeing of trade, which was followed in every case by rapid economic development.

Wider Evidence on Free Trade, Competitive Markets, and Growth

I have emphasized the foregoing evidence on the value of wide and unprotected markets as seedbeds of economic growth because it is a new and different type of evidence. But more familiar types of evidence point in the same direction. This is not the place to go over the massive literature on international trade, but it may be useful to refer briefly to the discovery in recent years, by leading specialists on international trade and economic development, that trade policy has incomparably more importance for the growth of the developing countries than economists previously realized.
As I see it, the discovery arose because most of the less-developed countries have chosen levels of protection of manufactures that are vastly higher - often ten or twenty times higher - than those in the developed democracies, while a minority of the developing societies have, by contrast, turned to "outward- looking" policies. The performance of these less protectionist societies has been incomparably better than that 26 of the societies with "inward looking" policies. This is evident not only from many careful studies of selected sets of countries, but also from the systematic examination of essentially all of the developing countries on which there is usable data. In a study summarized in the World Development Report for 1987,4 the World Bank examined 41 developing countries, which it classified according to the extent they were protectionist and inward looking, on the one hand, or approached relative neutrality in their treatment of imports and exports, on the other. Notwithstanding the great importance of other factors, the less protectionist or more outward looking countries grew far more rapidly than the more inward looking, and usually did better by other measures of economic performance as well. As is well known, Hong Kong, Korea, and Singapore were distinguished both by their lesser use of protection and also by their rapid economic growth, but there was a similar if less marked variation in the degree of protection and in economic performance across the whole set of countries. Though it has not achieved anything like the per capita income of the three countries just named, Korea is nonetheless classified with the "gang of four" most successful developing countries. Some observers of Korea argue that it has had significant levels of protection and also that the country has by no means had a policy of laissez faire. There is evidence for the latter argument, but the high level of both imports and exports in Korea makes it clear that it has been, at least by the standards of most developing countries, definitely an outward-looking and relatively open country. In any event, a change in the classification of only one or a few countries would by no means eliminate the strong association between an outward orientation and faster growth found in data analyzed by the World Bank. Thus the evidence that the less protectionist developing countries have tended to have far better economic performance than more protectionist countries cannot be dismissed. Another kind of evidence that is worth singling out is the record of the firms and industries in advanced countries that have been especially impressive in international competition. Michael E. Porter and his many associates have done a large scale, ten- nation study of many such firms and industries and the results have been published in Porter's book, The Competitive Advantage of Nations.5 That book examines an almost endless number of specific cases with a lot of convincing detail showing that the winning firms and industries in international competition have systematically been bred in environments in which there was vigorous domestic as well as international competition. In contrast, protectionism, cartelization, and subsidization have systematically failed to produce internationally impressive firms and industries. (An apparently disproportionate number of Porter's examples of successful firms and industries are Swedish, and we shall later see that this is not surprising in the light of my argument here.) 
The successful firms and industries are also regularly nourished and driven by what Porter calls a "cluster" of symbiotic and motivating activities - by a wide array of competitive suppliers of inputs in the form of intermediate goods, by labor with the necessary specialized skills, by pertinent and high-quality research and education, and by demanding consumers with good alternatives. The combination of vigorous competition and unrestricted access to a vast variety of inputs generates the continued innovation and ever-increasing efficiency that are required for international competitive advantage.6 I conclude that there is no way that a small country with a high level of protection could have either the vigorous domestic competition or the symbiotic competitive cluster of activities and readily available inputs that are needed for success in international competition. Thus Porter's book provides yet another kind of evidence that uninhibited trade and large markets are decisively important for economic progress.

4. Oxford University Press, 1987; see especially chapter 5, pp. 78-94.
5. New York: The Free Press, 1990.
6. Independently, Christopher Clague has found by econometric methods that the manufactured exports of the less developed - and on average vastly more protectionist - countries are, disproportionately, relatively self-contained products that can be produced without unrestricted access to a wide variety of intermediate goods and other inputs. See Christopher Clague, "Relative Efficiency, Self-Containment and Comparative Costs of Less-Developed Countries," Economic Development and Cultural Change, forthcoming.

Still other kinds of evidence from all over the world (including eastern Europe) point in the same direction, but the time has come for us to relate the strong findings in this chapter to the ambiguous results in Chapter 1.

Why Does Protectionism Hurt Growth More than Welfare Does?

There is a puzzling contrast between the strong statistical and historical patterns suggesting that protectionism is extraordinarily damaging, especially in smaller countries, and the lack of any clear pattern in the raw facts on the size of the welfare state and economic growth. Why does government intervention in markets that cross international borders have such dramatic and easily demonstrated effects on economic performance, when the impact of the size of the welfare state on the rate of economic growth is difficult to discern in the aggregate data? As we shall see, some forms of intervention in markets do indeed usually have far larger impacts on economic performance than others, in part because the impacts of some types of intervention are fairly closely monitored and thus limited, whereas others are not. The next chapter will distinguish two different types of redistributions of income that usually have quite different impacts on economic development. The type of redistribution of income that is more damaging to economic performance also turns out to be more complex and less conspicuous. Sweden has uniquely high levels of the more conspicuous but less costly type of redistribution, but it does not appear to have unusually high levels of the less conspicuous but more damaging type of redistribution. I shall attempt to show in this book that that is the single most important reason why Sweden is not doing worse than it is.
Chapter 3: Explicit, Implicit, and Efficient Redistribution Most people use the phrase "income redistribution" to refer to transfers to relatively low- income people through social insurance or welfare-state programs explicitly designed to reduce the inequality of the distribution of income. But many other kinds of governmental intervention also change the distribution of income, as does collective action of groups of firms or workers in the marketplace. The tariffs, import quotas, and other protectionist measures considered in the last chapter undoubtedly change the distribution of income - they change the prices of some products and thus also the returns to the firms and the owners of the labor and other resources that produce these products. Public subsidies, price supports, and tax loopholes obviously also change the distribution of income. Even many public policies that have little or no significance in the budget of the government - such as regulations that restrict entry, limit competition, fix prices, or mandate benefits to employees and others - change the distribution of income. Similarly, whenever firms or workers successfully combine, whether through explicit cartelization or tacit collusion, to change prices or wages this again changes the distribution of income. "Implicit" and "Explicit" Redistributions It is useful to label the money and the services that governments openly transfer or provide to low-income people, simply because they are deemed morally to deserve or need the assistance, as explicit redistributions. By contrast, when a society is persuaded to choose a policy mainly for some reason other than its redistributive effect - as is the case when a society protects manufacturers against competing imports, or regulates prices or competition in some industry, because it is persuaded that will further the 29 development or strength of the country - that redistribution will here be called an implicit redistribution of income. In keeping with standard usage in economics, I define the social cost of a redistribution, whether it is explicit or implicit, as only the "deadweight loss" or "excess burden," that is, the reduction in the national income from the transfer. If one group is taxed to finance transfers to another, the social cost is the reduction in the income of the society that results from any impairment of the incentives to work, to save, to innovate, and to allocate resources to their most productive uses, plus the costs incurred in administering the transfer. The amount that the taxpayers transfer, though obviously a matter of interest to them, is not a loss or cost to the society since the recipients of the transfer are also a part of the society. This definition of social cost is standard in economics and is obviously also the one that is relevant when explaining the rate of economic growth or the level of per capita income in a society.1 Of the many of redistributional activities, both explicit and implicit, that take place, which have the greatest social cost for each dollar or crown redistributed? Strangely, this subject, important as it is, has been so badly neglected that we must work out much of the analysis from the start. The Criteria for Redistributions Generate the Social Costs In economic theory, there is a familiar answer to the question of what type of redistributions has the least social cost: the "lump-sum" transfer, by definition, has no impact on incentives and thus no social loss whatever associated with it. 
The concept of the lump-sum transfer is a useful simplifying device in theoretical discussions, but all sides agree that no society could in practice have a continuing program of lump-sum transfers. The continuing transfers would be taken into account in behavior and thus would affect the incentives and the income of the society. So the practical question is, what types of systematic transfers that are actually possible will come closest to the lump-sum ideal? Although there will inevitably be some excess burden from the taxes that raise the money for a transfer, the deadweight loss can usually be minimized by an unconditional cash transfer. If the recipients of the transfer are not required to do or not do anything to receive the transfer, the transfer will not have any effect on the incentives that they face. 1 If any expenditure, whether by the private or the public sector, actually increases the national income, then I will classify it here as an investment rather than a redistribution, even if the increase in income does not spread evenly throughout the society. Thus a public investment in the skills of the poor (or, for that matter, a program for the rich) that did in fact increase the national income would not be considered a redistribution, no matter how much this shifted the distribution in favor of the poor (or in favor of the rich). Our problem will become intractable unless we distinguish redistributional activities from investments and changes in the supplies of productive factors. 30 If the transfer is in the form of cash, the recipient can use it as he or she pleases, so there is also no distortion of the consumption choices of recipients. The great practical force of this point is evident if we consider an uneconomic industry. Assume that, say, the shipbuilding industry has lost its comparative advantage, so that the costs of operating the shipyards are greater than the revenues that can be obtained and that in the absence of any redistribution to the industry they would be closed down. Let us stipulate that there is no external economy to justify any subsidy on grounds of social efficiency and that any aid or protection for the industry arises only because of the political pressure applied by the shipbuilding companies and workers to obtain protection or aid for themselves. Suppose the organized shipbuilding companies and the workers through lobbying obtain a ban on the purchase of foreign ships. Protectionism benefits the firms and workers who sought it only as long as they remain in the industry. If the criteria for getting any kind of government aid is (as it usually is) that the firms and the workers continue to remain employed in the industry, then the recipients will have to allocate their time and other resources to shipbuilding to qualify for the redistribution. Keeping the losing industry going entails that the resources devoted to it produce output of less value to the society than they would have produced in the most attractive employment in unsubsidized sectors of the society. That is, these resources are partly wasted and there will therefore be a cost to the society beyond that of the taxes or other direct costs of the redistribution. This argument applies not only to bailouts and aid to declining industries, but also to redistributions to growing industries. 
If there is, say, a redistributive tariff or subsidy that raises the price in a thriving industry or activity, then other resources will move to take advantage of the higher price and these entering resources will then usually produce less value to society than before. This migration will cease only when the marginal private return is the same in the favored activity as elsewhere, but then the return to society at the margin will be less than the return to identical resources in unsubsidized areas. The migration to subsidized areas also means that some of the redistribution will, from the point of view of those who sought it, be wasted. Usually, redistributions of the typical kinds to any industry, occupation, or region, or any redistribution to the users of some input, will mean resources are allocated in a way that produces less value to society. That is, subsidies or regulations that favor a particular group rather than correct a market failure - and are accordingly simply redistributive2 - generate losses for the society. They generate social waste because normally the criterion for receiving the redistribution to an industry or occupation or locality is that the relevant firms or workers must allocate their capital and their time to the favored sector rather than to the sector that would, in the absence of the redistribution, have been most profitable.

To keep the costs of the redistribution from becoming insupportably high and to prevent the redistribution from being "wasted" on those for whom it was not intended, regulations or limitations are often established. A government regulatory agency may be created to prevent "abusive" or "speculative" actions to take advantage of the subsidy and to prevent an oversupply of the subsidized good. Sometimes only the original firms and workers will be legally entitled to receive the higher price or other subsidy in question. In declining industries the regulatory measures may even be designed to induce some of the excess resources to leave the industry at the same time that the protection or subsidization makes the industry more profitable than it would otherwise have been. Such measures can reduce the extent to which conditional redistributions distort the allocation of resources across industries.

2. Though they won't matter much for the argument here, there are some complexities that need to be dealt with to isolate the smaller implicit redistributions. To determine whether a measure is "simply redistributive," it is necessary to specify an initial or reference set of institutions and policies in order to delineate changes. Any stipulated set of initial conditions may, of course, be controversial; there are different ideas about what institutions are most productive. Thus some observers might object to any given measure of the extent of implicit redistribution because they objected to the initial state in terms of which the change in policy was defined. Note, however, that the concept of implicit redistribution is applicable whatever initial distribution of endowments or wealth is preferred - the desired distribution of wealth can be obtained by explicit redistributions. To delineate implicit redistributions we need only distinguish productive measures that increase social income from those that implicitly redistribute it. Of course, even this can sometimes be very difficult. Since this book is concerned only with implicit redistributions that bring large deadweight losses, this complication is not a major difficulty here. There is not much controversy among skilled specialists about which policies have large excess burdens.

Slower Innovation as a Deadweight Loss

Restrictions and regulations also have to be applied in practice and enforced, and this entails some administrative or regulatory involvement in the productive process. The costs of the bureaucracy needed for this purpose may not be very large, but the social costs of the extra complexity, rigidity, and delay are rarely small. If the benefits of a higher price or any other subsidy are to be restricted to the original capital and labor in the relevant industry, then some authority must decide whether a given expenditure is just maintenance and repair or really a new investment. If the regulation is going to block the extra output that would otherwise result from a higher price or other subsidy, then the level of output of each firm must be monitored and controlled. If new investment or output is controlled, the incentive to innovate is no longer so clear. If exploiting a new technology or idea changes the optimal level of investment and production, as it usually does, the desired new pattern of investment or production is unlikely to get the needed regulatory approvals without delay. Those enterprises that would lose from a rival's innovation can also use the politics of the regulatory process to block the innovation. Thus the controls that are needed to prevent waste and abuse in a subsidized sector normally delay innovation and slow the reallocation of resources. The social cost of conditional redistribution often includes a lower rate of innovation and a less flexible economy. Some economists speak of "excess burdens" and "deadweight losses" only in static contexts where innovation, flexibility, and productivity growth are irrelevant, but this usage is misleading and (since the dynamic losses are usually much larger than the static costs) also mischievous. In this book, the excess burden or deadweight loss resulting from a conditional redistribution includes the slower productivity growth that regulation and other complex decision-making procedures bring about.

If those in the protected, subsidized, or cartelized industry are simply given cash from the government with no strings attached, then the total cost to the rest of society is merely the sum transferred plus the deadweight loss of the taxes that obtain this sum; the recipients of the cash still have an incentive to allocate their capital and their labor to whatever employments offer the highest returns, and (in the absence of other market failures) these will be in the sectors with the highest marginal return to society. It is true that the beneficiaries of the unconditional cash subsidy will be better off because of the transfer to them, and the "income effect" of the transfer could make them take more leisure, but this does not reduce the efficiency of the society. Some have more income and may take more leisure, and others have less income and take less leisure, but (apart from the excess burden of the taxes) the efficiency of the society is not changed.
The new distribution of disposable income arising because of the redistribution may or may not be unjust, but that is a separate issue that is not relevant to the present question of which types of redistribution have the lower social costs. When a redistribution is by its nature or conditions restricted to those who remain in the existing pattern of activity, it is usually possible to provide the recipients with the same net gain, at a smaller cost, by giving them an unconditional cash payment. Thus no amount of moral concern about fairness alters the reality that it is the criteria or conditions inherent in protectionism and in most aid to industries, occupations, and localities that increase their social costs.

It is also important not to overstate the point that has just been made. It is possible that the excess burden of the taxes needed to finance an unconditional cash transfer could exceed the deadweight loss from the protection, monopoly, regulation, or other devices that are often used for an implicit redistribution.3 In this case the unconditional cash transfer would not be the cheapest form of redistribution. Essentially, the reason for this is that tax burdens are also conditional - the amount of tax we have to pay is understandably conditional on how much income we earn and thus on how much we work and save, and this makes our individual choices socially less efficient. The conditionality inherent in taxation could cause greater distortions than the conditions on the typical implicit redistribution. The logical possibility that a typical measure for implicit redistribution could have a lower social cost than a typical explicit redistribution is not, however, so important in practice. This study will go on to indicate reasons why the social costs of many implicit and conditional redistributions will, in reality, usually become far higher than an unconditional cash transfer would have been.

3. I am grateful to Stephen Baba for reminding me of this.

Conditions on Explicit Redistributions to the Poor

The idea of conditionality as the source of distortions can also be applied to explicit redistributions to low-income people. There is one criterion for most such transfers that causes them to have a greater deadweight loss than an unconditional cash transfer of equal size. One nearly universal condition on welfare-state grants for the poor is that they will no longer be available if the recipient becomes prosperous. This condition is only natural and proper, but we must, if we are being honest, recognize that it entails some deadweight loss: the poor person who somehow mitigates his or her plight thereby loses some transfers, and this reduces the poor person's incentive to obtain an income high enough to end the redistribution. As this last illustration suggests, the point of my argument is not to advocate redistributions with no strings attached - it is rather to make clear that the criteria or conditions that qualify an individual or firm to receive redistributions are the main source of their social costs. This idea, and the distinction between explicit and implicit redistributions, are indispensable for a full understanding of economic efficiency and growth in the modern world.
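The incentive cost of the nearly universal means-testing condition described above can be put in numbers. The following minimal sketch, in which every figure (the grant, the withdrawal rate, the tax rate) is a hypothetical assumption chosen only for illustration, shows how a grant that is withdrawn as earnings rise acts like an extra marginal tax on the recipient, which is precisely the deadweight loss the text points to.

```python
# Minimal sketch of the condition "the grant ends as the recipient
# becomes prosperous": a benefit withdrawn as earnings rise acts like
# an extra marginal tax. All numbers are illustrative assumptions.

def disposable_income(earnings, base_grant, withdrawal_rate, tax_rate):
    """Earnings after tax plus whatever remains of the grant."""
    grant = max(0.0, base_grant - withdrawal_rate * earnings)
    return earnings * (1.0 - tax_rate) + grant

BASE_GRANT = 60_000   # hypothetical annual grant
WITHDRAWAL = 0.6      # grant falls by 60 ore per extra krona earned
TAX = 0.3             # statutory marginal tax rate

for earnings in (0, 50_000, 100_000):
    extra = (disposable_income(earnings + 1_000, BASE_GRANT, WITHDRAWAL, TAX)
             - disposable_income(earnings, BASE_GRANT, WITHDRAWAL, TAX))
    # While the grant is still being withdrawn, an extra 1,000 earned
    # raises disposable income by only 1,000 * (1 - TAX - WITHDRAWAL).
    print(f"at earnings {earnings}: an extra 1,000 earned adds {extra:.0f}")
```

In this illustrative case the recipient keeps only 100 of every additional 1,000 earned while the grant is being phased out, so the effective marginal tax rate is the statutory rate plus the withdrawal rate.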
The Theory of Efficient Redistribution

We can also now begin to assess what I like to call the theory of "efficient redistribution," which grows out of the work of Gary Becker, Donald Wittman, Earl Thompson and Roger Faith, and others.4 Although different advocates of this theory make somewhat different arguments, the most basic idea, at least in Gary Becker's formulation, is that those who lose from redistributions of income have an incentive to keep their losses to a minimum. When a redistribution to one group in a society increases to the point where the losses to others become large, then the resistance to this redistribution also increases. If the losses are substantial, Becker emphasizes, there will be so much political resistance that the redistribution is almost certain to be curtailed.

4. Gary Becker, "A Theory of Competition Among Pressure Groups for Political Influence," Quarterly Journal of Economics (August 1983), pp. 371-400, and "Public Policies, Pressure Groups, and Dead Weight Costs," Journal of Public Economics 28 (1985), pp. 329-347; Earl Thompson and Roger Faith, "A Pure Theory of Strategic Behavior and Social Institutions," American Economic Review, vol. 71 (June 1981), pp. 366-80; Donald Wittman, "Why Democracies Produce Efficient Results," Journal of Political Economy, vol. 97, no. 6 (December 1989), pp. 1395-1424; and Bruce Gardner, "Efficient Redistribution Through Commodity Markets," American Journal of Agricultural Economics, vol. 65 (May 1983), pp. 225-34.

The deadweight losses from redistributions obviously add to the losses of those who lose from the redistribution, and they do not, of course, help the beneficiaries of the redistribution either. Both the gainers and the losers therefore have an incentive to keep the excess burden to a minimum. Indeed, they have an incentive to bargain with one another until they maximize the joint gains that they can obtain from reducing excess burdens - in other words, an incentive to continue bargaining until the society has achieved an efficient allocation of resources. In Donald Wittman's formulation, at least, it is only the time and other valuable resources used up in the bargaining - only the transactions costs - that keep a society from achieving the most efficient state we can conceive of. But transactions and bargaining costs are an inescapable feature of reality, and the cost of the time and other resources that they use up is just as meaningful as any other cost. So transactions costs, like the costs of any productive activity, should be part of the costs we take into account in defining a Pareto-efficient or totally efficient state. Some of those who use the theory of efficient redistribution therefore conclude that the existing societies, notwithstanding the redistributions they often engage in, are essentially Pareto-efficient. Accordingly, the theory of efficient redistribution predicts that any social losses from redistributions are small and that redistribution is, at least for the most part, "efficient redistribution." Unstinting proponents of this type of thinking, such as Thompson and Faith and Donald Wittman, therefore say (in effect) that our world could not be much more efficient - that this is, indeed, the best of all possible worlds. If the theory of efficient redistribution is right, then we seem to have an answer to our second question: any redistribution that actually occurs anywhere is quite efficient and so any social costs are small.
It follows that Sweden cannot be losing much from its large welfare state.

We Are Not Done Yet

Unfortunately, even though it contains the germs of some important truths, the theory of efficient redistribution is in large part wrong. We still have a ways to go to get an answer to our second question of why Sweden isn't worse off. To see what is wrong about the theory of efficient redistribution - and even to isolate the elements of truth in it - we need a new conceptual framework. To construct this framework, we shall need a number of different ideas, including the distinction between explicit and implicit redistribution that has been developed in this chapter. The new conceptual framework is presented in the next two chapters. Some of the concepts in this framework are new and others are drawn from my books on The Logic of Collective Action5 and The Rise and Decline of Nations. I shall endeavor to present the argument in such a way that it will involve very little repetition for those who have read these books, yet be comprehensible to those who have not. When this framework is complete, we will not only be able to see what is right and wrong about the theory of efficient redistribution, but we shall also be able to look at modern Sweden from a new angle.

5. Cambridge, Mass.: Harvard University Press, 1965.

Chapter 4: "Rational Ignorance" and the Bias of Collective Action

No analysis of the social costs of redistributions can capture the essence of the matter unless it faces up to a sad and inescapable reality: "rational ignorance." The seemingly oxymoronic phrase "rational ignorance" is not, in fact, a contradiction in terms. In many circumstances, the typical citizen serves his or her individual interests best by allocating little or no time to the study of public affairs, even though this leaves the citizen ignorant of many matters that are important for the country and thus also for his or her own wellbeing. The paradox becomes clear when one examines the situation of an average citizen who is deciding how much time to devote to studying the public policy choices facing the country. The more time the citizen spends studying public affairs, the greater the likelihood that his or her vote will be cast in favor of rational policies. The typical citizen will, however, receive only a small share of the gain from more effective policies and leadership; if there are a million citizens, an average citizen will get only one-millionth of the total gain. Yet that citizen bears the whole cost of whatever he or she does to become better informed about public affairs. Thus each citizen would be better off if all citizens spent more time finding out how to vote to make the country better serve their common interests. The gain to a voter from studying public issues to determine the vote that is truly in his or her interest is the value to that one individual of the "right" election outcome, multiplied by the probability that a change in this one individual's vote will change the outcome of the election. Since the probability that a typical voter will change the outcome of the election is minuscule, the typical citizen, whether a physician or a taxi driver, is usually rationally ignorant about public affairs. This point was made, albeit less starkly, in Anthony Downs's classic 1957 book, An Economic Theory of Democracy,1 and in recent years its extraordinary practical importance is coming to be realized.
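The expected-value reasoning in the preceding paragraph can be written out as a toy calculation. Every number below is an assumption chosen only for illustration: even when the "right" election outcome is worth a great deal to a voter, multiplying that value by the tiny probability of casting the decisive vote leaves an expected private return far below any realistic cost of becoming well informed.

```python
# Toy illustration of "rational ignorance": the private gain from
# studying the issues is the value of the right outcome times the
# probability that one's own vote decides the election.
# All figures are illustrative assumptions.

value_of_right_outcome = 100_000     # what the right policy is worth to this voter
prob_vote_is_decisive = 1e-7         # chance that one vote changes the outcome
cost_of_becoming_informed = 200      # value of the study time required

expected_private_gain = value_of_right_outcome * prob_vote_is_decisive
print(f"expected private gain from studying: {expected_private_gain:.2f}")
print(f"cost of becoming informed:           {cost_of_becoming_informed:.2f}")
print("privately rational to study?", expected_private_gain > cost_of_becoming_informed)
```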
Occasionally information about public affairs is so interesting or entertaining that it pays to acquire it for these reasons alone. Similarly, individuals in a few special vocations can receive considerable rewards in private goods if they acquire exceptional knowledge of public goods. Politicians, lobbyists, journalists, and social scientists, for example, may earn more money, power, or prestige from a knowledge of the public's business. Sometimes exceptional knowledge of public policy can generate exceptional profits in stock exchanges or other markets. Nevertheless, the typical citizen finds that his or her income and life chances are not improved by the zealous study of public affairs. Most people are not, of course, totally self-interested, and their altruistic motives make many of them study public affairs somewhat more than self-interest alone would justify, but the evidence nonetheless reveals that rational ignorance is undoubtedly the norm. This fact - that the benefits of individual enlightenment about public goods are usually dispersed throughout a group or nation, rather than concentrated upon the individual who bears the costs of becoming enlightened - illuminates many other phenomena as well. It explains, for example, the "man bites dog" criterion of what is newsworthy. If the television newscasts were watched or newspapers were read solely to obtain the most important information about public affairs, aberrant events of little public importance would be ignored and typical patterns of quantitative significance would be emphasized. Since the news is, by contrast, largely an alternative to other forms of diversion or entertainment, intriguing oddities and human-interest items are commonplace. Similarly, events that unfold in a suspenseful way or sex scandals among public figures are fully covered by the media, whereas the complexities of economic policy or quantitative analyses of public problems receive only minimal attention. Public officials, often able to thrive without giving the citizens good value for their taxes, may fall from power because of an exceptional mistake that is simple and striking enough to be newsworthy. Extravagant statements, picturesque protests, and unruly demonstrations that offend much of the public are also explicable in this way: they make gripping news and thus call attention to interests and arguments that might otherwise be ignored. Even some acts of terrorism that are described as senseless can, from this perspective, be explained as effective means of obtaining the riveted attention of a public to demands about which they otherwise would remain rationally ignorant. In part because of rational ignorance, there is much more implicit redistribution than explicit redistribution in most democratic societies. These implicit redistributions, moreover, normally are not efficient redistributions. The prevalence of implicit and exceptionally inefficient redistributions is due to the way that rational ignorance interacts with a bias in the pattern of collective action, to which we now turn. 1 New York: Harper & Row. 38 The Difficulties of Collective Action The rational ignorance of the typical voter is an example of the general logic of collective action. This logic is readily evident in organizations that lobby a government for special-interest legislation or that cooperate in the marketplace to obtain higher prices or wages. 
Some examples are professional associations of physicians or lawyers, labor unions, trade associations of firms in individual industries, farm organizations, or oligopolistic collusions. Such organizations can only be understood if we are aware of how difficult collective action is for large groups. It is difficult because the benefits of collective action go automatically to everyone in some group or category. If an association of firms wins a tariff, that raises the price for every firm that sells the commodity or product in question, regardless of whether the firm contributed to the effort to win the tariff. Similarly, if one group of workers strikes to bring a higher wage in some factory or mine, all the workers in the relevant factory or mine receive the benefit of the higher wage, regardless of whether they paid dues to the union or walked in the picket lines that made the strike successful. The same reasoning applies to the firms or workers attempting to raise prices or wages by combining to restrict the quantity supplied. Because the benefits of collective action go to everyone in a category or group, it is not rational for an individual in a large group or class to make any voluntary sacrifices in the interests of the group. The individual citizen or firm will get the benefits of whatever actions others undertake whether or not he contributes anything and, in large groups, the single individual or firm is not able to bring about the desired results singlehandedly. The precise logic and the empirical evidence validating this point are set out in my book on The Logic of Collective Action, and in the literature that has grown out of that book, so it should not be necessary to go into any detail on this matter here. It is, however, essential to note that the individuals in large groups do not voluntarily, in the absence of special arrangements I will consider below, contribute time and money to organizations that would lobby or fix prices or wages for exactly the same reason that the typical citizen remains rationally ignorant about many aspects of public affairs. An individual receives only a minuscule share of the return from any sacrifice he or she makes in the interest of a group, whether the sacrifice takes the form of dues paid to a lobbying or cartelistic organization, or research into what political outcomes are best for individuals like oneself. Thus many groups with common interests - such as consumers, taxpayers, the unemployed, and the poor - are not organized for collective action, and most people have only the haziest knowledge of public affairs. Those large organizations to lobby the government or fix prices and wages that have managed to survive have special arrangements that mainly explain why they are able to attract dues-paying members. There are in all large and lasting organizations for collective action some special gimmicks, which I call "selective incentives," that account for most of the membership. The selective incentives are individualized benefits or 39 punishments that induce firms or people to participate in, or help pay the costs of, collective action. One example of a selective incentive is the element of compulsion inherent in the closed shop, the union shop, and the coercive picket line, but this is only the most obvious example. All large organizations for collective action that survive have some analogous arrangements. 
These arrangements are usually very subtle and often provide individual benefits to those who contribute to the organization for collective action, while denying the benefits to those who do not. When the beneficiaries of collective action are few, there may be voluntary rational action to obtain collective goods without selective incentives. Consider the small number of large firms in a relatively concentrated industry. If there are, say, three large firms of about the same size in an industry, each firm will obtain about a third of the benefits of any action to get governmental favors or higher prices for the industry. This third of the benefits will usually be a sufficient incentive for considerable action in the interest of the industry. When the numbers in a group are small, it will also be true that each participant will have a noticeable effect on how well the common interest of the small group is served, and this will affect the likelihood that the others will contribute. Thus small groups will often bargain until they agree to act in their group interest to a complete or "group optimal" extent. This organizational advantage of small groups, and particularly of small groups of large firms, has, as will be shown below, important implications for the pattern of redistributions that emerges in most societies. Since collective action is difficult and problematical, it normally takes quite some time before a group can overcome the difficulties of collective action, even if it has the small numbers or the access to selective incentives that are needed. The bargaining that can make it possible for a small group to organize or collude to an optimal extent usually takes some time, since unanimous consent is needed for full-scale cooperation. Organizing large groups is incomparably more difficult and time-consuming. Selective incentives, even if potentially available, are hard to arrange. If the selective incentives are to be positive rewards to those who participate, there has to be some surplus profit or advantage somewhere that can be used as the source of the rewards, and this surplus will normally be devoted to the collective action only if there is a complementarity between the activity that generates the surplus and the collective action (as is the case when a lobby gets favorable legislation for the enterprise that provides the resources for its selective incentives). If the selective incentive is the punishment of those who do not share in the costs of the collective action, this punishment has to be organized and the resistance to it overcome. The time it takes to get collective action going was illustrated when Jimmy Hoffa, who ultimately became a powerful American union leader, was a youth working in a warehouse in Michigan. On a hot June day, the warehouse company received a large shipment of strawberries and other fresh produce that would become valueless unless it reached consumers before it spoiled. Jimmy Hoffa and his collaborators chose that moment to organize a strike and a union, and the management, rather than lose the fresh produce, gave in. It is only in the fullness of time that many groups will have had the able leadership and the favorable circumstances needed to organize for collective action. As we shall see, this fact has important implications for economic growth.
The Inegalitarian Bias of Collective Action

If my theory of collective action is correct, the capacity for collective action is most common among the relatively established and prosperous interests in society and is virtually absent among the poorest and most insecure elements in the population. As we have seen, collective action is less difficult with small numbers, and this favors the organization and collusion of such groups as the large firms in concentrated industries. Selective incentives are also more often available to "insiders" - incumbent workers and well-established people - than to prospective entrants and those on the lowest rungs of the social ladder. I have shown elsewhere that the learned professions often have the widest array of selective incentives.2 [Footnote 2: Olson, Collective Action.] Incumbent workers in an enterprise have already assembled for work and, if they have worked together long enough to have established a social network, they have a good chance to organize a union. By contrast, the unemployed are not automatically assembled or associated in ways that make selective incentives available through social interaction. Normally, all of the poor and marginal parts of the population are without access to selective incentives. It is not, however, poverty or insecurity themselves that prevent collective action. Consumers - even consumers of luxury goods - are also not organized; they do not have the advantage of small numbers, and they make their purchases at so many scattered locations that the selective incentives of picket lines and social interaction available to some workers are also unattainable. Some people may wonder whether my hypothesis that the well-established and economically powerful elements in society are usually best able to overcome the difficulties of collective action fits the facts. We can test this hypothesis by examining how long it took different groups to overcome the difficulties of collective action. If well-established and well-off groups are, in fact, able to organize more readily than humbler groups, then the first organizations for collective action would have represented those groups, and the poorest and least secure parts of the population should not even now be organized. Adam Smith's Wealth of Nations, published in 1776, is a superb source of information about some of the first groups that were able to organize for collective action. This book is mainly an attack on "mercantilism," or on government policies and collusive prices that result from the combined action of "merchants" and master-manufacturers. In Adam Smith's time and before, merchants and manufacturers were often organized in guilds. Smith emphasized how often merchants and master-manufacturers colluded to fix prices or to influence government; he said that they rarely gathered, even for merriment or diversion, without conspiring to fix prices. He also argued that ordinary laborers, the poor, and those in agricultural pursuits were usually not organized and had relatively little influence. Adam Smith's observations, as well as a good deal of other evidence, support the deduction earlier in this essay that small groups, such as the merchants or manufacturers in a particular industry or town, find it less difficult to organize than large groups do. The selective incentives that large groups need to organize are also more often available to those with established positions and higher incomes than to poorer and less secure individuals.
The historical record in country after country shows that the learned professions tended to organize long before workers of lesser income and status. Similarly, skilled workers organized unions long before unskilled workers did. The first unions represented skilled workers in England. During the first half century of organized labor in the United States, unionized workers were called the "aristocracy of labor." Even among unskilled workers, it is mainly those who already have jobs, and almost never the unemployed or new entrants, that are organized. Men are also more often organized than women, and individuals in relatively well-placed ethnic and social groups are more often organized than those in disadvantaged groups. Most important of all, there is no society anywhere in which the poorest people or the unemployed are organized. Experience therefore confirms the hypothesis that the capacity for collective action is positively correlated with income and established position. We must now go on to examine how this reality, in combination with rational ignorance, comes in time to generate large implicit redistributions, most of which are far indeed from being efficient redistributions.

Chapter 5: Why Implicit and Inefficient Redistribution is Commonplace

We have seen that rational ignorance is a fundamental reality grounded in individual rationality, that the difficulties of collective action can eventually be overcome by some groups but not by others, and that it is disproportionately the nonpoor, insider, and establishment interests that are organized for collective action. What will organizations representing nonpoor established interests have an incentive to do, given that they operate in a society in which most people are rationally ignorant? The answer depends in part on the extent to which these organizations, whose members are also citizens of the larger society, have an incentive to take the interests of the society as a whole into account. Here I draw on the analysis in my Rise and Decline of Nations. Consider an organization that, though it might be large and have many members, is still only a small part of the whole country or society in question. For the sake of simple arithmetic, I assume an organization that represents 1 percent of the income-earning capacity of a country: for example, a labor union whose members' wages are in the aggregate 1 percent of the country's national income, or a trade association of firms that together earn 1 percent of the national income. Organizations of this kind are relatively less important in Sweden, where there are also what I have called "encompassing" organizations, like the LO and the employers' federation, representing constituents that in the aggregate earn a significant percentage of the Swedish GDP and thus have a large stake in the society. I will deal with this special feature of the Swedish organization scene later, but the analysis of "narrow" coalitions that represent only a tiny part of a nation's income-earning capacity is nonetheless important here. It is important partly because the logic of these narrow coalitions to some degree also applies, in a more complex way, in Sweden,1 [Footnote 1: The reasons why this is so are set out in my "Appreciation of the Tests and Criticisms" in Scandinavian Political Studies (Spring 1986).] and also because these narrow coalitions have a decisive influence on the economic performance of most of the countries with which Sweden is being compared.
One logical possibility is that organized interests will use their capacity to make the society in which their constituents are located more efficient and prosperous. In general, being part of a rich society is better than being part of a poor one. A lobbying organization could, for example, lobby for measures that would make the society in which its members live and work more productive and successful. Would an organization for collective action that represents only, say, 1 percent of the society have an incentive to do this? An organization that represents 1 percent of the society would get, on average, only 1 percent of the benefits from making its society more productive. If the national income of a country increases because the organization lobbies for more efficient public policies, the clients of the organization will get, on average, 1 percent of the increase in the national income. Those members, however, will have borne the whole cost of whatever lobbying they have done to improve the efficiency of the economy. If they get 1 percent of the benefits of their action and bear the whole cost of their action, then trying to make the society more efficient and prosperous will pay off for them only if the benefits of that action to the society as a whole exceed the costs of that action by a hundred times or more. Only if the benefit-cost ratio is better than 100 to 1 will the organization for collective action best serve its members by acting to make the society more efficient and prosperous. How then can a special-interest group representing only a tiny part of a society best help its clients? If a larger slice of the pie that society produces can be obtained for the members of a special-interest organization, then the members of this organization will have this larger slice of the pie. Less metaphorically, if a larger percentage of the national output or national income that is produced in a country can be redistributed to the members of a special-interest group, then these members will have that larger share of the national income. But the reader may now ask, "Won't lobbying for favors from government or combination in the marketplace to obtain monopolistic prices or wages make the economy less efficient and productive? And won't the members of the special-interest group bear part of the reduction in the national income that comes from the inefficiencies brought about by their effort to capture a larger proportion of the national income?" The answer, in most cases, is, "Yes." Cartelization will usually reduce the efficiency and prosperity of the society. Because a combination or cartel will produce and sell less and charge more, the society will normally be less efficient. Special-interest lobbying will similarly induce resources to go into the particular areas that are favored by the lobby-inspired legislation; resources will crowd into these areas until their contribution to the national income - their marginal social product - is lower than it would have been in other areas, and the efficiency of the economy will thereby typically be reduced. So both cartelization and lobbying to get a larger percentage of the national income will, in most cases, make the society less efficient and productive. Remember, however, that our special-interest group represented 1 percent of the society. Its members bear only 1 percent of the loss in national income or output that occurs because of the inefficiency its activities bring about, but they get the whole of the amount redistributed to them.
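To make this arithmetic explicit, here is a minimal worked sketch; the 1 percent share and the hundred-to-one conclusion come from the text, while the algebraic notation is my own illustrative shorthand rather than the author's. Let s = 0.01 be the coalition's share of the national income, R the amount redistributed to its members, L the loss of national income caused by the tariff, cartel, or other device used to obtain R, G the gain in national income from any efficiency-improving lobbying, and C the coalition's cost of that lobbying. Then, roughly,

\[
R - s\,L > 0 \iff L < \frac{R}{s} = 100\,R ,
\qquad
s\,G - C > 0 \iff G > \frac{C}{s} = 100\,C .
\]

On these assumptions, a transfer of one unit of income to the coalition remains privately profitable even if it destroys up to one hundred units of national income, whereas lobbying for the general good pays the coalition only when the social benefit exceeds the coalition's own cost a hundredfold.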
Thus it pays our hypothetical special-interest group to seek to redistribute income to its own members even if this redistribution reduces the national income by up to 100 times the amount redistributed! Therefore, organizations that represent only a minute percentage of an economy's income-earning capacity are really "distributional coalitions" - coalitions that strive to redistribute more of the society's income to themselves, rather than to produce anything.

Will Coalitions Seek Unconditional Cash Transfers?

We know from Chapter 3 that unconditional cash transfers tend to have lower social costs than conditional subsidies: the conditions, such as continued production of some product in order to receive the subsidy (which may be in the form of a higher price for that product), distort the allocation of resources. By contrast, those who receive an unconditional cash subsidy have an incentive to allocate their resources to the most productive uses. Unless they obtain unconditional cash transfers that they will continue to receive even if they move into entirely different lines of activity, the coalitions' constituencies will get only a part (and often only a small part) of what society gives up. Gary Becker emphasized that the political resistance to a redistribution will be greater the more the rest of society loses from it. Unconditional cash transfers would obviously be worth more to recipients than equal-sized redistributions with strings attached. If it were really true that unconditional cash transfers not only were less costly to the rest of society, but also had to overcome less political resistance, then it would follow that distributional coalitions would always demand unconditional cash transfers. But they do not. Indeed, the redistributions that coalitions seek are almost never unconditional cash transfers. They are usually protective tariffs or quotas, monopoly prices or wages, price supports, regulations that restrict entry and competition, and subsidies for those in particular industries, regions, or occupations, or for those who use particular inputs. Physicians are not subsidized by cash grants that they receive even if they no longer practice medicine; unionized workers are not given government checks that they continue to get even if they retire or become entrepreneurs. Typically, the redistributions sought by organized interests are, if possible, not directly from the government budget. Those who benefit from protection or monopoly do not want the tariffs or monopoly privileges replaced by checks drawn on the national treasury. When open subsidies from the treasury are the only attainable form of redistribution, they are almost always conditional on continued participation in some industry or activity; the subsidies to money-losing industries are available only to the firms and workers who stay in the losing industry, agricultural subsidies go only to those who continue to farm, and money-losing national airlines keep getting subsidies only so long as they fly. To see why the redistributions sought by organized interests are virtually never the unconditional cash transfers that would be the least costly redistributions for the society, we need to return to the earlier analysis of the bias of collective action.
We saw that it followed from the logic of collective action that those who could organize for lobbying, collusion, and cartelization were those with small numbers (such as the large firms in concentrated and protected manufacturing industries) or those with access to selective incentives (such as members of professions and workers already established in jobs). Though in a long-stable society some groups of below-average income will be organized for collective action, the overall pattern of collective action will favor more prosperous and better established interests; it will not include groups like the unemployed and the poor. The organized groups that are never poor (but often relatively well-off) usually cannot obtain redistributions by appealing to the egalitarian moral sentiments of the electorate. Naturally enough, the typical voter does not want to have his or her own standard of living lowered solely for the purpose of transferring income to someone else who is already as well off, or better off, than the voter. Accordingly, most organized groups cannot further their interests by appealing on grounds of need for explicit redistributions to themselves. Each narrow distributional coalition also represents a small minority of the electorate, and therefore does not have votes enough to pass an explicit redistribution to itself. So how can an organized group that cannot qualify for explicit redistribution on egalitarian grounds, and cannot hope to outvote the majority that would lose from redistributions to it, obtain any redistributions?

Rational Ignorance Makes Implicit Redistributions Possible

A distributional coalition can usually obtain redistributions by exploiting the rational ignorance of the electorate. Lobbying and special-interest pressure usually succeed, and cartelization and collusion are typically tolerated, mainly because of rational ignorance. If voters were fully informed, they would not be swayed by the publicity or advertising stemming from organized interests and they would replace any representatives who were serving interests other than those of the voters. They would also not allow those who sell them goods and services to raise prices and wages through cartelization. In reality, the average citizen's knowledge is extremely limited; survey data in the United States reveal that about half of the electorate do not know the name of their congressman in the House of Representatives, much less what help he has given organized interests to obtain campaign contributions. Diverse forms of public relations, advertising, and political indoctrination play large roles in forming public opinion. In the real world organized interests have vast opportunities to persuade voters to accept what are, in fact, implicit redistributions to those organized interests, as long as these redistributions are designed in such a way that the rationally ignorant can be persuaded that the society as a whole gains, or are so inconspicuous or indirect that a majority of the electorate is not aware of them. A policy that can be made to appear, in a newspaper advertisement or a thirty-second television commercial, to have different beneficiaries than it actually has, can be successful.
It does not matter much whether lengthy research would show that a policy was contrary to the interests of most voters, because it is not in the typical voter's interest to undertake such research, and any research done by those with a professional interest in the matter will not have a great political effect unless substantial resources are available to publicize the results. These resources may be available when two or more organized interests have opposing interests, but the difficulties of collective action ensure that many organized interests will not be countervailed. As would be expected from the argument here, manufacturing firms ask for protection against imports, or investment incentives, or tax loopholes that will "strengthen or protect the national economy," or they inconspicuously raise prices through collusion. Physicians seek to keep out "unqualified" new doctors, to prevent competitive ("unethical") behavior, and lobby for government and insurance spending that ensures "good quality care for the public." Lawyers seek to ensure that citizens have more rights to sue and get "justice" (with the paid counsel of lawyers) in the courts. Large farmers argue that the nation should not be dependent on foreign food supplies and thereby obtain tariffs or quotas, or they argue that the poorer farmer cannot live decently unless there are price supports, even though, in fact, the higher prices go mainly to the larger farmers who produce the most. Skilled workers established in an industry may speak in the name of the whole working class, yet use their cartelistic power to obtain a wage at which it does not pay the employers to hire the part of the working class that is unemployed. Virtually all organized groups seek privileges that they can persuade the public are beneficial to society as a whole; professors, for example, emphasize the value to society of the academic freedom that tenure for themselves is supposed to provide, not the opportunity for sloth and neglect of students that their extreme job security makes possible.

The Implicit Redistributions that Rational Ignorance Permits are Almost Never Efficient Redistributions

The redistributions that can be made to appear to serve the interests of the society as a whole, or that are so inconspicuous and indirect that most voters don't notice them, obviously cannot be unconditional checks from the treasury. It is obvious who gains from an unconditional cash subsidy, and the costs to the treasury and the taxpayers from such straightforward subsidies are conspicuous and easily distinguished. To appear to serve the interests of the society as a whole, the implicit redistribution must promote some type of production or activity that rationally ignorant voters can be persuaded is advantageous to the nation. Unfortunately, the very fact that an implicit redistribution is easier to obtain if it encourages some industry usually increases its social costs. Such redistributions normally expand some industry or activity beyond the socially efficient level, and thereby impose a cost on the society as a whole that is distinct from the redistribution itself. Inconspicuous redistributions are often also more costly to society than conspicuous ones: the costs that are not noticed are less likely to be minimized. Tariffs and quotas are exceptionally apt devices for implicit redistribution. They protect a domestic industry, and it is usually relatively easy to persuade voters that this also protects or strengthens the economy as a whole.
The rationally ignorant have no reason to think about the implicit discouragement that protection of a given industry entails for other, unprotected industries that must compete with the protected industry for resources or export in the face of a higher exchange rate for the national currency, much less any reason to master the demanding literature on the deadweight losses that are often involved. Regulation that limits competition and entry is also an admirable device for implicit redistribution. The rationally ignorant normally take it for granted that the regulation favors the consumers and the public rather than the firms that are regulated, and have no reason to go into the logic showing why the regulated firms are normally organized for collective action to influence the regulatory process and why the consumers and the public normally are not. The regulation can also be appealingly described as something that assures "orderly markets" and prevents "destructive competition," and thus made to appear favorable to economic progress. By contrast, the losses to the firms that would have entered the industry in the absence of regulation and the higher costs the regulation usually imposes on consumers are relatively subtle and difficult to identify. So are the ways in which regulation often slows down innovation by complicating decision-making. Similarly, cartelization and collusion are also well-suited for implicit redistribution. They can be defended as efforts to promote cooperation against foreign competition, as devices that ensure "orderly markets," and as self-help efforts that do not impose costs upon taxpayers. Since prices and wages change from time to time in any case, the costs to consumers of the higher prices and wages that collusion and cartelization bring about are not usually evident to the casual observer. In general, the most expedient devices for implicit redistribution are those that do not rely principally on the government budget. Usually any redistribution through the government budget, even if it is not an unconditional cash transfer, is easier to identify than a change in prices or wages that arises because of protection, regulation, or cartelization. The changes in prices and wages arising from redistributions that bypass the public treasury normally entail deadweight losses and usually are tied up with regulations and agreements that slow down innovation, but the redistributions that result from these price and wage changes are normally less conspicuous than governmental subsidies. The fact that redistributions through the public treasury are a poor choice for an organized interest partly explains why the sizes of the public sectors of countries are not well correlated with their economic performance. Although all continuing redistributions by any feasible method entail some deadweight losses, the implicit redistributions that nonpoor organized interests have an incentive to seek are those that are the least straightforward or the least conspicuous, not those that have the lowest social cost. If it is proposed that a socially costly implicit redistribution to an organized interest be replaced by a less costly but more nearly transparent redistribution, the organized interest will normally object, because it is likely to lose the redistribution altogether if it becomes transparent. Therefore, industries that enjoy tariff or quota protection, for example, almost always oppose replacing this protection with open governmental transfers.
Monopolies and cartels similarly do not want to give up their status in return for government checks. Accordingly, we see that the theory of efficient redistribution is not true for implicit redistributions. It would probably be true, if other things were equal, that redistributions with a higher social cost would be at a political disadvantage. But other things are not equal - rational ignorance entails that the least straightforward and the least conspicuous methods of redistribution be chosen. There is no reason whatever to suppose that the redistributions chosen to meet these criteria will have the lowest social costs. Because of the conditionality inherent in redistributions that achieve their objectives by altering relative prices or reducing the degree of competition, devious redistributions will often have vastly higher social costs than unconditional and transparent subventions. The opacity of implicit redistributions unfortunately also entails that there is no necessity that they should be curtailed just because their social costs become very high - these costs need not be perceived, much less measured, by a rationally ignorant electorate. Many empirical studies confirm the conclusion that we have arrived at by abstract argument: there are many real-world examples of implicit redistributions with social costs that are large multiples of the increase in the net incomes of the beneficiaries. Thus, contrary to one school of thought, there is no tendency for bargaining over public policy among the groups in a society to continue until joint gains are maximized and Pareto-efficiency is achieved. Some groups are not organized to bargain and some of the social losses are not even perceived by some large groups of losers. So this is not, alas, the best of all possible worlds, nor even the most efficient. It contains many societies in which rational ignorance can regularly be exploited by narrow distributional coalitions composed of nonpoor insider interests. Such societies are, as I have said before, like china shops filled with wrestlers battling over the china - and breaking far more than they carry away.

Aggregate Evidence on the Inefficiency of Implicit Redistributions

Many quantitative studies have shown that particular government programs or protective tariffs or other implicit redistributions have social costs that are very large, even in relation to the amount redistributed. Critics may understandably object that these particular cases are unrepresentative; the studies may even have attracted economists' attention because the social costs were so high. There is, therefore, also a need for evidence about how well societies with high levels of implicit redistribution perform in comparison with other societies. If we can explain variations in growth rates and income levels across countries and regions - and especially variations that other theories cannot explain - with the aid of the type of analysis of implicit redistribution by organizations for collective action that has been offered above, then that is strong evidence that implicit redistributions often really are inefficient. The theory of efficient redistribution already suffers because it cannot explain differences in economic performance across countries, and if a theory that explains poor economic performance as due to high social costs of redistribution is successful, then the theory of efficient redistribution is in real trouble.
Although I did not deal explicitly with the theory of efficient redistribution in The Rise and Decline of Nations, I did show there that all of the really remarkable examples of economic growth and stagnation since the Middle Ages could be explained in large part by the density of narrow coalitions for collective action. Thus the evidence in that book is also telling evidence against the theory of efficient redistribution. For the benefit of readers who do not know that book, I shall refer to a couple of the most dramatic pieces of evidence in it. Then I shall show that the new evidence about international trade in Chapter 2 is also aptly explained by the argument that we have just been through. The argument in Rise and Decline predicts that long-stable societies have more groups that have been able to overcome the difficulties of collective action than lately unstable societies, and that the redistributions that narrow organizations for collective action have an incentive to seek normally have high social costs. The long-stable societies with many narrow distributional coalitions should therefore be less efficient and (for reasons spelled out in Rise and Decline) also less dynamic than otherwise comparable societies. A great deal of evidence suggests that this is indeed the case. The society that has had the longest period of stability and immunity from invasion and institutional destruction is Great Britain. And Great Britain in the twentieth century, as the theory predicts, has the poorest economic performance of all of the major developed democracies. One of the smaller developed democracies, Ireland, has suffered from an even poorer economic performance, but Ireland has also never had its coalitions destroyed by upheaval and (by a process that will be described later in this chapter) it accumulated distributional coalitions at an exceptionally rapid rate during the long period when it had extremely high protection of manufactures. The theory also has clear implications for the Axis nations defeated in World War II. In Germany and Japan, and to a lesser extent in Italy, the repressive dictatorial governments and the allied occupations after the war eliminated most of the distributional coalitions. A few such organizations were created during or shortly after the allied occupations, but most of these were "encompassing" organizations. The theory that has been outlined implies that, after a free and stable legal order had been established, those societies should have grown surprisingly rapidly. And, as everyone knows, they enjoyed "economic miracles." With appropriate elaboration, the aforementioned theory also explains the general pattern of regional growth in the United States since World War II.2 [Footnote 2: "The South Will Fall Again: The South as Leader and Laggard in Economic Growth," Southern Economic Journal, 49 (April 1983), pp. 917-32.]

The Salience of the Evidence in Chapter 2

We found earlier that it is inherent in the logic of collective action that small groups, such as the small number of firms in a concentrated industry, have less difficulty acting in their common interest than large groups. If a country (and especially a smaller country) protects its manufacturing industry from foreign competition, then often only a few firms will need to collude to fix prices in a given line of industry. A small number of firms in a concentrated industry will usually be able to fix prices without help from the government, but in any event their small numbers will ease the formation of a lobby to get governmental help in enforcing the price fixing. Collective action among the few firms that produce a given manufactured good is accordingly fairly common, even in politically unstable environments. Any effective price-fixing agreements in manufacturing that work for any length of time need to include some specifications or understandings about the definition and quality of the manufactured product whose price is being fixed, since individual firms can profit by getting a larger share of the cartelized market through subtle price reductions in the form of free add-ons and extra quality. In the long run, this usually means more complex agreements and regulations that slow down the rate of innovation. If, by contrast, a country is completely open to foreign manufactures, then normally all of the firms in the world producing the relevant manufactured good will need to combine if prices are to be fixed. There are so many different manufacturing firms in diverse countries around the world, and the difficulties of coordinating their behavior across many national, cultural, and linguistic borders are so great, that successful worldwide cartelization of manufactures is rare. Since there is no world government, we can be certain that no government will be using its coercive power to enforce a worldwide cartel agreement or responding to any lobbying for worldwide special-interest legislation. Collusive price fixing among the manufacturing firms in each industry is accordingly more likely in countries (and especially small countries) with protection than in those with open markets. If the manufacturing firms in a country are protected against foreign competition, they do not need to worry that their costs of production will become higher than those in other countries. If the workers are cartelized in the protected industry, they do not need to worry about wage demands that make production more costly than elsewhere. In countries with free trade in manufacturing, by contrast, unions are often severely constrained in the extent to which they can exploit their monopoly over the supply of the relevant labor by the competition of foreign firms. Although unionization takes longer to develop than collusion among manufacturing firms and is in some unstable societies obstructed by periods of repression, some countries that protect manufacturing have cartelized wage levels in the protected industries that are far above the levels that would be sustainable without the protection. So protectionism of manufactures not only leads to collusion among the protected manufacturing firms, but often in due course also to sky-is-the-limit monopoly wage setting that can have no counterpart in manufacturing industries that must meet international competition. These unrestrained wage levels are sustainable only if there are work rules that keep employers from upgrading new and unskilled labor to replace the workers with the wages that are farthest above competitive levels. Such work rules make industrial life much more complex and legalistic and thereby reduce innovation. In making any manufactured good of any complexity, many different components and resources - sometimes thousands of different inputs - are needed. If each of these can be purchased without hindrance in whichever part of the world offers the best value, manufactured goods of better quality or lower price can be produced.
In the protectionist country, some of the components and inputs that are needed will be more expensive and harder to get because they are themselves subject to protection. If protection takes the form of quotas and exchange control, this problem can make it impossible for a manufacturer, especially one in a smaller country, to be competitive on the world market in the production of any complex manufactured product. And, as we saw in Chapter 2, none of the highly protectionist smaller countries were able to sell much in the way of manufactures on competitive international markets, whereas those smaller manufacturing countries with relatively low protection of manufactures (such as Sweden) could. The conceptual framework that has been offered here is certainly consistent with (and I believe in large part explains) the extraordinarily strong association between the openness of smaller countries to imports of manufactures and their success in developing competitive manufacturing industries. Small countries with high protection of manufactures find, as collective action builds up behind their tariff barriers, that they suffer from pervasive and almost limitless implicit redistribution in manufacturing, and become so inefficient that they cannot sell manufactured goods in competitive world markets. These results also bear upon some recent developments in the theory of international trade. As we recall from Chapter 2, these recent developments suggested that, when there is imperfect competition and economies of scale, a country can sometimes best serve its economic interests through protection of manufactures. Though the results here do not call the logic of the new models into question, they do suggest that these models have only a very limited pertinence for public policy. Imperfect competition and decreasing costs are most significant in manufacturing in smaller countries, and it is exactly those conditions that have been considered in this book. The results here suggest that it is precisely when decreasing costs and imperfect competition are conspicuous that protection will be especially damaging, since in these conditions protection especially reduces the number of producers that must act collectively in order to collude and cartelize. The failure of the small countries with high protection of manufactures to export manufactures suggests that the damage done by the collective action that the protection facilitates is of much greater significance than the static gains those countries could sometimes receive from protection in the cases where there are decreasing costs and imperfect competition.

Sudden Increases in the Size of the Market and the Polity that Determines Trade Policy

We also saw in Chapter 2 a dramatic tendency for rapid economic development when a great expansion in the size of the market occurred through national unification or through the creation of a common market. Why would this "jurisdictional integration" be so strongly correlated with economic development? If the difficulties of collective action ensure that it emerges only slowly, and if narrowly-based distributional coalitions obtain implicit socially costly redistributions to themselves, then we can see why jurisdictional integration generates rapid economic development. The creation of a much bigger market and of a bigger jurisdiction for determining trade policy will undercut most of the existing distributional coalitions.
It takes some time before new ones form, so there is exceptionally rapid economic growth for a time. Consider a small protected market like a medieval town with its own walls and economic policies. Suppose that suddenly the protection in the small jurisdiction is eliminated because there is national unification or the creation of a common market. Then the organizations for collective action - the guilds that have profited from the use of their cartelistic powers and lobbying powers - will find that after jurisdictional integration their customers can purchase from other suppliers in other towns, or in the suburbs, or in the countryside. Suddenly, because of the creation of a wider market, the guilds have lost their monopoly power. Since the jurisdictional integration creates a much larger jurisdiction, it also requires lobbying on a far larger scale, so the organizations that were of a size suitable to lobby the town will usually not be strong enough to influence the new governmental unit. So the theory predicts that the extent of damage done by organizations for collective action will be much smaller than usual after there has been a big freeing of trade, whether through national unification, a common market, or unilateral freeing of trade. Organizations for collective action will eventually emerge again on a scale sufficient for lobbying or cartelizing the larger jurisdiction and the larger markets that have been created. But if my argument is right, it takes quite some time to overcome the difficulties of collective action, at least when the groups in question are large ones. Thus for a time there can be unusually rapid growth. As the theory predicts, there was unusually rapid growth after the creation of the EEC, after the German Zollverein, after the Meiji Restoration, after the Dutch rebellion against the Spanish, and after the national unifications in England and the United States. Although the timing of the economic growth certainly is consistent with the theory, this correlation is not, of course, necessarily sufficient to establish causation. Happily, various special features of the pattern of growth offer striking support for the argument. If we look, for example, at England in the early modern period, we find that the main form of manufacturing was textile manufacturing and that, after jurisdictional integration, it came to be handled under the merchant-employer or "putting-out" system. Manufacturing was not done mainly in the cities where the guilds, the distributional coalitions of the times, held sway, but rather in the scattered cottages of the countryside. Merchants went out to the countryside to contract with cottagers to have wool spun into yarn or yarn woven into cloth. This system was expensive in terms of both transportation costs and transaction costs, but it was nonetheless cheaper than production under guild rules in the old towns. In addition, much of the economic growth was concentrated in new towns or in suburbs where guilds did not exist. When the bigger national markets were created, firms could produce wherever they found costs were lowest, and costs were lower in places that were not under the control of guilds.
Both the striking association in smaller countries between relatively low levels of protection of manufactures and success in developing a competitive manufacturing industry and the clear association between increases in the size of the market through jurisdictional integration and economic development fit neatly into the pattern predicted by the theory offered here. Both lead to less implicit redistribution.

Chapter 6: The Lower Costs and Ultimate Limits of Explicit Redistribution

The evidence presented in Chapter 2 revealed a remarkably clear and strong pattern. Among the 41 less developed countries studied by the World Bank, there was a regular tendency for the outward-looking countries to perform better than the inward-looking or protectionist countries. The large number of internationally successful industries studied by Michael Porter and his associates were systematically industries that had not been significantly subsidized or protected from either international or domestic competition. Most strikingly, in all smaller countries on which data were available, high protection of manufactures was strongly associated with the failure to export significant manufactures on competitive world markets. In addition, great increases in the size of a trading area and in the jurisdiction that determines trading policy were also regularly associated with great accelerations in economic development. The evidence on protectionism and jurisdictional integration was so overwhelming that some fairly strong conclusions could be drawn whether or not all of the other variables relevant to industrial development have been included in the analysis: the evidence was a bit like that on plane crashes, whose adverse impact on the longevity of the victims is clear even without taking into account the many other variables that affect life expectancy. This makes it all the more puzzling why the national-level evidence presented in Chapter 1 did not show any strong relationship in either direction between the sizes of governments, or the extent of transfers to low-income people, and economic growth. Because of what we know from observing what happens in individual markets, we should expect that properly specified statistical tests would show that an unusually large and growing welfare state would make a country's rate of economic growth (though by no means necessarily its level of utility or welfare) measurably lower than it would otherwise be. Yet from the fixation in ideological debates - and even from some leading economists' contributions to these debates - we have been led to expect that the extent of transfers to low-income people and the size of the welfare state were of decisive significance for the fate of nations: many people obviously take it for granted that transfers to low-income people are so overwhelmingly important for economic growth and human welfare that their impacts would be clear even in analyses that neglect other relevant variables. But the effects of transfers and the size of the welfare state on economic growth are evidently not colossal enough so that, like the effects of plane crashes on longevity, they overwhelm other factors. So we must ask why the adverse effect of a larger welfare state on economic performance has not been strong enough to overwhelm all other factors and thus to produce a striking pattern in the cross-national and historical comparisons reported in Chapter 1. The conceptual framework presented in the last three chapters suggests a possible answer.
These chapters have shown that there are powerful incentives for organized groups to seek implicit redistributions, and preferably implicit redistributions that achieve their objectives by altering relative prices rather than through the government budget. The indirect and concealed character of these redistributions - and the high degree of conditionality that is needed to conceal their redistributional purposes - usually makes them more costly to society. If the argument that has been offered so far in this book is correct, there is no reason why the amount of redistribution to low-income people, and even the share of the government in GDP, should be closely correlated with the total social costs of redistribution. It is true, as critics of large welfare states say, that redistributions generate deadweight losses, but it is wrong to jump to the conclusion that the countries that have the largest amount of explicit redistribution to low-income people, or the largest share of government disbursements in GDP, lose the most from redistribution. Countries in which less is transferred to the poor and in which the government is smaller can easily lose more from redistribution than the countries with the largest welfare states do. That is probably the main reason why, as we saw in Chapter 1, there was no strong correlation, either for the developed democracies or for all noncommunist economies, between the relative size of the public sector and the rate of economic growth. The main purpose of this study is, as the subtitle indicates, to ask some questions about Sweden. So what does the intellectual framework developed in the last three chapters tell us about Sweden? As we try to answer this question, it is important to keep in mind that there is no lack of implicit redistributions in Sweden (and in the other countries with the most generous welfare states). The argument in the prior chapter that there are often large losses from implicit redistribution is, I believe, definitely applicable to Sweden. But is Sweden the country that loses the most from implicit redistributions? Or even one of the countries that loses the most? I doubt it. We must postpone any final answer until there is further research - the main purpose of this study is to generate new questions. Yet I find it hard to imagine that anyone would argue Sweden was unique in the extent and costliness of its implicit redistributions. I have found it more difficult to find examples of strikingly costly implicit redistributions in Sweden than in most other countries. What reason is there to think that Sweden would be losing more from implicit redistributions than other countries? Why would it be losing relatively more than Argentina? Or Ireland? Or Britain and most of the other English-speaking countries? Why would it be losing more than the average country in Western Europe (not to mention those of Eastern Europe and the developing nations)? Although we must postpone any final conclusion until the experts on Sweden have researched the matter, my working hypothesis until then is that Sweden is not the country that suffers the most from implicit redistribution, nor is it probably even close to being in this position. I hypothesize that Sweden is doing as well as it is in comparison to many other countries because its performance is not dragged down as much by implicit redistribution as that of some other countries is.
The losses from implicit redistribution in the countries which have the most of it are so large that they more than offset Sweden's larger losses of measured output because of its relatively large explicit redistributions to low-income people.

What Limits the Amount of Implicit Redistribution in Sweden?

One major factor that, I believe, keeps down the quantity of implicit redistribution in Sweden is the country's relatively high resistance to tariff and quota protection for manufacturers. Certainly Sweden does not lose as much from implicit redistribution through protection of manufactures as some countries do. The statistics and the historical evidence in Chapter 2 suggest that this is a matter of extraordinary quantitative importance. The argument in Chapter 5 suggests that this relative openness to imports of manufactures also reduces the amount of implicit redistribution in Sweden's labor market and in some other factor markets as well. This raises another interesting question: Why is Sweden somewhat more favorable to free trade in manufactures than many other countries? It would take far too long to analyze this question adequately now, so I shall merely refer to some of the relevant arguments here. Perhaps one factor is the historical accident that modern Swedish (and Danish and Norwegian) industrial development began in part through primary product exports in the nineteenth century, especially to free-trading Britain, which was then about the most prosperous country in the world. The Scandinavian countries exported dairy products, timber, oats, shipping services, and iron ore, for example, in the period in which their catch-up growth began in the nineteenth century. Although I could easily be wrong, I sense that many modern Scandinavian manufacturing industries began with the processing or development of primary product exports. Logs were in time processed into finished lumber, then into paper, and finally into sophisticated paper products; iron ore exports ultimately turned into exports of iron, steel, and finally into exports of complex manufactured goods; exports of dairy products from Scandinavia maybe had something to do with the invention in Sweden of the cream separator.1 [Footnote 1: I examined this issue somewhat less superficially in a talk in Stockholm in 1984 organized by PKbanken.] I do not know enough about the matter to draw any final conclusions, but it would be useful for someone to look into the possibility that this apparent symbiosis between primary product production for export and manufacturing, especially in the context of the contemporary British example and advocacy of free trade, encouraged early Swedish manufacturing interests to be more favorable to exports and to free trade than they might otherwise have been. Probably another factor is the quality and influence of professional economists over the course of modern Swedish history. Though more of the giants of economic thought have come from Britain than from Sweden, the Swedish contributions have been extraordinarily large in relation to the size of the country - Sweden is, perhaps, the country with the largest per capita contributions to the subject. Often Swedish economists have also had a considerable influence on economic policy and on the thinking of the intellectual class in general. It might seem that my earlier argument about rational ignorance and the role of self-interest in political life would rule out any influence of ideas on economic policy, but that is not the case. As I have argued elsewhere,2 those with a professional stake in a subject do not find it rational to be ignorant of that subject. [Footnote 2: See my paper on "How Ideas Affect Societies" in Ideas, Interests & Consequences (London: Institute of Economic Affairs, 1989), reprinted in the LSE Quarterly, 3:4, Winter 1989, pp. 279-304.] Although intellectuals are as susceptible to self-interest as other people, their selfish interests are more likely to show up mainly in those matters in which they have an immediate occupational stake (such as their own pay and tenure), rather than in the typical public policy issue; individuals in a variety of social roles are like a judge or a member of a jury in the sense that their individual self-interest does not bear in any important way on the matter at hand. So there are some people who have both an incentive to become informed about a public issue and a reason to look at it in a public-spirited way, and ideas can exert an influence through them. Therefore, in spite of my emphasis on the importance of organized vested interests, I believe that the quality of ideas is also an important determinant of what policies and institutions a country chooses, and that Swedish economic performance over the long run has probably been helped by the country's strength in economics. It appears that public opinion about protectionism, for example, has been very different in Sweden, on the one hand, than in Argentina, on the other. This difference must be due, in part, to the difference in the economic professions in the two countries.

Encompassing Organizations

Another factor that probably reduces the amount of implicit redistribution in Sweden is the large role that what I call "encompassing" organizations have played, at least at times. Suppose that an organization encompasses a large part of the income-earning capacity of a country; its constituents earn, say, 50 percent of the nation's GDP. Such an organization, if it truly furthers the interests of its clients, will act very differently from the narrow coalition considered in Chapter 5 that represented only 1 percent of the income-earning capacity of a country. If the constituents of the organization get half of the benefit of anything it does to increase the prosperity of a country, that will often be enough to give the organization an incentive to do something to make the country more efficient and innovative. Such an organization, if it optimally serves its clients, will also not seek any redistributions for its clients that entail a social loss that is large in relation to the amount redistributed. If the clients of an organization get half of the Swedish GDP, they will on average bear half of the social loss from any redistribution to themselves. Their organization, if it represents them rationally, will then arrange any redistributions to them in ways that hurt the society as little as possible, and it will also stop demanding redistribution whenever the social costs of a redistribution come to be twice as large as the amount that is redistributed. Unlike the distributional coalitions considered in Chapter 5, encompassing organizations have an incentive to seek only efficient redistributions, and bargaining costs between any pair of encompassing organizations may not be prohibitively high. Thus the theory of efficient redistribution may, at times, apply to some extent in societies with encompassing organizations. Encompassing organizations have been relatively more important in Sweden than in most other countries.
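Before turning to the Swedish organizations themselves, the threshold logic just described can be made concrete with the same bookkeeping used above for the narrow coalition; the 50 percent share comes from the text, while the notation is again my own illustrative shorthand rather than the author's. With s = 0.5, R the amount redistributed to the organization's constituents, L the social loss that the redistribution causes, G the gain in national income from growth-promoting action, and C the organization's cost of that action,

\[
R - 0.5\,L > 0 \iff L < 2\,R ,
\qquad
0.5\,G - C > 0 \iff G > 2\,C .
\]

So an encompassing organization that rationally serves its constituents stops pressing for any redistribution whose social cost exceeds roughly twice the amount transferred, and it finds efficiency-improving action worthwhile whenever the gain to the society is more than about twice its own cost - a far less demanding standard than the 100-to-1 hurdle facing the 1 percent coalition.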
The LO (especially in the 1950s and early 1960s, but less so now) has represented a large proportion of the whole organized work force. To some extent, it has been linked with the Social Democratic Party, which strives to control the government by itself and thereby represent a majority of the electorate, and is accordingly an encompassing organization. The Swedish Employers' Federation represents most of the business in the country and is similarly an encompassing organization. In The Rise and Decline of Nations, I argued, in a cautious and carefully qualified way, that maybe some of the economic growth of Sweden and of other countries with encompassing organizations, such as Austria and Norway, could be attributed to the unusually encompassing character of their organizations. In subsequent publications, I have examined the strong forces that, over the long run, can make encompassing organizations break down, or fail to act in ways that serve their clients' aggregate interests, and emphasized again the dangers of considering encompassing organizations as an ideal or reliable solution to the problem of institutional sclerosis.3 I also urged more research on the matter. Some extremely interesting research along these lines has in fact been done. For 3 "An Appreciation of the Tests and Criticisms," Scandinavian Political Studies (March 1986) 59 example, in independent articles on unemployment and macroeconomic stability, Lars Calmfors and John Driffill,4 on the one hand, and Bradford DeLong and Lars Jonung,5 on the other, have found that countries with encompassing organizations and countries with decentralized, competitive arrangements tended to have lower unemployment rates than countries with the in-between arrangement of a dense network of narrow distributional coalitions. While also making other contributions, these papers greatly extend the theory in Rise and Decline and they are in general accord with the argument that has been used in this essay.6 Similarly, in another study Bernhard Heitger found that growth rates were also higher in countries that had encompassing organizations or relatively weak distributional coalitions and were lower in countries with strong but narrow distributional coalitions.7 One likely possibility is that, as the Calmfors-Driffill, DeLong-Jonung, and Bernhard Heitger papers appear to suggest, Sweden is doing as well as it is, in part, because it has had relatively more encompassing organizations than the English- speaking countries, for example. I am, however, anxious not to push this argument too far or to oversell the idea of encompassing organizations. Thus I hope that any readers who are inclined to think that encompassing organizations are a reliable solution will examine the extensive argument I have offered about how encompassing organizations tend to break down over time or come to be decisively influenced by small subsets or coalitions of their own membership.8 Factors Lowering the Costs of Explicit Redistributions While implicit redistributions are much more important and more damaging to economic performance than has previously been understood, there are also reasons why explicit 4 In "Centralization and Wage Bargaining," Economic Policy (April 1988), pp. 14-61. 5 In "Hysteresis, the Corridor, and the Political Economy of Unemployment, 1955-1986" (forthcoming). 
For a fuller analysis of the Calmfors-Driffill and DeLong-Jonung papers and for a fuller statement and wider tests on how the more general theory at issue helps explain unemployment and macroeconomic performance, see Michael Kendix and Mancur Olson, "Changing Unemployment Rates in Europe and the USA: Institutional Structure and Regional Variation," in Labour Relations and Economic Performance (London: MacMillan Press Ltd, 1990), pp. 40-67. 6 In one respect, the technical features of the Calmfors-Driffill model are somewhat different from the argument in Rise and Decline, since they assume labor cartelization at the small enterprise level as their decentralized or competitive polar case, rather than completely competitive arrangements. But in their model the cartelistic power of workers in these enterprise unions is sharply limited by the competition from firms producing close substitutes, so that in fact their argument is, as they point out, similar in spirit to mine. 7 "Corporatism, Technological Gaps, and Growth in OECD Countries," Weltwirtschaftliches Archiv (1987). 8 In "An Appreciation..." cited above. 60 redistributions are, sometimes, less damaging to economic performance than might be supposed at first glance. Some of these reasons are merely the obverse of the arguments about implicit redistribution, and I shall set them out before going on to those that are entirely different. As we saw earlier, there are fundamental reasons why such scattered groups as the poor are not able to act collectively and they are not organized anywhere. Public programs for low-income people are, accordingly, not due to lobbying or other organized action by the recipients of the transfers, but are rather mainly the result of the sympathy and egalitarian sentiments of the electorate and the political leaders they have elected. Since popular moral, sympathetic, and ideological motives mainly inspire income transfers to low-income people, there is usually no desire to conceal these transfers; political leaders may even point to them with moral satisfaction. Egalitarian welfare-state transfers to low-income people (and to the aged, handicapped, and so on) are, therefore, open transfers out of the public treasury. This means that redistributions inspired by the moral judgments of the electorate need not (and often do not) involve any monopolization or protectionism, such as coalitional redistributions typically entail. Nor do they entail conditions on government subsidies designed to create the impression they are intended to serve some broad social purpose - they already reflect the moral purposes of the electorate. As a result, the extra social costs of monopolization, of protectionism, and of conditions on government grants that appear to rationalize them are usually not a serious problem with redistributions that arise because of the sympathy of the electorate. There are also totally different reasons why the explicit redistributions inspired by the moral concern of the citizenry sometimes have lower costs than implicit redistributions. For one thing, the prototypical morally inspired redistributions are to the poor, the aged, the ill, the handicapped, and fatherless children. On average, the recipients of those redistributions inspired by sympathy are, after all, less productive than those who are well off -the same traits, such as age or handicaps, that tend to provide entitlement to welfare-state transfers, usually also imply low productivity. 
Some recipients of transfers would not have been working anyway, and transfers to such people need not have any significant deadweight loss beyond that of the taxation that is needed to pay for the transfer. In most cases, the people who are most productive and whose skills and resources are also currently prized in the society are not, at the same time, poor. A society can transfer funds to individuals who would not have been working in any case or to individuals who would, in the absence of welfare payments, be domestic servants or gardeners, yet remain dynamic and productive. But it cannot misdirect the energies of its best workers, managers, professionals, entrepreneurs, or corporations without serious losses. It is mainly the former that are the objects of the sympathy of the electorate, and mainly the latter that are able to overcome the difficulties of collective action. The recipients of transfers inspired by sympathy will, moreover, normally not be major users of intermediate goods and accessory inputs in the way many beneficiaries of distributional coalitions are. To obtain an increase in net income through redistribution of a hundred million crowns, the members of a coalition will normally have to obtain or carry out a policy that misallocates intermediate goods and complementary resources, so the firms or workers in an industry will obtain only a part of any higher prices or other benefits their organized power brings about. Society may need to spend many times as much as a hundred million crowns to increase the net income of the organized group by this amount. By contrast, when there is, through the sympathy of the electorate, a desire to shift a hundred million crowns to the poor, there need be no misallocation of intermediate goods and the like, for the poor do not normally control productive processes that use a lot of accessory resources, and the transfer to them is likely to be a fairly straightforward transfer. The limited involvement of the poor in the productive process also means that aid to them does not have much impact on the innovation that is the main source of economic growth. As Chapter 5 argued, the regulation and complex agreements that are associated with implicit redistributions delay innovation and thus affect the rate of increase of productivity as well as the static allocation of resources. By contrast, explicit redistributions to low-income people usually affect only the existing allocation of resources, rather than the processes by which innovation takes place. Yet another factor tends to make the losses in efficiency and dynamism from egalitarian-inspired redistribution less than those from redistributions obtained through the capacity to lobby or to cartelize. Although the matter is complicated by such factors as the organized power of those who administer public programs for the poor, there is still a sense in which these explicit transfers are limited by the preferences of the electorate. These transfers come out of the public treasury and their magnitude is accordingly known, and this means that in the long run they cannot be larger than some majority in the electorate is willing to accept. There is no equivalent constraint on redistributions whose magnitude and purpose are obscured. Although rational ignorance always works against efficiency in redistribution, the theory of efficient redistribution is not so far off the mark for explicit redistributions.
Certainly, Gary Becker's contention that the political opposition to a redistribution rises as its social costs increase is true for explicit redistributions. Since the costs of explicit transfers to low-income people are relatively transparent, we should expect that the opposition to them should increase as their social costs rise. This appears to have happened in many countries: as the size and excess burden of the welfare state has increased, so has the opposition to its growth. It is even possible that the design of programs to aid poor people will improve over time and that societies will, as experience and insight accumulate, converge on levels of explicit transfers that take full account of both the deadweight losses from such programs and their moral worth. A Recapitulation Let us recapitulate the argument of this essay and try explicitly to answer the second question - Why isn't Sweden worse off? My first question, about why Sweden is not even richer, has a standard answer: that Sweden's unmatched degree of egalitarianism and its uniquely large public sector impair the incentives to work, save, and allocate resources to their most productive uses. There is no standard answer to the second question. Yet we must be cautious about saying anything very general about the Swedish economy until we have an answer to the second question. Although the performance of the Swedish economy looks much less impressive now than it did at the end of the 1960s, Sweden's per capita income still puts it in the top group of countries. The Swedish economy has outperformed not only the underdeveloped and Eastern European economies, but also some economies that were once ahead of it. Even on the lowest possible estimate of Sweden's performance, its economy is ahead of Argentina's, Ireland's, and Britain's, but all these societies have had less egalitarian redistribution and relatively smaller public sectors than Sweden. The puzzle is heightened by the fact that there is no very strong tendency for the countries or historical periods with the largest welfare states to grow more slowly than those with less redistribution to low-income people. The only serious and intellectually honest way to tackle the second question is with a realization that the familiar answer to the first question is largely true. The reasons for believing that individuals respond to incentives in the way economists predict - and that tax and subsidy payments as large as those in Sweden must bring deadweight losses - are compelling. There is further evidence of the decisive importance of the familiar economic analysis of incentives in the data and historical information on international trade that was presented in Chapter 2. This chapter showed that trade in manufactures in smaller countries provides impressive evidence about the impact of protection: no small country with really high protection of manufactures has been able to develop an internationally competitive manufacturing sector. The great quantitative significance of trade policy is also made clear by the pattern of rapid growth after there has been a great increase in the size of jurisdictions and trading areas. Taken together, the unequivocal data on protectionism and economic performance and the ambiguous data on the size of the welfare state and growth are puzzling. Why does the distortion of incentives through trade policy evidently have so much more quantitative significance than the distortion of incentives through welfare state redistributions?
At first glance, the theory of "efficient redistribution" might seem to explain why Sweden and other large welfare states are doing as well as they are. This theory holds that, if the dead-weight losses from any kind of redistribution rise, the political opposition to the redistribution will also rise, and ultimately to the point where further redistribution will cease. The social losses from redistribution are accordingly usually 63 fairly small. Some versions of the theory of efficient redistribution assume that the bargaining between the groups with conflicting interests about a redistribution will continue until the joint gains of the groups in question are maximized so that the society is fully efficient. The theory of efficient redistribution as it stands is not satisfactory, in part because it fails to distinguish between two different types of redistributions of income. There are not only the explicit redistributions that are at the center of debates about the welfare state, but also implicit redistributions. These implicit redistributions occur when a government program or other collective action changes the distribution of income without increasing the aggregate real income of the society, but the policy is rationalized by alleged benefits to the nation as a whole or to groups other than the group that seeks the redistribution. For example, protectionist measures or restrictions on competition that are represented as strengthening a national economy, but actually change the distribution of income in favor of the group that seeks the protection or restriction of competition, are implicit redistributions. The social loss from redistributions of income arises in large part from the criteria or conditions that are attached to or implicit in them. If an individual in an efficient economy with competitive markets is given cash with no strings attached, the incentives of the recipient are not impaired, because he or she continues to have an incentive to allocate all resources to their most productive uses. By contrast, a redistribution that is officially restricted to those in some industry, occupation, or locality - or that takes the form of a change in relative prices - distorts the incentives facing the recipients of the redistribution and adds to social costs. The redistributions that actually occur, and the ways they are carried out in practice, depend dramatically on "rational ignorance" - the fact that the typical citizen does not serve his or her interests by spending a lot of time studying public affairs and therefore is relatively uninformed about public policy. Rational ignorance makes it possible for an organized interest to obtain a redistribution that the majority of the electorate would not have tolerated had it been fully informed. Accordingly, a redistribution that can be made to appear to be a measure that actually strengthens a society, or that is so inconspicuous that it is not noticed by the average voter, can be politically viable, even if the recipients of the redistribution are relatively well off people who would not have been able to persuade the electorate to give them a transfer on altruistic grounds. The capacity for collective action is found mainly in established groups and is stronger at upper than at lower income levels. 
This is because collective action is possible only for groups that have small numbers, like the large firms in concentrated industries, or have access to "selective incentives" that are usually available only to insiders and relatively well established groups. Those groups at the bottom of society, such as the poor and the unemployed, and some other groups, such as consumers and taxpayers, are virtually never able to act collectively. This implies that most of those groups that have the capacity to act collectively are not in a position to obtain explicit redistributions on grounds of need. They must instead use their power to get implicit redistributions. Because of rational ignorance, they can often get substantial redistributions through actions and policies that do not appear to be redistributional and that appear to serve the society as a whole, or are so inconspicuous that they are not noticed by a rationally ignorant electorate. Organized interests accordingly prefer redistributions that are not unconditional cash transfers, but rather embody conditions that make them appear to have a general social purpose. Ideally, a coalition wants policies that change relative prices in its favor, and that do not involve cash transfers from the government budget. Coalitions that represent only a tiny part of the income-earning capacity of a society have an incentive to seek such redistributions even if the social costs are large multiples of the amount they win in the distributional struggle. The fact that many groups are not able to organize for collective action means that in most cases the losers from redistributions are not able to act collectively. It follows that there is usually little or no bargaining among gainers and losers from redistributions and thus little or no tendency for bargaining to reduce the social costs of redistribution. This factor, and the incentive for organized groups to choose untransparent and inherently conditional redistributions with relatively high social costs, means that the theory of efficient redistributions is wrong for implicit redistributions. Societies with a high density of narrow distributional coalitions have lower income levels and growth rates than would otherwise be expected. Small countries with high protection of manufactures have particularly high levels of implicit redistribution, since the concentrated industries behind protectionist barriers are able to fix prices with relative ease. Cartelized labor forces in these industries can also organize redistributions to themselves with relatively little constraint. This helps to explain the strong findings on international trade in Chapter 2. The aforementioned facts, along with a number of quantitative studies of the social costs of particular implicit redistributions that have been done by other economists, support the theory offered in this book. Although Sweden undoubtedly loses a good deal from implicit redistribution, there are many reasons for thinking that it probably does not lose as much from this as some other countries do. Sweden's relatively low level of protection of manufactures, its relatively high level of economic understanding, and its "encompassing" organizations suggest that implicit redistributions may not escape social control quite so much in Sweden as in some other countries. Usually, implicit redistributions delay innovations more and have higher overall social costs than explicit redistributions.
This is partly because implicit redistributions exploit rational ignorance and cannot be transparent, and therefore entail conditions or criteria that restrict the redistribution to those in some industry or activity, which in turn distort the allocation of resources. The lack of transparency of implicit redistributions also means that they are less likely than explicit redistributions to be curtailed when their social costs get out of hand. In contrast, altruistically motivated explicit redistributions often involve some special factors that limit their social costs: the recipients are 65 generally not the most productive people in the society, so the misallocation of their time involves less social loss; they normally do not control any significant resource beyond their own time, so intermediate goods and auxiliary factors are usually not misallocated; their limited involvement in the productive process implies that the aid to them does not normally affect the rate of innovation, which is the main determinant of the rate of economic growth. Although a final answer to the second question must await further research, I propose a tentative answer designed to stimulate the thinking and research of those who know much more than I do about Sweden. Sweden may well lose somewhat more from explicit redistribution than most other countries, but explicit redistribution does not have nearly as much importance for economic performance as might be supposed from the ideological debates. At least in many countries, implicit redistribution is a more important influence on the economy. While Sweden's losses from implicit redistribution are no doubt substantial, there is no reason to suppose they are as high as in some other countries. Since the social losses from implicit redistributions are often much greater than those from explicit redistributions, the economies that have exceptionally high levels of implicit redistribution perform relatively badly. Sweden is therefore able, in spite of its high level of explicit redistribution, to surpass or at least match these countries. As I see it, that is probably why Sweden is not, relatively speaking, worse off. An auxiliary finding of the argument here is that a society can, if it has good policies generally and avoids redistributions that have no moral justification, provide decently for its poor, yet also be a dynamic and prosperous society. Too Much of A Good Thing is Bad: Nonlinearities and Lags I am very concerned that my argument should be balanced, fair-minded, and useful to thoughtful people of all political persuasions. Thus I am worried about the possibility that the argument in this book will be pushed too far. This danger can be seen most starkly by imagining that the moral concern for those of below-average income were to go to the point that each person with a below-average income would be given a transfer sufficient to bring him or her to the average level of income. If no one has a below- average income, no one can have an above-average income either: this would imply a system of taxes and transfers that would eliminate all inequality of incomes. And this, of course, would eliminate all incentive to earn income. This extreme case is useful in reminding us that the social loss from redistributions of income inspired by egalitarian motives is strikingly dependent on how much income is redistributed. 
The arguments and evidence earlier in this essay indicate that an open, competitive society can do a great deal to alleviate the misfortunes of the poor without losing its dynamism. The sclerosis in the Western societies is mainly not the result of efforts to relieve destitution, but rather of other causes. Yet, after some point, additional egalitarian redistribution must bring disproportionately large costs to society. When transfers are sufficiently large, taxes must be so high that their excess burdens and their adverse effects on risk-taking and innovation are overwhelming. Moreover, the condition inherent in egalitarian redistribution - that the recipient lose entitlement to the transfer if he or she succeeds in earning a good income - ensures that the social costs of the distribution of the transfers must rise nonlinearly when redistribution comes close to the point of eliminating all inequalities. There need be virtually no loss of dynamism in a society from helping the poorest 5 percent: their misfortunes and disabilities would have limited the extent of their production and innovation in any case, and (if other things are right) there will be a cornucopia of output from the remaining 95 percent. But if a society tries through transfers to bring even those who are 5 percent below the average income closer to the average, all incomes must be about the same and nearly all of the incentive to produce and innovate will have been taken away. As society enters anything resembling this latter range, increased transfers must lead to wildly disproportionate losses of efficiency and innovation. Earlier in this book I presented some merely illustrative data to warn readers against the commonplace assumption that the large growth of the welfare state overwhelms other factors influencing economic performance. Lest data that were offered to motivate inquiry be interpreted recklessly or in a one-sided way, I present some further data that point in the opposite direction in Figure 4 and Table 5. From the figures on the size of government and economic growth in the last few years, it appears that the countries with larger public sectors have tended to grow more slowly than those with smaller public sectors. These further data, coming as they do from only a few years and being insufficient in other ways as well, establish nothing, but they do raise a useful question. They alert us to the possibility that redistribution could be having greater social costs in more recent times, when it has been pushed a good bit farther than in the 1950s and early 1960s. What about the egalitarian redistributions in Sweden today? Are they more or less explained by my earlier argument, showing that egalitarian redistributions can have social costs that are fairly small, especially in comparison to those arising from the redistributions obtained by well-established and relatively well-off organized interests? Or have they risen into the range where the social costs are absurdly disproportionate? This is not a question that can be answered from afar, and it is in any case a matter for Swedes to decide. Moreover, to answer this question correctly one would have to go into many important aspects of the matter that I have not even touched on here. In general, these other aspects of the matter are dealt with very well in the impressive Swedish literature on the welfare state, so there was no reason for me to go into them here.
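As a minimal sketch of how one might check the pattern Olson reads off these figures, the snippet below computes a simple correlation from the Table 5 data reproduced further down. The country names, GDP-growth figures, and total-outlays figures are taken from Table 5; the Pearson-correlation helper is an assumption added purely for illustration, and nothing here goes beyond the direction of association the text itself claims.

```python
# Minimal sketch: Pearson correlation between total government outlays and
# annual GDP growth, 1980-87, using the figures from Table 5 below.
# Only the direction of the association is at issue; as the text says, these
# few years of data "establish nothing" on their own.

from math import sqrt

# (country, annual GDP growth %, total outlays % of GDP) -- from Table 5
TABLE_5 = [
    ("Australia", 2.87, 37.3), ("Austria", 1.67, 51.1), ("Belgium", 1.56, 53.9),
    ("Canada", 2.86, 45.2), ("Denmark", 1.83, 59.1), ("Finland", 3.27, 39.8),
    ("France", 1.67, 50.6), ("Ireland", 2.00, 54.0), ("Italy", 2.21, 48.2),
    ("Japan", 3.85, 33.3), ("Netherlands", 1.12, 60.2), ("Norway", 3.26, 48.3),
    ("Sweden", 1.79, 63.8), ("Switzerland", 2.04, 30.3),
    ("United Kingdom", 1.70, 46.9), ("United States", 2.55, 35.9),
    ("West Germany", 1.46, 48.0),
]


def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


if __name__ == "__main__":
    growth = [g for _, g, _ in TABLE_5]
    outlays = [o for _, _, o in TABLE_5]
    r = pearson(outlays, growth)
    print(f"Correlation between total outlays and GDP growth, 1980-87: {r:+.2f}")
```

What matters for the argument is only the sign of the coefficient: a negative value is consistent with Olson's reading that, in these years, the countries with larger public sectors tended to grow more slowly.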
The purpose of the present essay is not to settle ancient controversies or to summarize the existing literature, but rather to introduce some fresh perspectives that may enable people with a detailed knowledge of Sweden to get a better view of both sides of the matter.

Figure 4

Given the nonlinearity that has been described, we can also see why the argument about time lags discussed earlier in this essay was too simple. We can be reasonably certain that most of the adverse effects of the levels of egalitarian redistribution in Sweden in the 1930s or 1950s have already been felt. But it is too early to know what the full effects of the higher redistributions of the late 1970s and the 1980s will be.

Table 5: Average Government Size and GDP Growth, 1980-87 (percent)

Country          Annual GDP  Government   Social Security  Government    Current       Total
                 Growth      consumption  Transfers        Expenditure   Disbursement  outlays
Australia           2.87        18.5          9.3a            27.8          34.2a         37.3a
Austria             1.67        18.7         20.0             38.7          45.3          51.1
Belgium             1.56        17.5         21.9             39.4          51.3          53.9
Canada              2.86        20.0         11.6             31.6          41.9          45.2
Denmark             1.83        26.4         17.0             43.4          55.9          59.1
Finland             3.27        19.5         10.0             29.5          35.9          39.8
France              1.67        19.1         21.3             40.4          47.1          50.6
Ireland             2.00        19.0         15.3a            34.3          49.0a         54.0a
Italy               2.21        16.1         16.5             32.6          43.6          48.2
Japan               3.85         9.8         11.0             20.8          26.7          33.3
Netherlands         1.12        17.0         27.0             44.0          54.9          60.2
Norway              3.26        19.3         15.2             34.5          45.0          48.3
Sweden              1.79        28.1         18.3             46.4          59.8          63.8
Switzerland         2.04        13.1         13.3             26.4          30.3          30.3
United Kingdom      1.70        21.4         13.5a            34.9          43.8a         46.9a
United States       2.55        18.2         11.2             29.4          34.6          35.9
West Germany        1.46        20.1         16.6             36.7          43.8          48.0

Note: For Definitions and Sources, see Table 1.
a. 1980-86.

How Bright are the Northern Lights? I hope this essay has succeeded in conveying my conviction that a society can, if its policies and institutions are intelligent, prevent destitution and even make fairly generous provision for its least fortunate citizens, yet still remain a prosperous and dynamic society. If a society opens its markets to imports and avoids special-interest legislation, cartelization, and collusion, it can be innovative and prosperous even while it significantly alleviates the privations of its poorer citizens. At least to a degree, this same conviction was part of the inspiration behind the Swedish welfare state. Thus I believe there really are Northern Lights. They are beautiful. They can also give societies a rough sense of direction. But they are not bright or stable enough to save a society, if it rushes far ahead without taking along any further sources of light, from stumbling into catastrophe. From anonymous_animus at yahoo.com Tue Sep 27 19:06:15 2005 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Tue, 27 Sep 2005 12:06:15 -0700 (PDT) Subject: [Paleopsych] dealing with dictators In-Reply-To: <200509271800.j8RI0PX03919@tick.javien.com> Message-ID: <20050927190615.86308.qmail@web30807.mail.mud.yahoo.com> >>Getting rid of corrupt sociopath dictators requires either A) a long, uncertain process dependent on international diplomatic pressure; or B) a short, uncertain process dependent on military might. Well, I guess there's always C) God turns said sociopath dictator into a donkey, but that hasn't happened since Nebuchadnezzar.<< --Another option would be to create conditions in which the dictator's own people see him as weak or naive, and replace him. A revolution from below is also possible, as we've seen in many recent cases.
If the US hadn't been occupying Iraq, it's even possible that Iran would have gone toward more reform instead of backwards. When an external enemy is present, dictators last longer because they are able to turn anger within the country toward the external threat, which further empowers the dictator and prevents him from slipping in the eyes of his supporters. Dictators are notorously bad at managing their economies, and without an external enemy, the dream of prosperity they attach to themselves quickly turns sour. Kim Jong Il routinely uses fear/anger toward the US to prop himself up, and Bush was probably right to put the US in the background rather than the foreground of negotiations, giving China center stage. Michael __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From shovland at mindspring.com Tue Sep 27 19:15:21 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Tue, 27 Sep 2005 12:15:21 -0700 (GMT-07:00) Subject: [Paleopsych] dealing with dictators Message-ID: <2759387.1127848521770.JavaMail.root@mswamui-backed.atl.sa.earthlink.net> Do some research on "mindwar." It's the main option we have available. I'm attaching one of the seminal papers on the subject. Steve Hovland -----Original Message----- From: Michael Christopher Sent: Sep 27, 2005 12:06 PM To: paleopsych at paleopsych.org Subject: [Paleopsych] dealing with dictators >>Getting rid of corrupt sociopath dictators requires either A) a long, uncertain process dependent on international diplomatic pressure; or B) a short, uncertain process dependent on military might. Well, I guess there's always C) God turns said sociopath dictator into a donkey, but that hasn't happened since Nebuchadnezzar.<< --Another option would be to create conditions in which the dictator's own people see him as weak or naive, and replace him. A revolution from below is also possible, as we've seen in many recent cases. If the US hadn't been occupying Iraq, it's even possible that Iran would have gone toward more reform instead of backwards. When an external enemy is present, dictators last longer because they are able to turn anger within the country toward the external threat, which further empowers the dictator and prevents him from slipping in the eyes of his supporters. Dictators are notorously bad at managing their economies, and without an external enemy, the dream of prosperity they attach to themselves quickly turns sour. Kim Jong Il routinely uses fear/anger toward the US to prop himself up, and Bush was probably right to put the US in the background rather than the foreground of negotiations, giving China center stage. Michael __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com _______________________________________________ paleopsych mailing list paleopsych at paleopsych.org http://lists.paleopsych.org/mailman/listinfo/paleopsych -------------- next part -------------- A non-text attachment was scrubbed... 
Name: MindWar.pdf Type: application/pdf Size: 45683 bytes Desc: not available URL: From checker at panix.com Wed Sep 28 19:32:36 2005 From: checker at panix.com (Premise Checker) Date: Wed, 28 Sep 2005 15:32:36 -0400 (EDT) Subject: [Paleopsych] CHE: Educational Testing Service Expands Efforts to Measure Computer-and-Information Literacy Message-ID: Educational Testing Service Expands Efforts to Measure Computer-and-Information Literacy News bulletin from the Chronicle of Higher Education, 5.9.20 http://chronicle.com/daily/2005/09/2005092002t.htm The Educational Testing Service is expanding its efforts to measure how savvy students are about technology and about the information that they get online. After unveiling an information-literacy test last year aimed at students entering their junior year of college, the testing service has designed a new version for high-school seniors, to help colleges decide if students can handle basic information-processing tasks needed for college work. The new test will be called the ICT Literacy Assesment-Core Level. The first three letters stand for "information and communication technology." The new test was developed based on feedback from college officials, said Teresa M. Egan, project manager for new-product development at ETS. "They were really in need of something that would measure the skills of students transitioning from high school to college," she said. The testing service will begin pilot studies of the new test in January, Ms. Egan said. For the first year or so, colleges who give the test will receive only aggregate scores rather than individual scores for each test-taker. Later, once ETS officials have developed a baseline, individual scores will be given. ETS officials say that by January, they will begin giving individual scores for the test the organization developed last year, which will now be called ICT Literacy Assesment-Advanced Level. Both tests are administered online, and attempt to measure both computer skills, such as whether students know how to send e-mail attachments, and more general information-processing skills, such as whether students can determine if an online source is reliable. Scores for the advanced-level test will range from 400 to 700 points, and score reports will also contain breakdowns on how students did in each of seven areas. ETS plans to increase the price of the test from about $25 per student to between $35 and $40 per student. Officials at the testing service have also reduced the length of the exam, from two hours to 75 minutes. "There was a fatigue factor of students sitting for two hours," said Ms. Egan. And colleges will now have the option of administering the test in two parts, so that it can more easily be given during college courses, she said. The California State University System was among the first colleges to give the exam, which it used to test 3,300 students on its 23 campuses this year. Ilene F. Rockman, manager of an information-competence program for the Cal State office of the chancellor, said the test showed what she had suspected -- that many students need help when it comes to information literacy. "The assumptions that are sometimes made, that students are information- and communication-technology literate, were not always borne out by the results of this assessment," she said. "What I have said many times is that students may know how to surf the Web, they may know how to download music and send e-mail, but that does not mean they know how to analyze information." 
Neither she nor Ms. Egan would elaborate on what the aggregate scores revealed about students' strengths and weaknesses when it comes to IT literacy. Gordon W. Smith, Cal State's director of systemwide library programs, said he hopes that one day information literacy will be considered just as important as math and reading competency. He hopes that the tests might lead Cal State campuses and other colleges to offer remedial courses or tutorials to students who score poorly. "We certainly have some work to do in order to bring the skill levels of our students in information literacy up to where they ought to be," he said. _________________________________________________________________ Background article from The Chronicle: * [71]Testing Service to Unveil an Assessment of Computer and Information Literacy (11/12/2004) References 71. http://chronicle.com/weekly/v51/i12/12a03301.htm E-mail me if you have problems getting the referenced article. From checker at panix.com Wed Sep 28 19:32:42 2005 From: checker at panix.com (Premise Checker) Date: Wed, 28 Sep 2005 15:32:42 -0400 (EDT) Subject: [Paleopsych] CHE: Putting Liberal Education on the Radar Screen Message-ID: Putting Liberal Education on the Radar Screen The Chronicle of Higher Education, 5.9.23 http://chronicle.com/weekly/v52/i05/05b02001.htm By By CAROL GEARY SCHNEIDER and DEBRA HUMPHREYS As we prepare for the next round of college applications, what issues are on everyone's minds? We will be admitting one of the largest classes ever to pursue a college degree. Many applicants are, no doubt, preoccupied with such pressing questions as whether they will do well on the SAT or make the right college choice. Their parents might be worrying about tuition costs, and policy makers about continuing to increase access to higher education, improving graduation rates, decreasing college costs, and putting into effect new forms of institutional accountability. But what about learning? The national conversations about affordability, access, graduation rates, and accountability are important, of course. But we also need a parallel public conversation about the kinds of learning today's graduates need -- a conversation that directly engages students and their parents. With increasing urgency, employers in a wide array of sectors are calling for graduates who are skilled communicators, scientifically literate, adept at quantitative reasoning, oriented to innovation, sophisticated about diversity, and grounded in cross-cultural exchange. Civic leaders are expressing concern about declining rates of civic knowledge among the young and what that might mean for the future of our democracy. What we call "liberal education" has long responded to such important public concerns, now in new ways, as colleges are experimenting with how to meet 21st-century needs -- for example through asking undergraduates to conduct research, thematically linking a series of courses, and promoting service learning, to name but a few strategies. There is little evidence, however, that the public is aware of such changes in liberal education, or that high-school students and their parents have been part of any discussion about what graduates need to know in today's world. Is it any wonder that some students have come to see the college degree as just a ticket to be punched on the way to their first job? 
Although liberal education has changed over time, it has always been concerned with cultivating intellectual and ethical judgment, helping students comprehend and negotiate their relationships with the larger world, and preparing them for lives of civic responsibility and leadership. It is a philosophy of education rather than a set of majors or a curriculum at a particular kind of institution. It is a focus not just at small liberal-arts colleges, but throughout higher education. Today it helps students, both in their general-education courses and in their major fields of study, analyze important contemporary issues like the social, cultural, and ethical dimensions of the AIDS crisis or meeting the needs of an aging population. But liberal education and what it means have slipped off the public radar screen. That's why the Association of American Colleges and Universities has begun a decade-long campaign, Liberal Education and America's Promise: Excellence for Everyone as a Nation Goes to College, to expand public understanding of the value of a liberal education. In preparation for the campaign, we organized a series of eight focus groups with college juniors and seniors and college-bound high-school students from four regions of the country. The responses from all eight groups are serious and sobering. Today's high-school students are largely uninformed about the college curriculum and uncertain about its demands, while the resources available to guide their preparation for college life are very limited. Students do not regard high-school guidance counselors or colleges themselves as trusted sources of information. Operating in a vacuum, they have little understanding of the kinds of learning that either their future employers or their faculty members see as important. While some believe that the college degree is little more than a "piece of paper," most students do recognize that something important goes on during the college years. The problem is they don't really know what that "something" is or ought to be. We asked our focus groups to examine a list of college outcomes and identify which are the most and least important to them. The rankings produced across the groups are remarkably consistent. What students most value is their own preparation for professional success. They believe that such things as maturity, work habits, self-discipline, and time management are what they need to achieve in college. A few of the college juniors and seniors also recognize the importance of communication, problem solving, and critical thinking. Whether they rank those outcomes high or low, however, none of the students we interviewed identify specific courses, assignments, or activities that help prepare them to meet those outcomes. The most alarming finding has to do with what both current and prospective students consider the least important outcomes of a college education: values and ethics, an appreciation of cultural diversity, global awareness, and civic responsibility. When we further asked students about the importance of deepening their knowledge of American culture and history, of cultures outside the United States, and of scientific knowledge and its importance in the world -- three staples of a strong liberal education -- each ranked at the bottom of desired outcomes. Today's students understand that college is important to their success in the work force, but they do not recognize its role in preparing them as citizens, community participants, and thoughtful people. 
They do not expect college to enable them to better understand the wider world; they view college as a private rather than a public good. As a result, they also seem to believe that learning is mostly about individual development and simple information transfer. That is why they tend to think that if they have already studied a topic in high school (for example, American history or science), there is no logical reason to ever study it again. Moreover, we found little difference between the outcomes valued by high-school seniors and those valued by college students. That suggests that colleges are not conveying the importance of liberal education to their students. Indeed, our focus-group findings indicate a profound lack of understanding about the tradition of liberal education. We found that high-school students are almost entirely unfamiliar with the term "liberal education" and that college students are only somewhat familiar with it. Some of those who have heard the term tend to associate it only with traditional liberal arts and sciences, rather than with a broader philosophy of education important for all students, whatever their chosen field of study. Some think it occurs only in the arts and humanities, rather than in the sciences. Among those students who associate liberal education with learning critical thinking, almost all see it only as something that happens in those parts of the curriculum considered "general education," rather than in detailed studies in particular fields. The confusion goes on. For some students, a liberal education is one that is politically skewed to the left. As one college student put it, it is "education directed toward alternative methods, most often political in nature." Another college student remarked, "Initially, I thought and heard of 'liberal' as in Democrats and politics. I am conservative, so my initial reaction was to brace myself, set up a defense of my values." The lack of understanding among students -- and their parents -- about what a liberal education is matters profoundly to the futures of the students themselves. It matters to how well prepared they will be as the workers of tomorrow and as citizens in our democracy. But it also matters to the future of that democracy. We have long passed the time when we could worry only about preparing the elite, the leaders of society. In today's complex and global environment, shouldn't we aspire to provide a liberal education to all who pursue a college degree? That is what we need to be talking about -- and not just among ourselves. Carol Geary Schneider is president, and Debra Humphreys is vice president for communications and public affairs, at the Association of American Colleges and Universities. From checker at panix.com Wed Sep 28 19:33:36 2005 From: checker at panix.com (Premise Checker) Date: Wed, 28 Sep 2005 15:33:36 -0400 (EDT) Subject: [Paleopsych] News.com: Intelligence in the Internet age Message-ID: Intelligence in the Internet age By Stefanie Olsen Story last modified Mon Sep 19 04:00:00 PDT 2005 http://news.com.com/2102-11395_3-5869719.html?tag=st.util.print [Thanks to Eugen for this.] It's a question older than the Parthenon: Do innovations and new technologies make us more intelligent? A few thousand years ago, a Greek philosopher, as he snacked on dates on a bench in downtown Athens, may have wondered if the written language folks were starting to use was allowing them to avoid thinking for themselves. 
Today, terabytes of easily accessed data, always-on Internet connectivity, and lightning-fast search engines are profoundly changing the way people gather information. But the age-old question remains: Is technology making us smarter? Or are we lazily reliant on computers, and, well, dumber than we used to be? News.context What's new: Philosophers, technologists and writers are debating whether new innovations and technologies make us smarter or just lazily reliant on computers. Bottom line: The ability to reason and learn won't fundamentally change because of technology. On the other hand, technology, from pocket calculators to the Internet, is radically changing the notion of the intelligence necessary to function in the modern world. More stories on this topic "Our environment, because of technology, is changing, and therefore the abilities we need in order to navigate these highly information-laden environments and succeed are changing," said Susana Urbina, a professor of psychology at the University of North Florida who has studied the roots of intelligence. If there is a good answer to the question, it probably starts with a contradiction: What makes us intelligent--the ability to reason and learn--is staying the same and will never fundamentally change because of technology. On the other hand, technology, from pocket calculators to the Internet, is radically changing the notion of the intelligence necessary to function in the modern world. Take Diego Valderrama, an economist with the Federal Reserve Bank in San Francisco. If he were an economist 40 years ago, he may have used a paper, pencil and slide rule to figure out and chart by hand how the local economy might change with a 1 percent boost in taxes. But because he's a thoroughly modern guy, he uses knowledge of the C++ programming language to create mathematical algorithms to compute answers and produce elaborate projections on the impact of macroeconomic changes to work forces or consumer consumption. Does that mean he's not as bright as an economist from the 1950s? Is he smarter? The answer is probably "no" on both counts. He traded one skill for another. Computer skills make him far more efficient and allow him to present more accurate--more intelligent--information. And without them, he'd have a tough time doing his job. But drop him into the Federal Reserve 40 years ago, and a lack of skill with the slide rule could put an equal crimp on his career. "The notion that the world's knowledge is literally at your fingertips is very compelling and is very beguiling." --Vint Cerf, Internet pioneer Intelligence, as it impacts the economist Valderrama, is our capacity to adapt and thrive in our own environment. In a Darwinian sense, it's as true now as it was millions of years ago, when man's aptitude for hearing the way branches broke or smelling a spore affected his power to avoid predators, eat and survive. But what makes someone smart can vary in different cultures and situations. A successful Wall Street banker who has dropped into the Australian Outback likely couldn't pull off a great Crocodile Dundee impression. A mathematical genius like Isaac Newton could be--in fact, he was--socially inept and a borderline hermit. A master painter? Probably not so good at balancing a checkbook. What's undeniable is the Internet's democratization of information. It's providing instant access to information and, in a sense, improving the practical application of intelligence for everyone. 
Nearly a century ago, Henry Ford didn't have the Internet, but he did have a bunch of smart guys. The auto industry pioneer, as a parlor trick, liked to claim he could answer any question in 30 minutes. In fact, he had organized a research staff he could call at any time to get him the answer. Today, you don't have to be an auto baron to feign that kind of knowledge. You just have to be able to type G-O-O-G-L-E. People can in a matter of minutes find sources of information like court documents, scientific papers or corporate securities filings. "It's true we don't remember anything anymore, but we don't need to." --Jeff Hawkins, co-founder, Palm Computing "The notion that the world's knowledge is literally at your fingertips is very compelling and is very beguiling," said Vint Cerf, who co-created the underlying architecture of the Internet and who is widely considered one of its "fathers." What's exciting "is the Internet's ability to absorb such a large amount of information and for it to be accessible to other people, even if they don't know it exists or don't know who you are." Indeed, Doug Engelbart, one of the pioneers of personal computing technology in the 1960s, envisioned in the early '60s that the PC would augment human intelligence. He believes that society's ability to gain insight from information has evolved with the help of computers. "The key thing about all the world's big problems is that they have to be dealt with collectively," Engelbart said. "If we don't get collectively smarter, we're doomed." The virtual memory According to at least one definition, intelligence is the "ability to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn." Yet intelligence is not just about book learning or test scores; it also reflects a deeper understanding of the world. On average, people with high IQs are thought to live longer, earn more money, process information faster and have larger working memories. Yet could all this information provided by the Internet and gadgets dampen our motivation to remember anything? Working with the Treo handheld computing device he helped create, Jeff Hawkins can easily recount exactly what he did three years ago on Sept. 8, factor 9,982 and Pi, or describe a weather system over the Pacific Ocean. But without his "smart" phone, he can't recall his daughter's telephone number offhand. It's a familiar circumstance for people living in the hyper-connected Internet age, when it has become easier to program a cell phone or computer--instead of your brain--to recall facts or other essential information. In some sense, our digital devices do the thinking for us now, helping us with everything from calendar scheduling and local directions to in-depth research and "Jeopardy"-like trivia. "The key thing about all the world's big problems is that they have to be dealt with collectively. If we don't get collectively smarter, we're doomed." --Doug Engelbart, personal computing visionary "It's true we don't remember anything anymore, but we don't need to," said Hawkins, the co-founder of Palm Computing and author of a book called "On Intelligence." "We might one day sit around and reminisce about having to remember phone numbers, but it's not a bad thing. It frees us up to think about other things. The brain has a limited capacity, if you give it high-level tools, it will work on high-level problems," he said. Only 600 years ago, people relied on memory as a primary means of communication and tradition. 
Before the printed word, memory was essential to lawyers, doctors, priests and poets, and those with particular talents for memory were revered. Seneca, a famous teacher of rhetoric around A.D. 37, was said to be able to repeat long passages of speeches he had heard years before. "Memory," said Greek playwright Aeschylus, "is the mother of all wisdom." People feared the invention of the printing press because it would cause people to rely on books for their memory. Today, memory is more irrelevant than ever, argue some academics. "What's important is your ability to use what you know well. There are people who are walking encyclopedias, but they make a mess of their lives. Getting a 100 percent on a written driving test doesn't mean you can drive," said Robert Sternberg, dean of Arts and Sciences at Tufts University and a professor of psychology. Tomorrow: A look at what makes us smart in the Internet age. And what happens when the lights go out? From checker at panix.com Wed Sep 28 19:33:46 2005 From: checker at panix.com (Premise Checker) Date: Wed, 28 Sep 2005 15:33:46 -0400 (EDT) Subject: [Paleopsych] Harper's: The Uses of Disaster Message-ID: The Uses of Disaster http://harpers.org/TheUsesOfDisaster.html Notes on bad weather and good government Posted on Friday, September 9, 2005. This essay on the relationship between disasters, authority, and our understanding of human nature went to press as Hurricane Katrina hit the Gulf Coast. The excerpt below is followed by a postscript, available only on the Web, that specifically addresses the disaster in New Orleans. Originally from a forthcoming issue of Harper's Magazine, October 2005. By Rebecca Solnit. After the storm, 1865 In his 1961 study, "Disasters and Mental Health: Therapeutic Principles Drawn from Disaster Studies," sociologist Charles Fritz asks an interesting question: "Why do large-scale disasters produce such mentally healthy conditions?" One of the answers is that a disaster shakes us loose of ordinary time. "In everyday life many human problems stem from people's preoccupation with the past and the future, rather than the present," Fritz wrote. "Disasters provide a temporary liberation from the worries, inhibitions, and anxieties associated with the past and the future because they force people to concentrate their full attention on immediate moment-to-moment, day-to-day needs." This shift in awareness, he added, "speeds the process of decision-making" and "facilitates the acceptance of change." The state of mind Fritz describes resembles those sought in various spiritual traditions. It recalls Buddhism's emphasis on being in the moment, nonattachment, and compassion for all beings, and the Christian monastic tradition's emphasis on awareness of mortality and ephemerality. From this perspective, disaster can be understood as a crash course in consciousness. We should not be surprised, then, that what transpires in the immediate aftermath of a disaster is nothing like the popular version. People rarely panic or stampede, nor do they often immediately engage in looting or other acts of opportunism. The Scottish-born mathematician Eric Temple Bell, who witnessed the 1906 San Francisco earthquake and fire, saw "no running around the streets, or shrieking, or anything of that sort" but instead people who "walked calmly from place to place, and watched the fire with almost indifference, and then with jokes, that were not forced either, but wholly spontaneous." Another survivor, San Francisco editor Charles B. 
Sedgwick, noted-perhaps somewhat hyperbolically-that "even the selfish, the sordid and the greedy became transformed that day-and, indeed, throughout that trying period-and true humanity reigned." This phenomenon of "surprising" human kindness and good sense is replicated time and again. Many official disaster-preparedness scenarios nonetheless presume that human beings are prone to panic and in need of policing. A sort of Hobbesian true human nature emerges, according to this version, and people trample one another to flee, or loot and pillage, or they haplessly await rescue. In the movie version, this is the necessary precondition for John Wayne, Harrison Ford, or one of their shovel-jawed brethren to save the day and focus the narrative. In the government version, this is why we need the government. In 1906, for example, no one quite declared martial law, but soldiers, policemen, and some armed college students patrolled the streets of San Francisco looking for looters, with orders to shoot on sight. Even taking food from buildings about to burn down was treated as a crime: property and order were prized above survival or even reason. But "the authorities" are too few and too centralized to respond to the dispersed and numerous emergencies of a disaster. Instead, the people classified as victims generally do what can be done to save themselves and one another. In doing so, they discover not only the potential power of civil society but also the fragility of existing structures of authority. * * * The events of September 11, 2001, though entirely unnatural, shed light on the nature of all disasters. That day saw the near-total failure of centralized authority. The United States has the largest and most technologically advanced military in the world, but the only successful effort to stop the commandeered planes from becoming bombs was staged by the unarmed passengers inside United Airlines Flight 93. They pieced together what was going on by cell-phone conversations with family members and organized themselves to hijack their hijackers, forcing the plane to crash in that Pennsylvania field. The police and fire departments responded valiantly to the bombings of the World Trade Center towers and the Pentagon, but most of the people there who survived did so because they rescued themselves and one another. An armada of sailboats, barges, and ferries arrived in lower Manhattan to see who needed rescuing, and hundreds of thousands were evacuated by these volunteers, whose self-interest, it is reasonable to assume, would have steered them away from, not toward, a disaster. In fact, coping with the swarm of volunteers who, along with sightseers, converge on a disaster is part of the real task of disaster management. The days after 9/11 constituted a tremendous national opening, as if a door had been unlocked. The aftermath of disaster is often peculiarly hopeful, and in the rupture of the ordinary, real change often emerges. But this means that disaster threatens not only bodies, buildings, and property but also the status quo. Disaster recovery is not just a rescue of the needy but also a scramble for power and legitimacy, one that the status quo usually-but not always-wins. The Bush Administration's response after 9/11 was a desperate and extreme version of this race to extinguish too vital a civil society and reestablish the authority that claims it alone can do what civil society has just done-and, alas, an extremely successful one. 
For the administration, the crisis wasn't primarily one of death and destruction but one of power. The door had been opened and an anxious administration hastened to slam it shut. You can see the grounds for that anxiety in the aftermath of the 1985 Mexico City earthquake, which was the beginning of the end for the one-party rule of the PRI over Mexico. The earthquake, measuring 8.0 on the Richter scale, hit Mexico City early on the morning of September 19 and devastated the central city, the symbolic heart of the nation. An aftershock nearly as large hit the next evening. About ten thousand people died, and as many as a quarter of a million became homeless. The initial response made it clear that the government cared a lot more about the material city of buildings and wealth than the social city of human beings. In one notorious case, local sweatshop owners paid the police to salvage equipment from their destroyed factories. No effort was made to search for survivors or retrieve the corpses of the night-shift seamstresses. It was as though the earthquake had ripped away a veil concealing the corruption and callousness of the government. International rescue teams were rebuffed, aid money was spent on other programs, supplies were stolen by the police and army, and, in the end, a huge population of the displaced poor was obliged to go on living in tents for many years. That was how the government of Mexico reacted. The people of Mexico, however, had a different reaction. "Not even the power of the state," wrote political commentator Carlos Monsiváis, "managed to wipe out the cultural, political, and psychic consequences of the four or five days in which the brigades and aid workers, in the midst of rubble and desolation, felt themselves in charge of their own behavior and responsible for the other city that rose into view." As in San Francisco in 1906, in the ruins of the city of architecture and property, another city came into being made of nothing more than the people and their senses of solidarity and possibility. Citizens began to demand justice, accountability, and respect. They fought to keep the sites of their rent-controlled homes from being redeveloped as more lucrative projects. They organized neighborhood groups. And eventually they elected a left-wing mayor--a key step in breaking the PRI's monopoly on power in Mexico. * * * Americans work more hours now than anyone else in the industrialized world. They also work far more than they themselves did as recently as a few decades ago. This shift is economic--call it Reaganomics or Chicago-style "liberalism" or "globalization"--but it is cultural too, part of an odd backlash against unions, social safety nets, the New Deal and the Great Society, against the idea that we should take care of one another, against the idea of community. The proponents of this shift celebrate the frontier ideals of "independence" and the Protestant work ethic and the Horatio Alger notion that it's all up to you. In this light, we can regard the notion of "privatization" as a social phenomenon far broader than a process by which government contracts are granted. Citizens are redefined as consumers. Public participation in electoral politics falters, and with it any sense of collective or individual political power. Public space itself--the site for the First Amendment's "right of the people peaceably to assemble"--withers away. Free association is aptly termed, for there is no profit in it.
And since there is no profit in it, we are instead encouraged by our great media and advertising id to fear one another and regard public life as a danger and a nuisance, to live in secured spaces, communicate by electronic means, and acquire our information from that self-same media rather than from one another. The barkers touting our disastrous "ownership society" refuse to acknowledge that it is what we own in common that makes us strong. But disaster makes it clear that our interdependence is not only an inescapable fact but a fact worth celebrating--that the production of civil society is a work of love, indeed the work that many of us desire most.

Postscript

At stake in stories of disaster is what version of human nature we will accept, and at stake in that choice is how we will govern, and how we will cope with future disasters. By now, more than a week after New Orleans has been destroyed, we have heard the stories of poor, mostly black people who were "out of control." We were told of "riots" and babies being murdered, of instances of cannibalism. And we were provided an image of authority, of control--of power as a necessary counter not to threats to human life but to unauthorized shopping, as though free TVs were the core of the crisis. "This place is going to look like Little Somalia," Brigadier General Gary Jones, commander of the Louisiana National Guard's Joint Task Force, told the Army Times. "We're going to go out and take this city back. This will be a combat operation to get this city under control." New Orleans, of course, has long been a violent place. Its homicide rate is among the highest in the nation. The Associated Press reports that last year "university researchers conducted an experiment in which police fired 700 blank rounds in a New Orleans neighborhood in a single afternoon. No one called to report the gunfire." That is a real disaster. As I write this, however, it is becoming clear that many of the stories of post-disaster Hobbesian carnage were little more than rumor. "I live in the N.O. area and got back into my house on Saturday," one resident wrote to Harry Shearer's website. "We know that the looting was blown out of proportion and that much of it was just people getting food and water, or batteries and other emergency supplies. That is not to say that some actual looting did not go on. There was, indeed, some of that. But it was pretty isolated. As was the shooting and other violence in the streets." As the water subsides and the truth filters out, we may be left with another version of human nature. I have heard innumerable stories of rescue, aid, and care by doctors, neighbors, strangers, and volunteers who arrived on their own boats, and in helicopters, buses, and trucks--stories substantiated by real names and real faces. So far, citizens across the country have offered at least 200,000 beds in their homes to refugees from Katrina's chaos on hurricanehousing.org, and unprecedented amounts have been donated to the Red Cross and other charities for hurricane victims. The greatest looter in this crisis may be twenty-year-old Jabbar Gibson, who appropriated a school bus and evacuated about seventy of his New Orleans neighbors to Houston.
Disasters are almost by definition about the failure of authority, in part because the powers that be are supposed to protect us from them, in part also because the thousand dispersed needs of a disaster overwhelm even the best governments, and because the government version of governing often arrives at the point of a gun. But the authorities don't usually fail so spectacularly. Failure at this level requires sustained effort. The deepening of the divide between the haves and have nots, the stripping away of social services, the defunding of the infrastructure, mean that this disaster--not of weather but of policy--has been more or less what was intended to happen, if not so starkly in plain sight. The most hellish image in New Orleans was not the battering waves of Lake Pontchartrain or even the homeless children wandering on raised highways. It was the forgotten thousands crammed into the fetid depths of the Superdome. And what most news outlets failed to report was that those infernos were not designed by the people within, nor did they represent the spontaneous eruption of nature red in tooth and claw. They were created by the authorities. The people within were not allowed to leave. The Convention Center and the Superdome became open prisons. "They won't let them walk out," reported Fox News anchor Shepard Smith, in a radical departure from the script. "They got locked in there. And anyone who walks up out of that city now is turned around. You are not allowed to go to Gretna, Louisiana, from New Orleans, Louisiana. Over there, there's hope. Over there, there's electricity. Over there, there is food and water. But you cannot go from here to there. The government will not allow you to do it. It's a fact." Jesse Jackson compared the Superdome to the hull of a slave ship. People were turned back at the Gretna bridge by armed authorities, men who fired warning shots over the growing crowd. Men in control. Lorrie Beth Slonsky and Larry Bradshaw, paramedics in New Orleans for a conference, wrote in an email report (now posted at CounterPunch) that they saw hundreds of stranded tourists thus turned back. "All day long, we saw other families, individuals and groups make the same trip up the incline in an attempt to cross the bridge, only to be turned away. Some chased away with gunfire, others simply told no, others to be verbally berated and humiliated. Thousands of New Orleaners were prevented and prohibited from self-evacuating the city on foot." That was not anarchy, nor was it civil society. This is the disaster our society has been working to realize for a quarter century, ever since Ronald Reagan rode into town on promises of massive tax cuts. Many of the stories we hear about sudden natural disasters are about the brutally selfish human nature of the survivors, predicated on the notion that survival is, like the marketplace, a matter of competition, not cooperation. Cooperation flourishes anyway. (Slonsky and Bradshaw were part of a large group that had set up a civilized, independent camp.) And when we look back at Katrina, we may see that the greatest savagery was that of our public officials, who not only failed to provide the infrastructure, social services, and opportunities that would have significantly decreased the vulnerability of pre-hurricane New Orleans but who also, when disaster did occur, put their ideology before their people. 
Rebecca Solnit
September 8, 2005

About the Author

Rebecca Solnit is the author of several books, including Hope in the Dark: Untold Histories, Wild Possibilities, and, most recently, A Field Guide to Getting Lost. She lives in San Francisco.

From checker at panix.com Wed Sep 28 19:41:15 2005 From: checker at panix.com (Premise Checker) Date: Wed, 28 Sep 2005 15:41:15 -0400 (EDT) Subject: [Paleopsych] MSNBC: Engineers turn to biology for inspiration. Message-ID: Engineers turn to biology for inspiration. http://www.msnbc.msn.com/id/9377826/site/newsweek/#storyContinued/print/1/displaymode/1098/ Sept. 26, 2005 issue - "If we have Batman and Spider-Man, why don't we have any mussel superheroes?" asks biochemist Herbert Waite of the University of California, Santa Barbara. Mussels may not be the biggest or the flashiest of sea creatures. But they do one thing exceedingly well. They make a glue that lets them anchor themselves firmly to a rock and remain there--drenched by water, buffeted by the ocean's waves. "I don't know any other adhesive that can do that," says Waite. In fact, nature can accomplish feats that engineers have only been able to dream of until now. But as scientists peer deeper into the cellular and molecular workings of nature, engineers are starting to find information they can apply to everything from advanced optics to robotics--even a mussel-inspired glue that could one day be used to repair shattered bones. The result is a new field called biomimicry, or biologically inspired design. And though nature's innovations often need radical adaptation to suit human purposes, the new approach has the potential to improve the way we do everything, from desalinating water to streamlining cars. "If you have a design problem, nature's probably solved it already," says Janine Benyus, cofounder of the Biomimicry Guild. "After all, it's had 3.8 billion years to come up with solutions." In fact, nature turns out to be an enormous wellspring of ideas. Jewel beetles, which lay their eggs in freshly burned trees, can detect fires from miles away; the defense industry is studying the beetles for clues to designing new low-cost, military-grade infrared detectors. Meanwhile, Volvo is tapping locusts' famed ability to fly in dense swarms without colliding for a possible key to anti-collision devices in cars. NASA-supported researchers at Princeton are analyzing the remarkable strength of abalone shells to help make impact-resistant coatings for thermal tiles. And the Defense Advanced Research Projects Agency is funding development of a robot that can climb vertical surfaces, using the same principle that geckos use to walk up walls and saunter upside down across ceilings. "Imagine a Mars rover that's not limited to flat terrain," says biologist Kellar Autumn of Lewis & Clark College, who is working with DARPA. Even everyday devices may benefit from nature-inspired improvements. You know how the screens on digital cameras and laptops wash out in bright light? The solution could lie in peacock feathers. Their iridescent blues and greens come not from pigments--the only actual pigment in peacock feathers is brown--but from repeating microstructures on the feather that reflect certain wavelengths in perfect sync, intensifying a given hue.
Using the same principle, Qualcomm is designing a display that uses adjustable microstructures just behind the screen's surface to create color. Because its brightness depends on ambient light rather than illumination from within, the colors actually intensify outdoors. And since a display that doesn't generate its own light requires less power, says Miles Kirby, director of product management, "a screen like this in a cell phone could be always on, and the phone could go longer between charges." Ultimately, the goal is not just to mimic nature's designs, but her production methods as well. Scientists at Sandia National Laboratories are devising novel assembly techniques inspired by seashells. Combined in a beaker of water, molecules with segments that are drawn to water and others that are repelled by it arrange themselves in predictable patterns. "As the water evaporates, they self-assemble into layers, like those in shells," says Jeffrey Brinker at Sandia. Using the same principle, but different types of molecules, he's making water filters with pores just a nanometer in diameter. "You don't need fancy instruments or nanotweezers," he says. And the process works at room temperature, without industrial furnaces and toxic solvents. "The truth is, natural organisms have managed to do everything we want to do without guzzling fossil fuels, polluting the planet or mortgaging the future," says Benyus. No wonder some people call them superheroes.

From checker at panix.com Wed Sep 28 19:41:20 2005 From: checker at panix.com (Premise Checker) Date: Wed, 28 Sep 2005 15:41:20 -0400 (EDT) Subject: [Paleopsych] James Hayden Tufts: Review of Folkways by William Graham Sumner Message-ID: James Hayden Tufts: Review of Folkways by William Graham Sumner http://spartan.ac.brocku.ca/~lward/Tufts/Tufts_1907.html Citation: James Hayden Tufts. "Review of Folkways. A study of the sociological importance of usages, manners, customs, and morals by William Graham Sumner." Psychological Bulletin, 4 (1907):384-388. Folkways. A study of the sociological importance of usages, manners, customs and morals. WILLIAM GRAHAM SUMNER. Boston, Ginn & Co., 1907. Pp. v + 692. Professor Sumner's former students will not need the implication in the preface that this book is built out of material gradually accumulated during years of instruction. The range of illustration, the crisp, clear English, the vigorous dicta on policies and current conceptions, bring back vividly the memories of what has been to many a stimulating and fruitful experience. Other readers will find the class-room genesis of the book equally apparent. The great accumulation of material, of which the present volume presents but a part, has evidently grown in the work of instruction. To some extent, at least, it might be easily organized under other titles. The book is far fuller and richer than a work written all of one piece, but it is also less sure in the ordering and arrangement of its material. The central purpose of the author is to state and illustrate his views as to Folkways and Mores. Although the former is taken for the title the focus of interest is almost entirely in the latter. "The folkways are habits of the individual and customs of the society which arise from efforts to satisfy needs." The struggle to maintain existence was carried on individually but in groups.
Each profited by the other's experience; hence there was concurrence toward that which proved to be most expedient. All at last adopted the same way for the same purpose; hence the ways turned into customs and became mass phenomena. "The young learn them by tradition, imitation and authority." Although the above would suggest a rather definitely utilitarian, and in this sense rational origin for folkways, it is insisted that the habits arise from recurrent needs and are not themselves foreseen or intended. "They are not noticed until they have long existed, and it is still longer before they are appreciated." Moreover, a further factor, which the author calls 'irrational,' enters into the formation of folkways, namely, the aleatory interest, the element of good and bad luck. "One might use the best known means with the greatest care, yet fail of the result. On the other hand, one might get a great result with no effort at all. One might also incur a calamity without any fault of his own." All such good and bad luck was attributed to superior powers, hence "the aleatory element has always been the connecting link between the struggle for existence and religion." "It was only by religious rites that the aleatory element in the struggle for existence could be controlled." [In view of this last statement and of various others like it, it is evident that Professor Sumner uses 'irrational' in the sense of 'mistaken,' rather than in the sense of 'not adapting means to ends.' If a savage believes that sympathetic magic will give him a good crop it is just as rational a process for him as a large part of human activities. To the next generation present methods of treating many diseases, or of guarding against commercial panics, or of educating children, may appear to be as far wide of the mark as the savage interpretation. So, when it is said 'The nexus between them (ghosts, demons, another world) and events was not cause and effect but magic,' it is obvious that the author must mean, 'Cause and effect as viewed by modern science.' For the savage believes profoundly that he is working for the cause of his good or ill luck when he looks to the other world, and seeks to control his welfare by the chain of what is to his mind cause and effect.] Another 'irrational' element in the folkways is due to accident or a mistaking of the post hoc for a propter hoc. Some customs formed in this way and also some formed by inference from the supposed will of the gods may be decidedly harmful. The Mores are the folkways raised to another plane. "The mores are the folkways including the philosophical and ethical generalizations as to societal welfare which are suggested by them, and inherent in them as they grow." The two elements out of which the conception of welfare is formed are 'right' and 'true.' The exact psychological root of 'right' is somewhat variously stated. The problem has evidently got its formulation in opposition to intuitionism, and without reference to the questions which now most interest the social psychologist. It is insisted that "the notion of right is in the folkways. It is not outside of them, of independent origin, and brought to them to test them." So far, it is easy to follow. But the precise element or elements in the folkways that gives rise to the idea of 'right' is not so readily located.
The following leaves it uncertain whether the stress is to be placed on the habitual factor or on the ancestral source. "The right way is the way the ancestors used which has been handed down. The tradition is its own warrant." The next citation seems to make the ancestral the ultimate source; "In the folkways, whatever is, is right. This is because they are traditional and therefore contain in themselves the authority of the ancestral ghosts." The question then arises, What is meant by 'authority of the ancestral ghosts'? Certain passages seem to use the term as equivalent to 'ghost fear.' "Thus (p. 28 f.) it may well be believed that notions of right and duty and of social welfare were first developed in connection with ghost fear and other-worldliness, and therefore that in that field also, folkways were first raised to mores." So in the preface: "They (the folkways) are intertwined with goblinism and demonism and primitive notions of luck, and so they win traditional authority." On the other hand we read that "the ways of the older and more experienced members of a society deserve great authority in any primitive group" and this is spoken of as a 'rational authority' (p. 11). Again, four elements are enumerated (p. 30), as 'ghost fear, ancestral authority, taboos, and habit.' The authority in the reference is apparently rational chiefly in the sense that it is more skilful in the use of means to ends. The question as to whether authority is also based in part upon a will or purpose directed toward the good of the group is not raised. The author's categories for explanation are on the whole frankly individualistic, although sentiments of 'loyalty to the group, sacrifice for it' are recognized--phrases which certainly imply the 'metaphysics' of Völkerpsychologie. By the other element involved in the mores, namely, that they are true, is meant that they fit into a consistent view of the world and its powers natural or supernatural, and therefore give to the particular the value of a place in a system, a world philosophy. Thus the folkways take on larger meaning and value. They are also reinforced by reflection on pleasures and pains that follow according as they succeed well or ill. The notion of welfare was a resultant from the mystic and the utilitarian generalizations combined. When this has been formed the folkways become mores. The valuable in this is chiefly its emphasis upon the fact that in customs or mores we have not only habits but also judgments of value. So far he agrees with Hobhouse (Morals in Evolution, p. 13 ff.). But whereas Hobhouse starts the approval or disapproval largely in some sympathy or antipathy, although speaking also of 'impulses social and selfish,' Professor Sumner relies on (1) a more definitely rational or utilitarian conception, (2) a mystic sanction of ghost fear, (3) possibly also a conception of 'authority' in ancestors, and (4) connection with a world-system. There seems little doubt to the reviewer that the element stressed by Hobhouse enters in; it finds expression in all the various organs of group opinion. Further, it seems evident that the conception of authority implies a conception of social unity which may be backed by fear but is never to be derived from it.
An ethical philosopher, jealous for his profession, might find ground for criticism in the apparently conflicting doctrines as to the relation of reflective thought or ethical criticism to the mores. On the one hand, philosophy and ethics seem to be regarded as invariably noxious; on the other, the author not only criticizes unhesitatingly and unsparingly the present mores, using for the purpose standards and principles which are certainly ethical and philosophical, but he provides also for a legitimate function of such critical reflection. On the one hand, he writes that philosophy and ethics "often interfere in the second stage of the sequence -- act, thought, act. Then they produce harm." So, too, 'great principles' are usually referred to in quotation marks and with the imputation that they are neither great nor worthy to be followed as principles. On the other hand, it is said that 'free and rational criticism of traditional mores is essential to societal welfare.' The solution for such contradictory statements is doubtless found in the author's conviction that most philosophy and ethics have been formed in an abstract and speculative fashion, without regard to the guiding principle of social welfare. Nevertheless a large number of the author's own keen sarcasms and judgments are not reasoned; they doubtless rest on general principles of the author's and are presented in as categorical form as any of the theories which he considers as 'ethics' and 'philosophy.' But it is ungracious to dwell upon matters of this sort. Every student of social psychology, morality, and the history of civilization will be grateful to Professor Sumner for the wealth of material which is here presented. The illustrative material is grouped under such headings as Labor, Wealth, Slavery, Cannibalism, Codes of Manners, Primitive Justice, Uncleanness, Sex, Marriage, Sacral Harlotry, Child Sacrifice, Sports, Drama, Asceticism. It has been gathered from a great range of authors, and although the student misses the names of some important workers in the field, he will be grateful that many sources have been laid under contribution which are not usually drawn upon in similar works. The author's earlier studies in the field of economic history have doubtless served a purpose here, and the obiter dicta on various sentiments and conceptions current in the political, educational, social and religious field, enliven the pages. Such themes as 'Missions,' 'Democracy, The People, Pensions,' call out vigorous expressions. Every reader will hope that the author will soon be able to carry out the further plan announced in the preface of publishing another volume or volumes of similar material upon other topics. J.H.T.

From checker at panix.com Wed Sep 28 19:41:25 2005 From: checker at panix.com (Premise Checker) Date: Wed, 28 Sep 2005 15:41:25 -0400 (EDT) Subject: [Paleopsych] Eugene Woodbury: Three visions of the distant, uncertain shore: Philip Pullman, C.S. Lewis, and Joseph Smith Message-ID: Three visions of the distant, uncertain shore: Philip Pullman, C.S. Lewis, and Joseph Smith http://www.eugenewoodbury.com/essay100.htm et seq.
[Click the URLs to get the footnotes, which require Java. Ugh. I just finished His Dark Materials, which both of my daughters strongly urged upon me, and which I recommend. I've scarcely read any fantasy, not The Lord of the Rings or Harry Potter. Maybe a novel by C.S. Lewis a long time ago. I've read maybe a hundred science fiction books. I've yet to read any of the Mormon texts.]

Philip Pullman
His Dark Materials: The Golden Compass, The Subtle Knife, The Amber Spyglass
Sally Lockhart: The Ruby in the Smoke, Shadow in the North, The Tiger in the Well, The Tin Princess
Also: Count Karlstein, I was a Rat!, Clockwork, Spring-Heeled Jack

With the publication of The Amber Spyglass and the completion of the "His Dark Materials" trilogy, Philip Pullman has produced a first-rate adventure that dares for the first time since C.S. Lewis's "Chronicles of Narnia" to place the entire sweep of Christian eschatology at the heart of a young adult fantasy series. Having set the stage for the apocalyptic showdown in The Golden Compass, and then filling out the cast of characters in The Subtle Knife, Pullman goes on in The Amber Spyglass to question the existence of God, the nature of good and evil, the nature of thought and matter. The structure of his argument holds so well over 1000 pages because the author has set his foundation firmly in the classics, a good place to begin any discussion of the meaning of life. Borrowing from Dante and Vergil, he sends Will Parry and Lyra Silvertongue on the mythic heroic journey: literally from the top of the world, to the depths of hell, and back to Eden. The title of the trilogy comes from Book II of Milton's Paradise Lost, which itself foreshadows the theological challenge Pullman has laid out for himself:

But all these in thir pregnant causes mixt
Confus'dly, and which thus must ever fight,
Unless th' Almighty Maker them ordain
His dark materials to create more Worlds

And considering how well he rises to the challenge, I think it only appropriate that Andrew Marvell's summation of Milton's work, found in the introduction to the Second Edition (1674), so well applies here as well.

In slender Book his vast Design unfold,
Messiah Crown'd, Gods Reconcil'd Decree,
Rebelling Angels, the Forbidden Tree,
Heav'n, Hell, Earth, Chaos, All; the argument
Held me a while misdoubting his Intent

"Yet as I read," Marvell records, "I lik'd his project." An understatement, to say the least. Displaying a breathtaking reach of imagination (his conceptualization of the "daemon," alone, surpasses expectations, and strikes deep chords of affirmation), Pullman pulls off his equivalent epic with a sagacity and a depth of feeling that stirs the soul.

Into the Breach

To a sufficient extent that "His Dark Materials" constitutes some of the most important writing in the genre in the last half-century. It is a work of serious literary weight, and works of serious literary weight beg comparison, or at least a vigorous shoving match. At first glance Lewis's "Narnia" seems the prime candidate. As in Pullman's trilogy, Lewis's protagonists cross the boundaries of adulthood as they cross the boundaries between worlds. The decisive element perhaps in all successful juvenile fantasy is this transitional period between childhood and adulthood, where the characters possess the qualities of both simultaneously. This is difficult--if not impossible--to depict in real life (which is perhaps why I so dislike all the video renditions of Narnia I've ever seen.
Though I think that Hayao Miyazaki could carry it off--note the relationship between Nausicaa and Asbel, and Lyra and Will.) But as a literary device it works wonderfully when done right. Harry Potter, for example. And it's not a matter of portraying children as small grownups. Though Lyra and Will and Harry Potter (and Miyazaki's Nausicaa) are often called on to behave as no child could or would--no matter how brave or precocious--they are not behaving as adults could or would, either. They act, rather, even when yielding to their darker impulses, with a purity of intent that adults never achieve. They thus represent a state of transcendence: in the world, but not beholden to the distracting and prosaic and cynical concerns that become the inevitable burden of growing old. So these are easy associations to make. Even easier to make when you consider that both Lewis and Pullman studied at Oxford and went on to teach literature (Pullman at Westminster College, Lewis at Oxford and Cambridge). In terms of theological surmise, although both works similarly circumnavigate the continents that separate Genesis and the Ends of the Earth, the more appropriate mirror to hold up to Pullman's work is the lesser known "Space Trilogy." To begin with, both Pullman's "His Dark Materials" and Lewis's "Space Trilogy" are informed by an intimate knowledge of the academic environment. Out of the Silent Planet sets forth from Cambridge; The Golden Compass originates at Oxford, and both are ultimately concerned with the triumph of good over evil. But these are also correlations that can distract more than they inform, and hide the more important similarities hidden deep within the stories the two authors tell.

A Return to the Schoolyard

Their styles, to begin with, differ considerably. Pullman sweeps his landscape with a spyglass, pulling his characters into focus with the long lens; Lewis writes with a microscope, focused on the small, sharp, human foibles that make his human (and not so human) actors human. His comminatory narrative shines above all else, proving the old writer's adage wrong: you can show by telling. ^(1) It is, to be sure, a strange talent. Heroes and villains of Shakespearean magnitude only peripherally step onto his stage: Aslan is the Lion, and the White Witch is, well, a wicked one. But if Lewis doesn't have much to say about the melodramatics of evil, he has plenty to say about ordinary meanness (both the unpleasant and the small). ^(2) Enough to constitute two notable volumes: The Great Divorce and The Screwtape Letters. He has Screwtape, in fact, complain of the task that he, the author, has been reduced to: sinners "so muddled in mind, so passively responsive to environment," as to render them "hardly worth damning." And not so pleasant to have around, either. Many of his child actors seem refugees from some hellish school playground, gripped by a kind of nascent nastiness that occasionally infects the narrator; though, as in The Screwtape Letters, Lewis's slings and arrows more often than not puncture his protagonists. In That Hideous Strength, Lewis diverts the point-of-view of the first two books away from the now Jeremianic Ransom and focuses instead on Mark and Jane Studdock. Two very ordinary people--indistinguishable even today from any middle-class professional couple--with very ordinary problems, contemplating ending a marriage that has ceased to inspire either of them. "He was an excellent sleeper," Jane Studdock observes of her husband.
"Only one thing ever seem able to keep him awake after he had gone to bed, and even that did not keep him awake for long." And poor Mark Studdock, whose soul is up for sale in That Hideous Strength, hardly comprehends the Faustian bargain he is negotiating. Like the rest of us, he's after a good job, better pay, an enhanced reputation. His weakness is a quiet insecurity, a wanting to be liked: "If he were ever cruel it would be downwards, to inferiors and outsiders who solicited his regard, not upwards to those who rejected him. There was a good deal of the spaniel in him." Yet nobody shouts or weeps or carries on, no lawyers are retained, no divorce papers filed. The apocalypse waits upon the fate of a mundane marriage that shows every sign of dying with a whimper. Yet the import of this lost cause is never lost. Lewis's eschatology can be as subtle as his sense of the fine divide--that moment of zero slope along the curve--between what makes right and wrong: There may have been a time in the world's history when such moments fully revealed their gravity, with witches prophesying on a blasted heath or visible Rubicons to be crossed. But . . . it all slipped past in a chatter of laughter, of that intimate laughter between fellow professionals, which of all earthly powers is strongest to make men do very bad thing before they are yet, individually, very bad men. Lewis's attention to such subtleties of human frailty, his acuity of observation, makes for a rhetorical weapon with a dangerous edge. Lewis is too easily able to reduce his enemies with ad hominem appraisals that possess the veneer of rational discourse. And in combination with his sometimes reactionary Victorianism, it turns into a kind of blunderbuss, and you hear the sound of the white Englishman's burden falling to the floor with a hollow clunk. Equating quality of character with the wearing of corsets, for example; and a remark about Eustace Scrubb's parents at the beginning of The Voyage of the Dawn Treader being "vegetarians, nonsmokers, and teetotalers" and wearing a "special kind of underclothes" that is so far out of left field I cannot pretend to understand what he meant by it. ^(3) Subversive Christianity And then there's Lewis's theology, outside the context of which nothing he wrote can be intelligently discussed. Lewis is carrying on in Narnia the job he began in Mere Christianity, laying on top of his stories a thick layer of apologetics, answering his academic critics (The Silver Chair being a case in point) with children's voices. And when it's your world and your rules, it's not hard to win all the arguments. It's not exactly fighting fair, and Lewis, making the most of his education, with a rich command of allegory at his fingertips, knows how not to show his hand all at once. Lewis risks, nevertheless, what may be called the Socrates Syndrome. George Bernard Shaw describes it well in the introduction to Saint Joan: the intelligent, rhetorically-gifted individual, convinced of his own rightness, who never quite understands that his brilliant arguments, although transfixing to the choir, only piss off those who disagree with him. Having been weaned on Lewis, I have developed something of an immunity to his faults. He comes across to me now almost as one of his characters, a frumpy Edwardian, the eccentric relation who pops up every Thanksgiving grumbling about the slipshod state of the modern world. You put up with him because when you settle him down the old guy tells such good stories. 
Nevertheless, extreme annoyance is exactly my reaction to Plato. His mentor's fate may have been unjust, but it doesn't surprise me one bit. But C.S. Lewis is read primarily by children to whom these machinations are mostly transparent, or by adults who have already claimed discipleship. It is the surprising strength of Lewis's ecumenicism that demands study by any serious propagandist, as the whole Christian world wants to claim him as their own, even those sects whose theological differences are sufficient to bring them to evangelical knife points. ^(4) I suspect Lewis has achieved such a mythic status because what he stands for eclipses what he says. Few of his fans, I'm convinced, have read carefully what the man actually wrote (true of Holy Scripture in general). Notwithstanding all this, the enormous popularity of the series proves yet again the power of raw story to overcome deficiencies in the prose (J.K. Rowling being another prime example). Which is why I praise "The Chronicles of Narnia" as one of the most subversive works of young adult fiction ever written.

http://www.eugenewoodbury.com/essay110.htm

C.S. Lewis
Chronicles of Narnia: The Magician's Nephew, The Lion, the Witch and the Wardrobe, The Horse and His Boy, Prince Caspian, The Voyage of the Dawn Treader, The Silver Chair, The Last Battle
Space Trilogy: Out of the Silent Planet, Perelandra, That Hideous Strength
Christian Apologetics: Mere Christianity, The Screwtape Letters, The Great Divorce, The Weight of Glory
Also: Till We Have Faces, The Abolition of Man

To the Contrary

Subversiveness, you see, is not necessarily a bad thing. To good or bad ends, it depends on which side you agree with. (We don't really mind the cheap shots when we wish we thought of them first.) And I'm not sure that what you can't see can hurt you, else the world would be full of many more Anglicans than it is. There is a quality of cluelessness--call it innocence--that protects children from ulterior motives, just as it protects them from the Specters of Cittàgazze. Philip Pullman has also been branded with the label, not because he is, but because people don't agree with him. And because people like to be shocked and offended, and thereby reassured that we'd all be better off if everybody else saw the world exactly the way we see it. Taking the label at face value, "His Dark Materials" is, yes, an exercise in not seeing the world the way most Americans see it. (Not that I believe that Pullman had Americans particularly in mind, but we rise always to the occasion.) But there is a difference. You can't exactly be subversive when you lay all your cards on the table. Pullman does. And quite a lot of cards Pullman does put on the table, embracing Really Big Ideas in not-so-acceptable ways. In this reworking of Paradise Lost, he asks a compelling hypothetical. Given that Milton's version gives the devil all the good lines, what if--because it's the winner's version that's always the accepted version--what if those rebellious angels were on the side of right all along? For our bad guy, Pullman posits that Metatron ^(5) has pulled a coup d'etat on God, thrown out the good guys, and decided that it's time to tighten the screws--using the Church as his instrument--the human race having gotten a bit too carried away with this free agency stuff. Frankly, not an unreasonable surmise, considering the way organized religions (and governments) have behaved throughout great swathes of human history.
Personally, I like the idea that if we were in fact that unruly third of the host of heaven cast down to Earth, it would go a long way in explaining why human beings can be so awful to each other, and why power and agency are so coveted yet so abused. In the larger view, though, Pullman has adopted a more Olympian than Christian architecture. The Gods meddling with the humans. (Compare Vergil.) But it's an unfortunate commentary about our jaded times that heresy--by which I mean nontraditional ways of looking at the relationship between God and man, not blasphemy, with which it is often confused--doesn't get much of a rise out of anybody but the Fundamentalist fringe, and then them for all the wrong reasons. It's somewhat reassuring to see that J.K. Rowling has managed to ruffle the feathers of a few Muggles. But very few. Outrage is typically reserved for shocking! (always include the exclamation point) discoveries of hints of teenage sexuality, implicit (as in The Goats by Brock Cole), or explicit (as in The Wind Blows Backward by Mary Downing Hahn). In any case, for the easily offended sex is suggested--though never stated explicitly, you can read into it what you will--in, of course, the Garden of Eden scenes, foreshadowed throughout the series. The real shocker, though, is Pullman's exegesis. This retelling of man's fall "upwards" into grace positions Pullman as a modern Pelagius to C.S. Lewis's Augustine. And here, finally, there emerges the possibility of a philosophical nexus between these two authors, and one more, that great, grossly underestimated, early 19th-century transcendentalist neo-Pelagian, Joseph Smith. ^(6)

Saints and Heretics

Pelagius was a contemporary of Augustine, well educated and fluent in Latin, most probably a native of Ireland. ^(7) He resided in Rome during the late 4th century and there developed a theology of salvation and personal perfection that two decades later, at the Council of Carthage in 418, would be declared heresy. Augustine's view of the Fall of Adam, Original Sin, the necessity of child baptism and the necessity of the Grace of Christ, would become the unquestioned orthodoxy of the Catholic church. In the spring of 1820, in western New York State, Pelagius found himself a champion in the person of Joseph Smith. A Yankee (born in Vermont), and a Methodist by upbringing, Smith saw visions of God as a fourteen-year-old boy, was instructed by an angel to dig out of a nearby hill the ancient record of the ancient Americas, which he published as the Book of Mormon. He went on to define a theology both outrageously unique and brazenly syncretic; it would be received by the greater Christian community about as graciously then (and today) as Pelagius's preachings were fourteen centuries before. Joseph Smith's effort was not simply to reject Original Sin and child baptism (his second Article of Faith reads, "We believe that men will be punished for their own sins, and not for Adam's transgression"; from the Book of Mormon: "little children need no repentance, neither baptism"), and knit together Protestant grace and the Catholic sacraments. His boldest step was to portray the human race as gods in embryo, not the offspring but the siblings of Christ. The kernel at the core of this theology is found in Psalms 82:6, "I have said, Ye are gods; and all of you are children of the most High," which Christ later quotes in John 10:34, and which Joseph Smith chose to take literally.
Compare Joseph Smith's writings with Balthamos's assertion (in The Amber Spyglass) that Dust itself is matter made self-aware, that the Angels "condensed out of Dust" and are co-eternal with God, and not the original creations of God. "Man was also in the beginning with God," reads the Doctrine & Covenants. "Intelligence, or the light of truth, was not created or made, neither indeed can be." The most definite pronouncement of this doctrine was made in a funeral address now known as the King Follett sermon, first published in the Times and Seasons, August 15, 1844: There never was a time when there were not spirits; for they are co-equal with our Father in heaven. . . . [I] proclaim from the house-tops that God never had the power to create the spirit of man at all. God himself could not create himself. Intelligence is eternal and exists upon a self-existent principle. It is a spirit from age to age, and there is no creation about it. Ask an informed Christian what disqualifies Mormonism from Christian fellowship, and this is the doctrine he will cite. More unfortunate is that the leadership of the Mormon Church has taken the criticism to heart, and has for decades been steadily covering up and backing away from what Joseph Smith preached. ^(8) Ever since rejecting polygamy in order to gain Utah statehood at the turn of the century, the church has turned ever more sharply towards an aspect of Pelagianism that Joseph Smith never fully embraced. Call it the revenge of the Augustinians.

His Good Materials

Pelagius was an ascetic, out of the Stoical tradition, and Joseph Smith definitely was not. Although the modern church has tried hard to turn him into one (it makes for a nice fit with the poor, illiterate, farm boy, Horatio Alger image). Smith loved life, loved women enough to reinvent polygamy at the same time he was inventing a brand-new religion, was at home in the physical and often gave as good as he got (which, in part, eventually got him killed). "The great principle of happiness," he wrote, "consists in having a body. The devil has no body, and herein is his punishment." On this point all three authors converge. "Dust loves matter," observes Mary Malone. Lewis uses almost the same language: "God never meant man to be a purely spiritual creature. . . . He likes matter. He invented it." God, pouts Screwtape, is "a hedonist at heart." In That Hideous Strength Lewis creates the opposite of Dust, the macrobe. Like the microbe, ubiquitous, but situated "above the animal level of animal life." And while communication between humans and macrobes has been "spasmodic, and . . . opposed by numerous prejudices," it has had a "profound influence," which if known would rewrite all of history. But the macrobes are the stuff of dark angels, inimical to human freedom, with a Manichaean loathing for matter and emotion. So much like the councils of Pullman's Church (in which Lewis's Reverend Straik would certainly find welcome tenure), the ultimate goal of the macrobes is to compromise the intellect and crush the will. Keep the context in mind when Rita Skadi contends that "[this] is what the Church does, and every church is the same: control, destroy, obliterate every good feeling." Lewis wouldn't necessarily disagree: I know some muddle-headed Christians have talked as if Christianity thought that sex, or the body, or pleasure were bad in themselves. But they [are] wrong. Christianity . . . thoroughly approves of the body [and] believes that matter is good.
In the conclusion to his chapter on sexual morality in Mere Christianity (that surely places him at odds with the conservative--and surprisingly gnostic--Protestant view that presently eclipses the American religious landscape), Lewis unapologetically states that the "sins of the flesh are bad, but they are the least bad of all sins." He provides us with this vivid comparison: "A cold, self-righteous prig who goes regularly to church may be far nearer to hell than a prostitute." To which he adds, "Of course, it is better to be neither."

The Ferryman

This distorted emphasis on "sins of the flesh" reflects that incessant human need to judge and evaluate and categorize, which arises partly out of necessity, mostly out of prejudice. The great sins, Lewis argues, are spiritual in nature, or rather, metaphysical. And the greatest of all, he insists, is pride. There is much irony in the fact, Lewis admits: "Other vices may sometimes bring people together: you may find good fellowship and jokes and friendliness among drunken people or unchaste people. But Pride always means enmity." The problem is, it's a lot easier to tell if a man smokes, or is a drunk, or sleeps around, and the strictures of organized religion are readily amenable to the human need to define tribal allegiances, to say who's on our side, and who's not. Even when it comes to outright war, religious wars are rarely about religion. It'd be almost reassuring to believe that what really divides Catholics and Protestants in Northern Ireland is the question of Papal infallibility and salvation by grace vs. works. But at the core of most "religious" conflict are battles over property and power and the right to rule. Religion supplies each side with the flags, the uniforms, and a convenient, existential grievance, if one happens to be lacking. And the choicest piece of real estate in any religious conflict is heaven. Regardless of the strength of sincere belief, heaven is still a hypothetical. But that hasn't kept anyone from staking a claim. Sort of like selling the naming rights to craters on the moon. It'd be hard to come up with a better example of this pretension in action than the "Rapture," according to which all the good, God-fearing folk (Christian God-fearing folk, that is) will be "caught up into heaven" right before the apocalypse counts down to zero. The rest of us sad sacks will get "left behind." ^(9) Compared with this, Pullman's vision of the afterlife, pursuing Dante and Vergil, is almost refreshing. We all go into the dark, as Eliot phrased it, and it sucks big time. Lewis's hell in The Great Divorce is equally dark, though its occupants are tormented by the banalities of evil. Hell is both small and infinite. Infinitely small. Heaven can't join hell simply because it can't fit. Even Minos, as it turns out, would rather rule the dead than judge them. It is a hard reality for those looking forward to an afterlife in which they will lord their righteousness over their neighbors. But like C.S. Lewis's dwarves, who make it into heaven fine, but are blind to its gifts, the dead in Pullman's Hades can't see the hell they carry inside them. The Harpies tell Lyra and Will and the Gallivespians, Thousands of years ago, when the first ghosts came down here, [God] gave us the power to see the worst in every one, and we have fed on the worst ever since, till our blood is rank with it and our very hearts are sickened. Lewis takes an opposite, but not opposing, tack.
It is not even the name of the god that matters, Aslan tells Prince Emeth, but how we behave in the name of that god that instructs the better "angels of our nature": Therefore, if any man swear by Tash and keep his oath for the oath's sake, it is by me that he has truly sworn, though he know it not, and it is I who reward him. And if any man do a cruelty in my name, then, though he says the name Aslan, it is Tash whom he serves and by Tash his deed is accepted. Joseph Smith also preached judgment relative to all possible factors. He considered it "preposterous" that anybody would be damned "because they did not believe the gospel." God, he declared, will award judgment or mercy to all nations according to their several deserts, their means of obtaining intelligence, the laws by which they are governed, the facilities afforded them of obtaining correct information, and His inscrutable designs in relation to the human family. In an echo of Vergil, Smith envisioned that these "several deserts" would require a heaven with three rings, the innermost, or highest, divided into three more. It is one of his oddest creations, and one that Mormons (proving themselves equally susceptible to human nature) have gravitated towards with particular enthusiasm. So much so that it's given rise to the joke about St. Peter giving the newly deceased a tour of Heaven. They pass by a heavily secured door, behind which a great congregation seems to be in assembly. And what is behind that impressive door? St. Peter is asked. "Ah," he says, taking the group aside and speaking in the strictest of confidences, "That's where we keep the Mormons. They think they're the only ones here." In the end, Smith concludes, "we shall all of us eventually have to confess that the Judge of all the earth has done right, [for] a man is his own tormenter and his own condemner."

The Justifying Will

The essential statement of man's relationship to his own salvation is found in the Book of Mormon: "by grace we are saved, after all we can do" (2 Nephi 25:23). That comma is much debated: whether we are saved only after exerting all, or saved despite our best efforts. Drawing on the Stoical tradition, Pelagius would have aligned himself with the former, believing that "the moral strength of man's will" was sufficient to bring a man to salvation. Justification itself depends on faith alone (anticipating Luther by a millennium), though it does not automatically sanctify the soul. Even for Lewis, our attending Augustinian, the physical must follow upon the existential, and action upon reason. But must follow. It should come as no surprise that the preeminent explainer of the Christian religion should prove a master of the dialectic. This is most apparent in That Hideous Strength, described by Lewis as a "fairy tale for adults." And a grim tale it is. Lewis is fighting with the gloves off, but at least here he stays inside the ropes. Throughout the "Space Trilogy," thought and meaning, discovered in dialogue, resolve to action: Ransom kills Weston only when other means of reason have been exhausted, after lengthy discussion; Merlin is summoned only at the climax of the conflict, with a full knowledge of what must be done. Pullman's only similarly-informed counterpart, his man with a very big plan, Lord Asriel, is kept mostly off-stage. And he never really explains himself; he just is. At the opposite extreme, Asriel's lover and Lyra's mother, the inscrutable Mrs.
Coulter, propels herself from moment to brutal moment, the grasp of meaning hovering always beyond her fingertips, while Will and Lyra and Mary Malone leap continually into the Kierkegaardian dark. As with the Studdocks, they "see through a glass, darkly"; it is action that precipitates knowledge and leads to belief, the product of which might be called trust or obedience. Obedience to this faith is not blind; obedience for Lewis requires the clearest of all vision: to see the self through the eyes of God, and then to acknowledge the humility necessary to act upon that raw and white-hot knowledge. When Mark Studdock discovers heaven, "all the lout and clown and clod-hopper in him was revealed to his reluctant inspection." Lyra likewise learns the difference--between doing what she wants, and doing what she knows is right--when she disobeys the advice of the Alethiometer: I done something very bad [she tells Will]. Because the Alethiometer told me I had to stop looking for Dust--at least I thought that's what it said--and I had to help you. I had to help you find your father. And I could, I could take you to wherever he is, if I had it. But I wouldn't listen. I just done what I wanted to do, and I shouldn't . . . . Lyra's obedience to the Alethiometer is the opposite of that "obedience" rejected by Ruta Skadi, when the good witch (not all witches are good in Pullman's universe, but the ones we know are) observes that "every increase in human freedom has been fought over ferociously between those who want us to know more and be wiser and stronger, and those who want us to obey and be humble and submit." That is the same viral strain of "obedience" preached to Mark Studdock in the "Objective Room": a bowing down to men who on one hand embrace iconoclasm as the right of those "more equal" than the rest, and at the same time preach acquiescence as the mark of the pure and the faithful. The Eternal Siege As with these elements of story, narrative, and character, there are issues of substance between Lewis and Pullman that seem more diametrical at first glance, but which, I believe, dissolve under the light of closer examination. At the heart of it, Lewis is a monarchist. Pullman is a republican, and so the monarchal Church is the enemy. The witch Ruta Skadi thus sums up her centuries of observation: "Every church is the same: control, destroy, obliterate every good feeling." Mary Malone later calls Christianity a "well-meaning mistake." Considering my own measured antipathy toward the "organized" part of organized religion, I can sympathize with the sentiment. The problem is, religions sprout like crabgrass even in the most desolate of landscapes. Any examination of human civilization, I believe, drives towards one or both of two conclusions: there is either an ecclesiastical god, or there is such an inclination in the human animal bred deeply in the bone. ^(10) The Church is the way it is because people are the way they are. And therefore suffused with human weakness: the idea that the contemporary church would even qualify as some sort of blueprint for a Kingdom of Heaven is one Lewis rejects over and over again. "You are to imagine us," Ransom lectures Mrs.
Studdock, "living on a world where the criminal classes of the [angels] have established their headquarters." It is a theme that permeates all of Lewis's writing. Facing the final showdown with evil, Ransom reminds Merlin, "We are four men, some women, and a bear ^(11) . . . . The Faith itself is torn to pieces . . . . The Hideous Strength holds all this Earth in its fist to squeeze as it wishes." A situation not so different from that faced by the desperate heroes battling the Church in The Subtle Knife and The Amber Spyglass. Yet battle they must, against desperate odds. Because Lewis, while a monarchist, is a democrat, suspicious of the collective, holding out great hope in the wisdom and resources of ordinary men. Lewis may not be a deist, but his God is forced to play the role. Consider angels. Like Pullman's, Lewis's good angels stand mostly apart from human activity. Lewis's Gods are forbidden to "send down the Powers to mend or mar in this Earth until the end of all things." In the meantime, the Oyeresu communicate through Ransom, who seeks out Merlin (as John Parry seeks out his son), while the dark forces at the Institute gather about a disembodied head, their "new man" (Lyra, like Jane Studdock, dreams of a severed head), a gateway to the gods. It is the revolt against nature which both emboldens evil and destroys it. The means become the ends. The subtle knife looses upon the world the Specters, destroyers of souls. Yet it is the "one weapon in all the universes that could defeat the tyrant," Will's father tells him. Ransom crosses the dimensions of heaven by means of a "subtle engine," devised by his archenemy Weston to breach the wall of heaven and undo Eden. ^(12) Weston dead, the Institute on brink of destruction, Ransom reflects, If of their own evil will they had not broken the frontier and let in the celestial Powers, this would be their moment of victory. Their own strength has betrayed them. They have gone to the gods who would not have come to them, and pulled down Deep Heaven on their heads. The same fate awaits Metatron (and Lord Asriel and Mrs. Coulter) in the climactic battle in The Amber Spyglass, "Deep Heaven" literally pulled down upon their shoulders, tumbling them into the same Abyss that swallows up Bracton and the Reverend Straik, who dreamed of the Kingdom of God established by "the powers of science" as its "irresistible instrument." Like Father Gomez and the Constitorial Court, men building kingdoms on Earth and rendering unto God that which is Caesar's, The National Institute of Coordinated Experiments, Lewis informs us, "was the first-fruits of that constructive fusion between the state and the laboratory on which so many thoughtful people base their hopes of a better world." But its heart belonged to hell. The Last Republic There is no institutional solution to righteousness. Human beings build cities on a hill, but they can never found a kingdom of heaven on Earth without first building a Gulag Archipelago. So when Will's father tells him, "It's time we started again, but properly this time," he is not proposing yet another utopian dream soon to degrade into self-righteous totalitarianism. As Will remembers later, [My father] said we have to build the Republic of Heaven where we are. . . . I thought he just meant Lord Asriel and his new world, but he meant us, he meant you and me. . . . No one could [build Heaven] if they put themselves first. 
We have to be all those difficult things like cheerful and kind and curious and patient, and we've got to study and think and work hard, all of us, in all our different worlds. . . . "We shouldn't live as if it mattered more than this life in this world," says Lyra, "because where we are is always the most important place." ^(13) Instructive in this regard is a comparison of Edens. In each, lines can be drawn between Weston and Mary Malone, and between Ransom and Father Gomez, between those who fear truth and knowledge, and those who trust them implicitly. One hears echoes of Lewis's Malacandra and Perelandra in the land of Pullman's Mulefa, in Will and Lyra's return there from Hades and Armageddon (compare the final chapter of The Last Battle). But a return to the Garden is not a return to paradise; it is a graduation from innocence into knowledge. In his acknowledgments, Pullman credits an essay by Heinrich von Kleist titled "The Marionette Theater." ^(14) The themes of this essay--drawing out the essential contrast between experience and innocence, and pointing to the deliberate labor that any return to Eden must require--play out with Lyra and her mastery of the Alethiometer, in an extension of the mustard seed allegory, delivered by the most unlikely of characters, and in a wonderful concluding discourse upon grace and works. As the angel Xaphania instructs Lyra, You read [the Alethiometer] by grace, and you can regain it by work. But your reading will be even better then, after a lifetime of thought and effort, because it will come from conscious understanding. Grace attained like that is deeper and fuller than grace that comes freely, and furthermore, once you've gained it, it will never leave you. This is the whole point of Eden. The problem with archetypes (and with such laden words as "grace") is that it's easy to remember the mythology and forget the original point. In the Biblical story God's greatest act is to permit Eve to be tempted, to allow the knowledge to flow to hearts and minds capable of accepting it. Again, Joseph Smith got this one right, portraying the "Fall" as a necessary step upwards in the evolution of the human race: And now, behold, if Adam had not transgressed he would not have fallen, but he would have remained in the garden of Eden. . . . wherefore they would have remained in a state of innocence, having no joy, for they knew no misery; doing no good, for they knew no sin. But behold, all things have been done in the wisdom of him who knoweth all things. Adam fell that men might be; and men are, that they might have joy. (2 Nephi 2:22-25) ^(15) A Tale Newly Told "This is good doctrine," Joseph Smith boasted. "It tastes good." In other words, this is the way the story should be told. "We all need stories," Pullman points out, "but children are more frank about it." Indeed, the admonition to "become as little children" is, if anything, an admonition to treat the structure of story seriously, to recognize that even if you don't believe in Santa Claus, you should still believe in the story. Because some subjects are "too large for adult fiction; they can only be dealt with adequately in a children's book." Or perhaps, as Lewis prefaced That Hideous Strength, in a fairy tale. All religious--all political, nationalistic, ideological--belief resolves to story, because the essence of faith and feeling cannot be reduced to objective fact, and story is the only way experience can be effectively transmitted from one mind to another.
Mormonism (as an example) is known today for its staid, business-suited veneer, for its proscriptive moral code. A far cry from the infinite expanse of imagination that Joseph Smith suffused into a green and vibrant theology. Smith began his ministry at the age of fourteen, and began a religion with the epic story of two teenagers (Nephi and Mormon). These are the stories that persevere, that still reach out from beneath the layers of propriety, earnestness, and bureaucracy. Said Philip Pullman at the conclusion of his Carnegie Medal acceptance speech, "We don't need lists of rights and wrongs, tables of do's and don'ts: we need books, time, and silence. Thou shalt not is soon forgotten, but Once upon a time lasts forever." The telling moment, for me, occurs in the third chapter of The Subtle Knife. Will finds himself in a situation where he must hide his identity. The alias he provides is "Ransom," as indicated above the eponymic name of C.S. Lewis's hero of the "Space Trilogy." What the two authors have created, then, are not parallel universes, but rather alternate worlds. The view from the one to the other is polarized; the symmetries align; light becomes brighter and contrasts turn dark. Because, regardless of what universe you are in, truth persists, in an eternal center, even when approached from opposite directions. Even in the midst of darkness the awful, punishing Harpies recognize truth. To the Gallivespian Tialys they explain why they did not attack Lyra when they had wounded her earlier, under similar circumstances, Because she spoke the truth. Because it was nourishing. Because it was feeding us. Because we couldn't help it. Because it was true. Because we had no idea that there was anything but wickedness. Because it brought us news of the world and the sun and the wind and the rain. Because it was true. What the Harpies read as truth is the story of a life honestly told. Not lives good or bad, but recounted for what they were; the goodness is in the honesty of the telling. (Also the moral of The Great Divorce.) The stories these authors tell, in turn, are true to their characters, and true to themselves. There is ultimately no point in searching for two sides of an argument buried somewhere in the rhetoric. There are three sides here, and many more beyond. And each author layers a face of the pyramid, and braces the glittering crystal against the gathering dark. From shovland at mindspring.com Sun Sep 25 03:58:26 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Sat, 24 Sep 2005 20:58:26 -0700 (GMT-07:00) Subject: [Paleopsych] Amsterdam in the morning Message-ID: <11560230.1127620706433.JavaMail.root@mswamui-valley.atl.sa.earthlink.net> A non-text attachment was scrubbed... Name: AmsterdamMorning.jpg Type: image/pjpeg Size: 151027 bytes Desc: not available URL: From shovland at mindspring.com Wed Sep 28 02:21:41 2005 From: shovland at mindspring.com (shovland at mindspring.com) Date: Tue, 27 Sep 2005 19:21:41 -0700 (GMT-07:00) Subject: [Paleopsych] A new commandment: conserve Message-ID: <21062049.1127874101468.JavaMail.root@mswamui-cedar.atl.sa.earthlink.net> A non-text attachment was scrubbed... Name: Conserve.jpg Type: image/pjpeg Size: 156951 bytes Desc: not available URL: From checker at panix.com Fri Sep 30 20:50:04 2005 From: checker at panix.com (Premise Checker) Date: Fri, 30 Sep 2005 16:50:04 -0400 (EDT) Subject: [Paleopsych] J. 
Religion and Society: Anti-Mormonism and the Question of Religious Authenticity in Antebellum America Message-ID: Anti-Mormonism and the Question of Religious Authenticity in Antebellum America Journal of Religion and Society http://moses.creighton.edu/JRS/2005/2005-9.html [Again, the paragraphs are numbers and are not links to anything.] J. Spencer Fluhman Brigham Young University Introduction [1] Antebellum Americans who rejected Joseph Smith's religious claims were left with few interpretive options when writing about him. Lacking the intellectual tools that allow some modern scholars to "table" truth claims in their historical analyses, non-Mormon folks in the nineteenth century had a relatively simple choice: they needed only to decide whether Smith was a madman or a fraud. Tellingly, most antebellum commentators chose the latter and portrayed him as a self-conscious deceiver. Indeed, the practice of narrating Joseph Smith as a religious imposter was so commonplace that one can scarcely find an early anti-Mormon book whose title did not make the point: Origen Bacheler, Mormonism Exposed, Internally and Externally (1838); William Harris, Mormonism Portrayed; Its Errors and Absurdities Exposed . . . (1841); Eber D. Howe, Mormonism Unvailed [sic] . . . (1834); E. G. Lee, The Mormons; or, Knavery Exposed (1841); Richard Livesay, An Exposure of Mormonism . . . (1840); Adrian Van Brocklin Orr, Mormonism Dissected; or, Knavery "On Two Sticks" Exposed (1841); Tyler Parsons, Mormon Fanaticism Exposed . . . (1841); LaRoy Sunderland, Mormonism Exposed and Refuted (1838); William Swartznell, Mormonism Exposed . . . (1840); Samuel Williams, Mormonism Exposed . . . (1842). In exposing or unveiling Mormonism, though, anti-Mormons did not invent the language of religious imposture but rather brought Smith and the Latter-day Saints into a long-standing conversation about religious authenticity, authority, and the place of religious variety and innovation in Christendom. I intend what follows to serve as a comment on the place of what one scholar has called the "imposture thesis" of religion in America and an explanation of why anti-Mormon polemicists almost unanimously adopted it as a framework for understanding the Mormon prophet - or, put another way, why so much of the first wave of anti-Mormonism took the form of "anti-Smithism" (Manuel: 47-53, 65-70). American Fears of Religious Deception [2] In short, I argue that the historical circumstances attending the antebellum years, including the pervasive sensitivity to illusion and deception, coupled with both Protestant understandings of religious history and the uncertainty facing American churches, made Smith's claims to prophetic authority, additional scripture, and ecclesiastical superiority particularly compelling for some Americans and obviously false for far more. The very conditions, in other words, that gave rise to movements like Smith's also engendered the uncertainties that in turn shaped critiques of Mormonism throughout its early history. Anti-Mormons, moreover, felt no sting at the charge of "religious persecution" because they typically denied the very label of religion to Mormonism. In the end, works like Mohammetanism Unveiled (1829), Mormonism Unvailed [sic] (1834), Noyesism Unveiled (1849), and Spiritualism Unveiled (1866) shared more than just similar titles. 
They each betrayed the admission that religious claims are complicated, that if left to themselves people might just choose amiss, and that in a religiously voluntaristic and disestablished United States, a free market in churches might entail unintended - and for some, woeful - consequences. [3] The antebellum cultural preoccupation with deception is easily detected but not as easily explained. Add complicated and unprecedented religious circumstances to the formidable political, social, and economic upheavals that marked early national culture, though, and the historical admissions of anxiety (or downright befuddlement) become comprehensible (Noll: 195; Sellers). Colonial churches were thrown into varying degrees of disarray by Revolution and met an entirely new environment thereafter, as disestablishment, drawn-out but more or less complete by the mid-1830s, made it impossible for traditionally dominant churches to combine with the institutions of state to fence out religious upstarts (Curry; Lambert: 236-64). Anti-Mormon reactions to Smith and the Book of Mormon no doubt constitute the recognition that the new arrangements provided in some ways too much room for religious expression, a circumstance traditionalists had warned against during the disestablishment debates.[1]<1> The ambivalences about the relationship of Christianity to the republic, the pitfalls of religious freedom, and the management of religious variety that had flared as colonies became states were by no means resolved by the time of Joseph Smith. That prominent religious commentators experienced early national religious liberty and pluralism as a profound, if somewhat subterranean, tension is arguably most evident in their efforts to organize American religion into a comprehensive narrative or to situate Protestant Christianity in the context of other religious traditions. Antebellum Commentators and Religious "Imposture" [4] Notably, many of these writers saw their efforts as vital means of educating a sometimes fractious body of Christians, with the desired end of a more peaceable pluralism. Thomas Branagan intended his Concise View of the Principal Religious Denominations in the United States of America, published in 1811, "as a persuasive to Christian Moderation." Young people, he warned, were too often poisoned by "wrong impressions" about religion, which "produce[d] bigotry and intolerance, with all their destructive concomitants." Branagan was certain correct information would mitigate religious intolerance and accordingly proposed to offer readers the "true sentiments" of various Christian and non-Christian groups. He took care to note that he had undertaken his project "without passing my opinion relative to them individually," thereby avoiding "any slanderous reports to prepossess the reader against any of them" (iii-vi, 176, 181). Branagan's ensuing descriptions, however, seem, to modern eyes at least, to repudiate his envisioned impartiality. Catholicism, for instance, did not receive separate treatment, functioning only as the foil to the Reformers' heroism. He provided just enough space for the Unitarians to note that theirs was not the "side . . . supported by scripture." His descriptions of Jemima Wilkinson's "pretensions" and the Shakers were even less flattering (22, 45, 52, 92). When he detailed what he called "Anti-Christian" groups, Branagan candidly related that he purposed to shew the superiority as well as super-excellence of the Christian system . . . 
when put in competition with the most refined of the Anti-Christian Sects. I have taken the liberty to particularize a number of the most celebrated of these unenlightened sects, that the Christian may prize his privileges, and love the divine system of theology taught by God himself (105-6). Accordingly, his treatment of Deism, atheism, Judaism, and Islam ran from patronizing to visceral. After lambasting Paine and Spinoza in turn, he concluded by tracing Muhammad's rise "from a deceitful hypocrite" to his becoming the "most powerful monarch of his time" (110, 113-14, 116-18, 125, 128-29). [5] Branagan had at least one thing right. American Protestant churches were "in competition," both among themselves and, at least in the abstract, with non-Christian religious traditions. Other writers of religious reference works were forced to admit the same: their task was not simply to describe different faiths objectively, but as ardent Christians and (often more conspicuously) adherents to particular varieties of Christianity, they were duty bound to compare, to weigh, to assign value - to educate in the more dogmatically Protestant sense of the term. Accordingly, later writers felt no pressing need to adjust Branagan's approach. [6] Hannah Adams' Dictionary of All Religions, published in several editions in the United States and England, was undertaken, as readers were informed in the opening pages, with several rules in mind. First, "To avoid giving the least preference of one denomination above another: omitting those passages in the authors cited, where they pass their judgment on the sentiments, of which they given an account: consequently the making use of any such appelations, as Heretics, Schismatics, Enthusiasts, Fanatics, &c. is carefully avoided." Second, "To give a few of the arguments of the principal sects, from their own authors, where they could be obtained." Third, Adams intended to give as "general" an account of each group as possible and, fourth, to provide quotations rather than synopses, "to take the utmost care not to misrepresent the ideas" (1-3). This admirable concern for fair representation did not, however, extend to the "heathen nations," whose "obscene and ridiculous ceremonies" pervaded before the advent of Christ (the state of the Jews, she noted, was "not much better") or the Anabapists, whose "pretensions" had sown "insurrections" and social discord (6, 12, 23, 132). In Adams' account, the "French Prophets" were notable only for their "strange fits" and "pretended" prophecies. Similarly, she wrote that "Hindoos . . . pretended" to have been bequeathed the "vedas" from "Brama." Descriptions of Muhammad's "pretensions" followed; his successes dismissed with the allegation that he "contrived by the permission of poligamy and concubinage to make his creed palatable to the most depraved of mankind." Shakers in Adams' telling were noteworthy in that they "pretend to have the power imparted to them of working miracles" (84-85, 106, 156-57, 269). [7] J. Newton Brown's massive Encyclopedia of Religious Knowledge (1836) followed suit. His entry for "Bigotry" is worth an extended quotation. Bigotry consists, he wrote in being obstinately and perversely attached to our own opinions . . . Bigotry is mostly prevalent with those who are ignorant; who have taken up principles without due examination; and who are naturally of a morose and contracted disposition. It is often manifested more in unimportant sentiments, or the circumstantials of religion, than the essentials of it. 
Simple bigotry is the spirit of persecution without the power; persecution is bigotry armed with power, and carrying its will into act. As it is the effect of ignorance, so it is the nurse of it, because it precludes free inquiry, and is an enemy to truth: it cuts also the very sinews of charity, and destroys moderation and mutual good will. . . How contradictory is it to sound reason, and how inimical to the peaceful religion we profess to maintain as Christians! (1836: 239). Brown's entries for "heresy" and "orthodoxy" complicated matters, however. He granted that "heretic" was often used as a term of reproach, but defined it as one who defied "what is made the standard of orthodoxy." His passive construction obscured the real dilemma: who, in a pluralistic, disestablished America, decided what or who was orthodox? Brown had no such doubts and assumed that he was numbered among the qualified. Orthodoxy, he wrote, consisted in "soundness of doctrine or opinion in matters of religion," particularly, and this is the point, those doctrines "considered as orthodox among us," namely, "the fall of man, regeneration, atonement, repentance, justification by free grace, &c." (1836: 615, 894). Latter-day Saints, despite adhering wholeheartedly to each item (albeit ambiguously in the case of the last item), were clearly unorthodox, to say the least, in Brown's estimation.[2] He pitied Smith's "misguided followers," whom he regarded as "simple and credulous" for believing in a book Smith "pretended to interpret." He deplored the actions of some anti-Mormons in Missouri, but Brown nonetheless felt it his duty to make "the facts [regarding Mormonism] known . . . which show the real foundation of the imposture" (1836: 844). [8] In even the most moderate attempts to catalogue American religious variety, writers still faced the reality that, given their ideological commitments, some of their subjects were simply unpalatable. John Hayward, who followed his Religious Creeds and Statistics . . . (1836) with the more detailed Book of Religions (1843), endeavored to gather information from "the most intelligent and candid among the living defenders" of each denomination (1843: 3). He went so far as to seek out newly-arrived Latter-day Saint preacher Joseph Young (Brigham's brother) in Boston for an authoritative representation of Mormon belief. Hayward described Young as "a very civil man" and included Young's written outline of Mormon belief in full. His interaction with Young hardly changed Hayward's mind, however (his article on Mormonism was culled from standard anti-Mormon sources), as his summation of Young's statement revealed. "Elder Young," he wrote, "seems to think that revelations from heaven, and miracles wrought, are as necessary now, and as important to the salvation of the present generation, as they were to any generation in any preceding age or period. This appears to be the sum and substance of the Mormon scheme" (1836: 139-42). To be fair, it should be noted that Hayward was quite candid about his endeavor of religious description. He had described the various "systems [to] settle the minds" of those without "definite opinions" about religion, and to "lead us all . . . by contrasting the sacred truths and sublime beauties of Christianity with . . . the absurd notions" of the heathen, skeptics, and, as it turned out, those who he felt only pretended to profess Christianity (1843: 3). [9] Several important insights emerge from these reference works.
First, antebellum Americans agreed that the propagation of true religion was critical for maintaining the republic's strength. They also agreed, at least in principle, to the denominational theory that versions of the truth might reside (and peacefully coexist) in various Protestant churches. Second, not all movements claiming to be religious were accepted as valid. Disquieted by fears of religious deception, many antebellum Protestants found the old grounds for distinguishing heterodoxy or fraud from orthodoxy ineffective. This uncertainty owed much to the period's sectarian proliferation and the perception that the post-establishment religious scene was rootless and hyper-competitive. Third, as a result, much of the period's polemical literature took the form of exposing religious impostures. This conceptualization was almost always applied to innovators or leaders of various religious groups; their followers, on the other hand, demanded other rhetorical tools. (Such a framework for understanding "false" religions in the past, incidentally, provided unintended but perhaps not unwanted consequences when attached to contemporary movements - rendering Mormons, for instance, as pseudo-Christian or non-Christian, more by a process of historical association than theological taxonomy.) Fourth, the seeming contradiction between the authors' stated aims of objectivity or toleration and their treatments of non-Christian and unpopular Christian groups is made comprehensible if viewed in conjunction with a particular set of assumptions and a certain corresponding logic, namely, that true religion was vital to the health of the young republic and should be tolerated and encouraged in its variety, but what appeared to be religion in other cultures - or unpopular movements at home - was not real religion at all and was thus worthless or even harmful. The question of tolerating these groups was correspondingly muddled. [10] Seen in this light, "imposture" was in fact an indispensable rhetorical device for antebellum Protestants. It ostensibly resolved the potentially pesky perplexity lurking in the term "religion," for it granted that untrue religion could imitate real religion by evoking deity, redemption, spiritual power, creation, salvation, etc. Untrue religion, in other words, could mimic the "form of godliness" even if it lacked the power. These assumptions about real religion and the world's religions are clear, for instance, in Hannah Adams' assertion that religious history began with the advent of Christ, her acknowledgement of pre-Christian religious traditions notwithstanding (7). The concept of imposture, though, was not without its problems. For one thing, the theory had a complicated past. As historian Leigh Eric Schmidt has shown, the origins of the framework are complex: the "imposture thesis" had been wielded with comparable utility by Protestant polemicists against the church in Rome and by early Enlightenment skeptics against religion in general. The use of imposture as an explanatory strategy during the century or so preceding the advent of Mormonism was so tangled, Schmidt concludes, that "it is difficult to mark where the Protestants' polemic ends and the rationalist's begins." The antebellum Protestants who wielded the concept against Mormonism, however, were either unconcerned or unaware of such complications, never hinting that believers of almost every stripe had been exposed to the "imposture" thesis at one time or another (85-86).
[11] Mormons and anti-Mormons, then, found ready-made conceptual tools when they plunged headlong into this long-standing cultural conversation about religious legitimacy. Furthermore, while it is certainly the case that early Mormonism provided fodder for the charge of imposture, it remains true that anti-Mormons were considerably less concerned with LDS theology than with the figure of the prophet, at least initially - the message of either the prophet or his book was (almost) beside the point (Givens: 64). Early opponents were thus more concerned with Mormonism as form than as content; the combination of the period's multiplicity of spiritual voices and American attachments to religious freedom (at least in terms of one's religious "sentiments") presumably made countering any particular tradition's theology problematic. Latter-day Saint theology became important for anti-Mormons, but only as further evidence of Smith's perfidy and long after they had concluded that he was a mere, if somewhat talented, charlatan. Hiram Mattison's A Scriptural Defence of the Doctrine of the Trinity . . . (1846), in which he upbraids Mormons and other purveyors of what he considered modern "Arianism" for their heterodoxy, thus reads like a very different kind of attack because it was. Mattison's work and others like it, in taking up Mormonism as a theology (albeit a fatally flawed one), signaled a certain maturity in both Mormon and anti-Mormon thinking. The earliest critiques of Mormonism, though, could not dignify it with the label of theology because none were prepared to credit Joseph Smith with anything but imposture, least of all theology. Conclusion [12] In sum, antebellum narratives of false religion turned to everything but religion - and history's false prophets necessarily became despots, charlatans, and crooks - because nineteenth-century Americans had invented no other frameworks for understanding a figure whose religious claims they utterly rejected. In actuality, anti-Mormons could defame Smith, endeavor to thwart his movement, and even seek his demise, and at the same time claim quite sincerely that they had no argument with Mormon religion whatever. Thomas Ford, Governor of Illinois during the Saints' controversial stay at Nauvoo, could thus maintain that he held no personal prejudice against Mormonism while at the same time lamenting that he felt "degraded by the reflection, that the humble governor of an obscure State, who would otherwise be forgotten in a few years, stands a fair chance . . . of being dragged down to posterity with an immortal name, hitched on to the memory of a miserable impostor" (360). [13] In his theorizing about deceivers and society, anti-Mormon Origen Bacheler articulated the often-unspoken social logic that underlay decades of anti-Mormon polemics. "I respect the rights of conscience;" he wrote, "I am opposed to persecution for opinion's sake." But, he cautioned, it would be a grave mistake to extend the same "forbearance and compassion [due the] dupes of the Mormon imposture" to the "lying knaves who dupe them." Joseph Smith, he argued, was "entirely out of the pale of charity" and could be "viewed in no other light than that of [a] monsterous public" nuisance. 
Bacheler's contention that such a nuisance "ought forthwith to be abated" - he left to readers to figure out how - rested on the assumption that among the "social obligations" that fell to every "member of the community" was the responsibility that "he shall not knowingly deceive and impose upon that community." Not surprisingly, Bacheler charged that Smith had done precisely that and, as a result, all of the trouble between Mormons and their neighbors could rightly be blamed on him and other leading Mormons: "By their deception and lies, they swindle [their followers] out of their property, disturb social order and the public peace, excite a spirit of ferocity and murder" (48). Such logic not only led Smith into an 1831 South Bainbridge, New York, court on charges of being a "disorderly person" (i.e., "setting the country in an uproar by preaching the Book of Mormon") but ultimately to an early end in Illinois in 1844 (Bushman: 162; Firmage and Mangrum: 50-51). In the end, the antebellum histories of the "imposture thesis" and Joseph Smith paradoxically reveal on the one hand the promises of American religious liberty and, on the other, our conflicted and still-forming commitment to religious pluralism. Bibliography Adams, Hannah 1817 A Dictionary of All Religions and Religious Denominations, Jewish, Heathen, Mahometan, and Christian, Ancient and Modern. Fourth Edition. New York: James Eastburn and Company. Bacheler, Origen 1838 Mormonism Exposed, Internally and Externally. New York: Published at 162 Nassau St. Baird, Robert 1844 Religion in America. Or An Account of the Origin, Progress, Relation to the State, and Present Condition of the Evangelical in the United States. With Notices of the Unevangelical Denominations. New York: Harper and Brothers. Benedict, David 1824 History of All Religions, As Divided into Paganism, Mahometism, Judaism, and Christianity. Providence: John Miller. Branagan, Thomas 1811 A Concise View of the Principal Religious Denominations in the United States of America, Comprehending a General Account of Their Doctrines, Ceremonies, and Modes of Worship. Philadelphia: John Cline. Brown, J. Newton 1835 Fessenden & Co.'s Encyclopedia of Religious Knowledge: or, Dictionary of the Bible, Theology, Religious Biography, All religions, Ecclesiastical History, and Missions . . . Brattleboro, VT: Fessenden and Co. 1836 The Encyclopedia of Religious Knowledge; or, Dictionary of the Bible, Theology, Religious Biography, All Religions, Ecclesiastical History, and Missions; Containing Definitions of All Religious Terms . . . Brattleboro, VT: Fessenden & Co. Buckley, Thomas E. 1977 Church and State in Revolutionary Virginia 1776-1787. Charlottesville: University of Virginia Press. Bushman, Richard L. 1984 Joseph Smith and the Beginnings of Mormonism. Urbana: Illinois University Press. Curry, Thomas J. 1986 The First Freedoms: Church and State in America to the Passage of the First Amendment. New York: Oxford University Press. Eastman, Hubbard 1849 Noyesism Unveiled: A History of the Self-Styled Perfectionists; with a Summary View of Their Leading Doctrines. Brattleboro, VT: By the Author. Evans, John 1844 History of All Christian Sects and Denominations; Their Origin, Peculiar Tenets, and Present Condition. Second Edition. New York: James Mowatt. Firmage, Edwin B., and Richard C. Mangrum 1988 Zion in the Courts: A Legal History of the Church of Jesus Christ of Latter-day Saints, 1830-1900. Urbana: Illinois University Press. 
Ford, Thomas 1854 A History of Illinois: From Its Commencement as a State in 1818 to 1847. Chicago: S. C. Griggs and Co. Forster, Charles 1829 Mohammetanism Unveiled. 2 vols. London: A. & R. Spottiswoode. Givens, Terryl L. 2002 By the Hand of Mormon: The American Scripture that Launched a New World Religion. New York: Oxford University Press. Grant, Miles 1866 Spiritualism unveiled: and shown to be the work of demons; an examination of its origin, morals, doctrines and politics. Boston: The "Crisis" office. Harris, William 1841 Mormonism Portrayed; Its Errors and Absurdities Exposed, and the Spirit and Designs of Its Authors made Manifest. Warsaw, IL: Sharp & Gamble. Hayward, John 1836 The Religious Creeds and Statistics of every Christian Denomination in the United States and British Provinces; with some Account of the religious Sentiments of the Jews, American Indians, Deists, Mahometans, &c., alphabetically arranged. Boston: John Hayward. 1843 The Book of Religions: comprising the views, creeds, sentiments, or opinions, of all the principal religious sects in the world, particularly of all Christian denominations in Europe and America, to which are added church and missionary statistics, together with biographical sketches. Concord, NH: I.S. Boyd and E.W. Buswell. Howe, Eber D. 1834 Mormonism Unvailed [sic]: or, A Faithful Account of That Singular Imposition and Delusion, from Its Rise to the Present Time. With Sketches of the Characters and its Propagators, and a full detail of the manner in which the famous Golden Bible was brought before the world. To which are added, inquiries into the probability that the historical part of the said bible was written by one Solomon Spaulding, more than twenty years ago, and by him intended to have been published as a romance. Painesville, OH: Howe, E. D. Hutchison, William R. 2003 Religious Pluralism in America: The Contentious History of a Founding Ideal. New Haven: Yale University Press. Lambert, Frank 2003 The Founding Fathers and the Place of Religion in America. Princeton: Princeton University Press. Lee, E. G. 1841 The Mormons; or, Knavery Exposed. Philadelphia: E. G. Lee. Livesay, Richard 1840 An Exposure of Mormonism, Being a Statement of Facts Relating to the Self-Styled "Latter Day Saints," and the Origin of the Book of Mormon, by Richard Livesay, of Winchendon, Massachusetts, America, Minister of the Methodist Episcopal Church. Manchester, England: Wm. Shackleton and Son, Printers, Ducie Place. Manuel, Frank E. 1959 The Eighteenth Century Confronts the Gods. Cambridge: Harvard University Press. Mattison, Hiram 1846 A Scriptural Defence of the Doctrine of the Trinity: or a Check to Modern Arianism as Taught by Campbellites, Hicksites, New Lights, Universalists and Mormons, and Especially by a Sect calling themselves "Christians." New York: L. Colby. Noll, Mark A. 2002 America's God: From Jonathan Edwards to Abraham Lincoln. New York: Oxford University Press. Orr, Adrian Van Brocklin 1841 Mormonism Dissected; or, Knavery "On Two Sticks" Exposed. Bethania, PA: Reuben Chambers. Parsons, Tyler 1841 Mormon Fanaticism Exposed. A Compendium of the Book of Mormon, or Joseph Smith's Golden Bible. Boston: Printed for the Author. Rupp, I. Daniel 1844 He Pasa Ekklesia: An Original History of the Religious Denominations at Present Existing in the United States. Containing Authentic Accounts of Their Rise, Progress, Statistics and Doctrines. 
Written Expressly for the Work by Eminent Theological Professors, Ministers, and Lay-Members, of the Respective Denominations. Philadelphia: J. Y. Humphreys. Schmidt, Leigh Eric 2000 Hearing Things: Religion, Illusion, and the American Enlightenment. Cambridge: Harvard University Press. Sellers, Charles 1991 The Market Revolution: Jacksonian America, 1815-1846. New York: Oxford University Press. Sunderland, LaRoy 1838 Mormonism Exposed and Refuted. New York: Piercy & Reed. 1842 Mormonism Exposed: in which is shown the Monstrous Imposture, the Blasphemy, and the Wicked Tendency, of the Enormous Delusion, advocated by a Professedly Religious Sect, calling themselves "Latter Day Saints." New York: Office of the N. Y. Watchman. Swartznell, William 1840 Mormonism Exposed, Being a Journal of Residence in Missouri from the 28th of May to the 20th of August, 1838, together with an appendix, containing the revelations concerning the Golden Bible, with numerous extracts from the `Book of Covenants,' &C., &C. Pittsburg: O. Pekin. Williams, Samuel 1842 [?] Mormonism Exposed. Pittsburgh [?]: n.p. From checker at panix.com Fri Sep 30 20:50:33 2005 From: checker at panix.com (Premise Checker) Date: Fri, 30 Sep 2005 16:50:33 -0400 (EDT) Subject: [Paleopsych] Merlyn: His Dark Materials, Harry Potter, and Lord of the Rings Message-ID: His Dark Materials, Harry Potter, and Lord of the Rings http://www.bridgetothestars.net/index.php?d=commentaries&p=HDMHPLOTR By ]Merlyn, Webmaster mailto:merlyn at bridgetothestars.net They're compared constantly. Three great tales, two contemporary and one already considered a classic. But are these constant comparisons justified? Just what similarities do His Dark Materials, Harry Potter, and Lord of the Rings share? What is their purpose, who is their target, and why do they have the appeal that they do? We'll try to fit these three tales into the basic stages of the myth as listed by Joseph Campbell in 'The Hero With a Thousand Faces' and cover the various aspects of the three works from characters, good and evil, philosophy, religion, and conclusion. First off, it must be noted that the Harry Potter story is still being completed: only five of the seven books have been written to date, and I have not yet had the chance to read the fifth one (no, thank you, I am NOT spending that amount of money on a book that'll take me two days to read I can wait until it's in at the library). Still, there is enough of the story present to cover the majority of the aspects mentioned above, if not as completely as for the other two works. Joseph Campbell wrote an excellent book in 1948 about the similarities of world myths and religions. He lists 17 stages and aspects that seem to be basic to every mythical story, and since the three works we're looking at here are mythical in nature, they should contain these stages. The stages are listed in three parts of a cycle: Departure, Initiation, and Return. Departure takes the hero/heroine from their home and places them in situations that lead up to Initiation, where the hero/heroine must prove themselves to be worthy of the enlightenment that they seek. Once they have done so, they must Return to their homeland so that what they have learned or gained can be shared with the rest of the world. The first stage is The Call to Adventure. So far the three works are pretty similar: Lyra is content to live at Jordan College until Mrs. Coulter arrives and offers to take her with her on an adventure to the North. 
The Master gives Lyra the alethiometer to help her on her way, and she sets out. Harry receives the strange letters in the mail from Hogwarts School of Witchcraft and Wizardry, and Hagrid comes to collect him from the Dursleys. Harry's situation at home is slightly different from Lyra's. Lyra was happy to be the little savage that she is and live in Jordan, while Harry dreams of somehow getting free of the horribly cruel Dursleys, the family of his mother, who have taken him in (grudgingly) after his own parents were killed when he was an infant. Frodo's adventure begins when his uncle Bilbo leaves him the Ring, and the wizard Gandalf instructs him that it must be taken to Rivendell (and later to the Cracks of Doom in Mordor) because it is a ring of extraordinary power, and greatly coveted by the Dark Lord, Sauron. Frodo really has no other desire to leave the beautiful Shire: if he hadn't inherited the Ring, he would have been content to stay at home like any decent hobbit and live out his days gardening and smoking pipeweed. But he has events thrust upon him that he cannot control, and whether he wants to or not, he is forced to set out on a journey that will span thousands of miles and many months, and which will affect not only the rest of his life, but the future of the world of Middle-earth. Three adventurers, each with a few friends and a goal in mind, set out from the peacefulness of the home they know to face dangers in faraway lands. Of course, there is also the Refusal of the Call. In His Dark Materials, Lyra feels reluctant to leave the only home she has ever known, adventurous as she may be. Frodo moves from Bag End to a house in a different part of the Shire, biding his time and waiting for more knowledge before setting out. But where Lyra and Frodo are reluctant to begin, Harry couldn't dream of anything better than to set out. The Dursleys do their very best to prevent Harry from reading one of the letters from Hogwarts, and go so far as to move to a desolate island in the middle of a stormy sea, but once Hagrid arrives, Harry doesn't hesitate to go with him. Nothing seems to bind him to the cruel, but only, family he has lived with all of his life. Once they have set out, the heroes and heroine receive supernatural aid early on- this comes in the form of the Alethiometer (which, we learn later, is driven by none other than angels) and the rescue by the Gyptians for Lyra, Tom Bombadil for Frodo & company in the Old Forest, and the various wizards that Harry encounters, probably Hagrid most notably at this point, since he is the one who gives the most aid to Harry this early on in the story. Rescued from the first peril, the characters are now ready to Cross the First Threshold, as Campbell calls it. Lyra joins the Gyptians on their journey to the North, Harry enters Diagon Alley and then Hogwarts itself, and Frodo crosses the Old Forest and the town of Bree. They have all left the (relative) safety of home behind, and are now in the new world that they know little to nothing of, and will soon have to face many perils and dangers along the way. The last stage of the Departure is the Belly of the Whale, where the hero/heroine dies to their old ways and is reborn (metaphorically) into the second part of the Cycle, the Initiation. For Lyra this would be her capture and imprisonment at Bolvangar, where she almost has her soul cut away by the dreaded Silver Guillotine.
Once she passes out of that, unharmed (though terrified), she is no longer the brat of Jordan, but is the heroine who knows that she must help the other children out of this horrid place and lead them to safety. That was her goal all along, but only once she had passed through the terrors of Bolvangar itself could she really gather the courage to do so. Harry doesn't have quite so literal a Belly of the Whale experience - for him it's more adjusting to life at Hogwarts, and he dies to the "muggle" aspect of himself and is reborn as a wizard in training. Frodo's wounding by the Nazgul lord and subsequent flight to Rivendell to be healed fits the category, as he very nearly dies when he feels the touch of the evil blade, and it is only the skills of Elrond that save him from becoming a wraith. He has left behind the Shire, and now prepares to set out from Rivendell to Mount Doom, no longer as simply the protector of the ring until it can be placed in safer and wiser hands, but the Ring Bearer himself, with the task of destroying the Ring in the land of Mordor. Now reborn, the heroes and heroine of these three stories begin the process of initiation, facing the Road of Trials. Each goes through varying challenges, from deceiving Iofur Raknison, to facing Voldemort at the end of the Sorcerer's Stone, to the journey through the Mines of Moria. Harry Potter begins to present a problem here, since the events of the first three books, while different, seem very similar in their rhythm, and really don't get Harry past the Road of Trials. At the end of the first book he returns home, but his story is hardly over. An abridged version of the rest of the cycle is passed through by Harry in each book, culminating with his (yearly) meeting and conquering of the evil Voldemort. Since the last two volumes have yet to be written, perhaps a general cycle will emerge, with much of the middle books covering the Road of Trials aspect, and the ending of the last book covering the Return nearly all on its own. In a sense this makes Harry out to be more complex a layout than the other two: he doesn't go through the 17 stages once, but 7 times. But does this work? We'll look at that later on. When the Road of Trials has been passed (and this differs in length for each work, probably including meeting Will for His Dark Materials, and most of the Fellowship of the Ring and the Two Towers for Lord of the Rings), there is a "Meeting with the Goddess/Divine". Rather than assume the literal for His Dark Materials and say that this is the meeting with the Authority, the Goddess is probably symbolized by Serafina Pekkala for Lyra. She is far more powerful, beautiful, and older than a normal human being, and her wisdom and aid help Lyra and Will multiple times through the course of their journey. And, of course, there is the temptress whom the heroes and heroine must encounter. For Lyra personally it is Mary Malone, when she tells Lyra and Will the story of how she fell in love, which puts the idea into their heads for a new way of looking at each other. There is also Mrs. Coulter as a temptress throughout much of the story, and for more than one character. For Harry this is a little less clear. Is the temptation his love for Cho Chang? (Remember, I've only read the first four books, so I don't know what happens with that) Perhaps his longing for his parents as seen in the Mirror of Erised, or the similarity he bears to Voldemort, and the delicate line he walks that he could so easily become like Voldemort? 
So far there appears to be no specific temptress for Harry- it is more of the temptation of Good and Evil and human desire overall that tempts him. Frodo's temptation is similar- there really isn't a set tempter or temptress- just himself, when he claims the ring at the crack of Mt. Doom. He, like Lyra, falls; but whereas Lyra's fall is to her benefit, and the benefit of the many worlds, Frodo is only saved by the blind fate of Gollum. Gollum also desires the Ring, and, biting it off Frodo's hand, slips and falls into the fire, taking the Ring with him. This brings up the view on religion that the different authors take- Pullman's view is that the original Fall by Eve was a good thing, and led towards the wisdom of mankind instead of blissful ignorance. Tolkien, a devout Catholic, viewed the Fall as a bad thing, and the state of life in Eden as more desirable than that of life after the Fall. Perhaps he also saw temptation and Fall as inevitable- Frodo, the most innocent character that could be found, was given the Ring in the hopes that he would not be overcome by it. But even he fell to its power. Harry, at the point to which I've read, has yet to Fall, or even really be tempted. He has faced Voldemort a few times now, and it is certainly within his power to do evil, but he has yet to really face something that will truly be a struggle for him. After Temptation there is the Atonement (or at-one-ment as Campbell says) with the Father, when the Hero/Heroine is accepted by their father figure, or the person that they have set out to save or protect. This probably comes in Asriel and Coulter's decision to sacrifice their own lives in order to make sure that Metatron can no longer pose a threat to their daughter. They both love her now, and realize that the Republic has only come into being to protect her. Harry's Atonement might come towards Dumbledore (although he seems to have accepted Harry from the start), or somehow with the ghost of his own dead parents. But they too love him already- maybe the Atonement will come with Snape, or some other powerful figure who has disliked Harry, but now finds themselves helped by him? We've yet to see this part of Rowling's story. Frodo's Atonement is also a little unclear, though it is likely that it would be with Gandalf- Frodo's friend, yes, but perhaps a skeptic as to whether he would be a success? Frodo did succeed- though with the unwilling help of Gollum- so perhaps that was what he needed to at last be accepted by Gandalf? Once the task for which they have set out has been completed successfully, the heroes and heroine reach a state of Apotheosis, or Enlightenment. For Lyra it is the realization that she cannot stay with Will, and that the only window that can remain open is the one leading from the World of the Dead. Some have said that this is her real temptation, since it is something that she does not wish to accept or carry out, and which she does have the option to avoid (close the window for the dead and leave one open for her and Will, save the knife and make new windows at the cost of more Specters, etc.), but which she decides to do anyway since it is better for the many worlds even if it is painful for her. Harry's Enlightenment will probably come after he finally destroys (or is faced with the chance to destroy) Voldemort, probably the conclusion of the seventh novel.
Frodo's is the realization that the Shire cannot be his home and that he will be forgotten despite what he did; to avoid the pain of living in the homeworld that shuns him, he sets out for the Grey Havens to go with the Elves to the Undying Lands across the sea. (This is jumping around a bit with the order of events and the order in which the stages go, but the Enlightenment of the characters varies a bit). The Ultimate Boon stage signifies when the hero can no longer be harmed by evil forces- they are oblivious to the dangers around them for they have reached Enlightenment. Lyra and Will go unharmed by Father Gomez; Harry, try as he will, cannot be harmed by Voldemort (yet); and Frodo successfully saves the Shire from the minions of Saruman. The third and final part of the cycle is that of the Return. After the Hero/Heroine has reached the object of their quest, whether it is a physical act like the destruction of the Ring, or a mental state like Lyra's when she chooses to be apart from Will, the character whom we have followed so far must make the journey back to the world of the everyday, the world where the story began. At first, they Refuse, or do not wish to return. Lyra wants to spend the rest of her days with Will. Harry is forced at the end of each school year to return to his horrible life with the Dursleys, and leave behind the world of Hogwarts. Frodo, now finished with his task, does want to return to the Shire, but he rightly fears that it will not be the same, Eden-like paradise that he has left behind. Each one of them has experienced so much that they don't quite know how to behave in the way that they did before they left- they have been changed by their adventures in more ways than one. All three of them could have been seen as children, innocent, before they left. Now, Lyra has fallen in love and been torn apart by having to be separated, Harry has learned of a magical world that lies in hiding from the normal world and he has become a part of it, and Frodo has seen cruelty, corruption, and carried with him the very essence of the Dark Lord. They will not be the same when they return, so they do not wish to do so. Yet they must, willing or not, and they do. They are transported back to the world they knew by the Magic Flight: the gyptian ship that (somehow?) has sailed to the Mulefa world and come to take Lyra and Will home, the return journey of the Hogwarts Express, and the uneventful return to the Shire (although once there, dangers arise once again). There is a Rescue From Without, in the form of Balthamos, Dumbledore, and Gandalf, when the hero/heroine is in more danger than they can know or do anything about. (Again, the order of events differs slightly from the basic order of the stages). They return across the Threshold to the world they once knew, through the window, platform 9¾, and Rivendell. When they have returned they, as said before, have been changed so that they are now the Master of the Two Worlds, with the wisdom and experience from their adventures now at their disposal for use in Jordan College/St. Sophia's Boarding School, Privet Drive, and the Shire. What they do is now up to them- it is the Freedom to Live as they please, whether building the Republic in the here and the now, pestering Dudley with the threat of magical force, or freeing the Shire from the evils of Saruman.
So Lyra, Harry, and Frodo have passed through their adventure, changed, grown up, and empowered by their experiences, with new friends and different outlooks from when they began. They have lost their innocence, and become Enlightened, each in their own separate way. We've seen now that the three stories embody many of the same characteristics, and although Harry's story is incomplete, it appears that the cycle is recognized in each of the separate books, and will be evident as a whole once the final volume is published. They each follow this basic format for the legend, but each does so in a very different way, with the views and styles of the author greatly affecting them. Lyra sets out on her journey and begins to see her purpose in the first book, joins Will and gains the Knife in the second book, and travels through the land of the dead/Falls in the third book. The story is divided into three separate sections, yet each flows into the others. The third book divides into multiple sub-plots, some of which don't really seem to add much to the trilogy (Coulter in Geneva?), and are there more to space out the suspense of Lyra and Will's story. It works, and the flow of the main plot feels right. It also allows the adventure of Mary Malone to be brought into play, and the culture and experiences of the Mulefa to show what kind of world the Republic of Heaven is striving to create. The last few chapters of The Amber Spyglass are strong, well written, and very, very painful. Some have said that His Dark Materials could have benefited if Pullman had included some characters for comic relief instead of being so somber and dark all the time. I disagree- the strength of the story was in its pain, and that was where Pullman focused. It isn't a comedy, and isn't as light as Potter, but it does have its humorous bits- take this quote from The Amber Spyglass: "I'll explain if you like, but you don't seem very interested." "Oh, I find whatever you do a source of perpetual fascination. But never mind me. What are you going to say to these people who are coming?" Balthamos's sarcasm fits well with the story, and it's enough to keep the mood at that delicate balance between light and dark. I would have been disappointed if Pullman had thrown in a character just to crack jokes. Harry goes through the three aspects of the Cycle each year, in each book, climaxing each time with his meeting with Voldemort. He learns a lot along the way, and puts it into use at the end. But does this yearly cycle work? It gives a nice rhythm to get used to, but it makes the Cycle seem rushed at times, and I personally think that because Voldemort is encountered all the time, a lot of the fear of his character will begin to disappear. If he's always met and always defeated (or prevented from hurting Harry), where's the threat? The division of the books into the years spent at Hogwarts makes sense, and perhaps it couldn't have been done better any other way, yet it just seems to get boring after a time. If Voldemort were kept as a distant threat that got closer and closer each year, with his minions facing Harry instead of himself, all leading up to a final meeting in the seventh book, perhaps the story would have been strengthened. Humor here is more evident, with the constant effects of spells gone wrong, talking paintings, half-decapitated ghosts and other such fanciful creatures that give a lighter mood. Aside from a few bits in The Goblet of Fire (I haven't read Order of the Phoenix, so I can't judge it) there really aren't any DARK bits.
It stops being funny, yes, but it never goes to some of the depths that His Dark Materials does. But it works for the story and what Rowling is trying to do. She wanted to present a creative world that was absurd at times, and to make you laugh, while still getting a few themes in here and there, and she does so successfully. Frodo's story, like Lyra's, is divided up into three books, the last two of which are split in half between the story of Frodo and Sam and that of the rest of the fellowship, whether it be Aragorn and Legolas and Gimli at Helm's Deep, or Merry and Pippin with the Ents. The movies were done differently, and, I think, for the better, by jumping from one plot to another instead of doing half and half. That way the climaxes of each thread come at the same time, rather than one halfway through and the other at the end. Tolkien's writing style can get very windy after a while, and moving between stories helps to move the plot along at a better speed. Other aspects of the book-to-movie adaptation weren't so brilliant (notably the change in Faramir), but that's not what this commentary is about. Humor in Lord of the Rings hangs mostly on Merry and Pippin and their errors and mishaps along the way; some of Gimli and Legolas's relationship is funny as well, and Frodo and Sam each have their wry bits. I think Tolkien was also successful in finding the right tone of humor for the story that he was telling. The way that Pullman, Rowling and Tolkien take on characters is also quite different, and each has their own merits and flaws. His Dark Materials focuses on two major characters, Lyra and Will, whose lives and ways are quite different. One is a fluent liar, the other good at making himself invisible/uninteresting to those around him. Through chance (and a lot of it), they are thrown together on a quest that takes them to the land of the dead and out of it, and charges them with the responsibility of saving the worlds. They are 11 and 12 when the stories begin (and somewhere between five and nine months pass throughout the course of the story), and by the end they have fallen in love. All the while they meet and befriend the armoured bears (who are brave and virtuous, yet must forge their own souls, can be quite fierce, and whose customs are oftentimes confusing and different), the witches (who are beautiful and wise, yet not all-powerful: their spells fail occasionally, and they are divided between the two opposing forces), the gyptians (who are warm and friendly, yet are also quite dangerous) and finally Lyra's own parents, Lord Asriel and Marisa Coulter. These two embody Pullman's point that it is not people who are good or evil, it is the acts that they perform; both are, in the end, brave and strive for a great vision, yet on the way commit many acts of cruelty and torture or kill innocent children. Are they good or are they evil? In Harry Potter, Harry himself is probably the strongest character- he is driven by his own morals and values, yet does make mistakes (some humorous, some not) that result in pain for others. His friends, Ron and Hermione, offer guidance and help, once again sometimes humorously and sometimes not. They are there for him, and though they each make mistakes, their friendship is strong enough that they can forgive each other. The teachers of Hogwarts are also there to guide Harry, but some seem more bent on getting in his way than helping him.
Others, like the headmaster, Dumbledore, seem to come in as all-knowing and benevolent, always there to catch Harry when he falls. (At least up through the fourth book.) Evil is clearly defined: Voldemort, Draco Malfoy, etc. They have no good in their hearts, as far as Rowling shows them; just greed, corruption, and malice. "There is no good or evil," Voldemort tells Harry, "just power, and those too weak to seize it" (paraphrased). Frodo and his friends and companions also have a strong friendship: if it weren't for Sam, I strongly doubt that Frodo would have had the strength to ever make it to the Cracks of Doom. Some are well-intentioned, yet overcome by the Ring, like Boromir; others are strong enough to resist its temptation, like Aragorn. Gandalf is the all-knowing and benevolent figure in Lord of the Rings, although he is somewhat darker than Dumbledore. Evil has a more grey range to it, touching on some but not truly overcoming them, while those like Sauron are entirely evil, filled with nothing but cruelty and the lust for Power. I think he would have agreed with Voldemort's statement above. Yet Voldemort was once a regular wizard, and very similar to Potter, though he was placed in the house of Slytherin instead of Gryffindor. Sauron was once a Maia, or minor angel, like Gandalf. Both fell to the corruption of Power, and (to this point) neither shows any doubt or regret about their choices. In the discussion about good and evil, some who discuss these books overlook that His Dark Materials also has an all-evil being: the Authority, greedy for Power since the very dawn of time, claiming to those who came after him that he was their maker. By the time we meet him in The Amber Spyglass, he has delegated his power to yet another all-evil being, Metatron, while he himself becomes a bumbling old fool contained in a glass cage until Lyra and Will release him, allowing him to drift apart. He never fell from a state of grace like Sauron, and he was never like the young Voldemort: surrounded by power and driven to conquer it himself. He was always this way, always corrupt, and weakening in power rather than gaining it like the other two. While all three stories have characters that are all evil, do any of them have any characters that are all good? Frodo falls to the Ring in the end, Harry is a disturbed teenager who makes a lot of mistakes and is responsible for a lot of pain and suffering, and Lyra certainly does harm along the way. Dumbledore and Gandalf also make mistakes, though their wisdom seems to be more powerful in the end. To Lyra, the all-knowing character would probably be Iorek or Lee, but both of them also make mistakes and kill. The Mulefan race has an excellent look to it, and its members are wise and good, yet that comes from the fact that they have glorified the Fall from grace by riding the wheels. This is a good thing for them (Pullman's point that the Fall brought Wisdom to humanity), but they are no longer in a state of pure grace. Each book also has rather stereotypical/cardboard characters: Father Gomez and the other priests in His Dark Materials, the Dursleys and Draco Malfoy in Harry Potter, and the orcs in Lord of the Rings.
The priests in His Dark Materials are the props that Pullman attacks without giving them a chance to defend themselves (one of the great flaws of His Dark Materials is that there aren't any priests/clerics who are good); the Dursleys and Malfoy annoy Potter and restrict him, but they don't really present any serious danger to him: they're just there to annoy him. The orcs, similarly, are just underlings of Sauron without any real thought or will- they're bred to do the evil works of Sauron, and aren't meant to be developed as characters. Just as the characters of each book are portrayed in a different way, so too are the views on Good and Evil. His Dark Materials' portrayal is summed up by Mary: "I stopped believing that there was a power of good and a power of evil outside us. And I came to believe that good and evil are names for what people do, not for what they are. All we can say is that this is a good deed, because it helps someone, or that's an evil one, because it hurts them. People are too complicated to have simple labels." This is probably the strongest aspect of His Dark Materials, in its attempt to understand just what is good and just what is evil. In the end, as Mary says, it comes down to whether an act is good or evil, not whether a person is good or evil. Potter and Lord of the Rings are different, going for the more traditional fantasy portrayal of a Dark Lord as the opposing force, and the Good Guy (Harry and Frodo) and their companions up against him. Where His Dark Materials tries to understand the nature of good and evil, the other two simply use them as backdrops, telling us that this person is good and that person is evil. Of course, all three works have characters that are in the grey, who struggle with themselves and temptation- it's just that there is more of this in His Dark Materials than in the other two, and that's where the focus lies. Very closely tied to Good and Evil is the portrayal of religion. This is one of the weaker points in His Dark Materials, due to the cardboard characters like Gomez, and how everything priests do is condemned as evil (all the constant descriptions of dirty fingernails and the like), and how the being who has portrayed himself as God is targeted, as if Pullman thinks that the Almighty is responsible for humanity's mistakes and misinterpretations. Much of what religion has done has been bad, and has brought suffering to those who believe differently, and for that His Dark Materials has a good point- if you wish to see the Authority as embodying the evil acts done in the name of religion, then it isn't so bad. But James Bow says in his essays about His Dark Materials (probably the best look at the trilogy from a Christian viewpoint) that he wishes Christians in the worlds of the books had been given a chance to stand up to the tyrant of the Authority and join Lord Asriel- the way that the Authority and his forces behave is nothing like what Christians believe, and they teach completely different messages. If anything, the rebel angels of His Dark Materials resemble Jesus and the way that Christianity is supposed to be enacted. Bow thinks that by generalizing the priests and believers in His Dark Materials, Pullman excluded those Christians who would agree wholeheartedly with the concepts of the Republic of Heaven. Harry Potter's stance on religion is a bit harder to find. Rowling is Christian, but no references to God, angels, or religion in any sense come up in the books to this point.
It would seem that Harry and company are all atheist, or at least agnostic. They believe in Good and Evil, but in no driving force behind them. Spells and magic are performed, but they aren't in worship of the devil- they're just done. There is a Christmas break at Hogwarts, but the birth of Christ is never mentioned. Lord of the Rings is the balance between the two: God, or Eru as he is referred to in the Silmarillion, created the world, and placed the Valar (high angels) in charge of different aspects of it. As Tolkien was a devout Catholic, a lot of subtle references to religion appear throughout the books. But his point was not about religion like Pullman's was: most think that Lord of the Rings is a commentary on the industrialization of England (the Shire) by the Machine (the Ring)- it is a thing in the power of human beings, yet it controls us. Tolkien destroys the Ring like Pullman destroys the Authority- Progress can go on without the destruction of the environment, and people must be responsible for their own acts of evil instead of saying that "God told me to do it." Let's recap: His Dark Materials is strong in ambition and its attempt to understand the nature of Good and Evil. Its main characters are well rounded, with different strengths and weaknesses both in their character and in the way they are portrayed. It strives to ask questions of its readers and makes them question what it is they believe about life, death, good and evil, and why. It is weak in its portrayal of organized religion due to the stereotypical characters of the priests, who should have been allowed into the grey region of good and evil like the other characters instead of written off as automatically being evil. Harry Potter, though unfinished, produces a good core of characters that struggle with themselves and those around them. Its settings and plots are creative and adventurous, and quite gripping. Its themes and messages aren't quite as strong as those in the other two, and are more normal in that they don't so radically question the reader's beliefs. Its weakest point lies in Voldemort, who comes back every year to present a new threat, which Harry always manages to get out of. It gets a bit tiresome after a time, and many have said that the fifth book is long-winded (although I haven't read it myself, so I can't judge). Harry's struggle with himself balances this out- the various parallels between him and Voldemort are interesting (although I hope he doesn't grow up to be quite so dull), as is how he must question himself and what he does to find the good and evil within himself. (But come on- does anyone really think Harry's gonna turn out evil?) Lord of the Rings provides a massive epic and a very creative world rich with culture and legend that is only partially explored by the trilogy itself. Its characters are nearly all clear-cut- Boromir is perhaps the only grey figure in the books, and he is killed by his desire for the Ring. Frodo, like Lyra and Harry, struggles with the evil in himself and tries to do what he knows needs to be done. The sometimes weak characters are balanced out by the themes of Good and Evil, and the underlying messages about the environment (and many other things). I'm not saying any is better than the others- each is very strong in its own field, and each accomplishes what it sets out to do; they also each have their flaws, which is only to be expected.
Some are more philosophical than others, some have better flow than others, some are written in a more appealing style than others, and some are more entertaining than others. In the end, it's up to the reader to decide which he or she prefers, but whatever they choose, that doesn't mean that they automatically dislike the others- in fact, it's very likely that they'll enjoy all three. Each could be put above the others for its own reasons, but the best choice would be to put them side by side as three very different, yet equally good, works of literature. From checker at panix.com Fri Sep 30 20:53:22 2005 From: checker at panix.com (Premise Checker) Date: Fri, 30 Sep 2005 16:53:22 -0400 (EDT) Subject: [Paleopsych] NYT: Tierney: Homo Sapiens 2.0 Message-ID: Homo Sapiens 2.0 New York Times opinion column by John Tierney, 5.9.27 http://select.nytimes.com/2005/09/27/opinion/27tierney.html A newspaper's most important function is to comfort troubled readers with horror stories from far away. Every day we strive to remind you: Hey, things could always be worse. This solace isn't easy to offer after two hurricanes. But consider the gray goo problem. Imagine that the Gulf Coast was inundated not with water but with a swarm of nanobots. These would be microscopic machines designed to break down substances like cancer cells in a body or pests in a farm field. But what if scientists accidentally created some superorganism that outcompeted all other life and wiped out everything on the Gulf Coast -- then spread like pollen around the world? What if they engineered nanobots that kept replicating and evolving until they broke down the substance of every living thing, leaving the planet covered in gray goo? This is part of what Joel Garreau calls the Hell scenario in ''Radical Evolution,'' his book analyzing the new forms of life -- including ''transhumans'' and ''posthumans'' -- coming to your neighborhood soon. A man has already used his thoughts to send e-mail and control a robotic arm. And in three years, there could be memory-enhanced humans who take pills to banish senior moments and raise their SAT scores by 200 points. Then, within a decade or two, there may come an ''inflection point in history'' comparable to the rise of humans from apes. People will use drugs, genetic tinkering and computer implants to make themselves and their children smarter than anyone today -- and this new breed will go on creating improved models of themselves at a breakneck pace. Unless, of course, they're all dead. The prophets of the Hell scenario warn of engineered viruses that are genocidal. Even if accidents or terrorism don't wipe out everyone, there's still the danger that the new species will eliminate the old-fashioned humans. Who needs those sickly slow-witted creatures anymore? My first impulse was to dismiss these apocalyptic visions along with all the previous ones. The population bomb, nuclear meltdowns, the energy crisis, cancer epidemics, global warming -- for decades I've been debunking prophecies of doom as either imaginary or wildly exaggerated. I know many scientists consider the gray-goo doomsday to be impossible. If nothing else, the nanobots would probably suffer a Windows crash long before eating the planet. Still, the Hell scenario gives me pause because its promoters aren't just the usual technophobes. Bill Joy, the former Sun Microsystems scientist who's been called the Edison of the Internet, is one of the prophets of doom calling for restraints on researchers.
But no one has any practical suggestions for how to stop this work. Banning human cloning in America won't stop it from occurring somewhere else, like South Korea, because the potential benefits of these new technologies are irresistible. Some perfectly respectable scientists believe in what Garreau calls the Heaven scenario: a world without pain, privation, disease or death. Everyone will have instant access to any information; people will trade thoughts with one another. They'll be so networked that they may be considered one giant organism. To skeptics, this Heaven scenario is also known as ''the Rapture of the nerds.'' The more likely outcome is a scenario that Garreau calls Prevail. The new technologies will cause problems, but humans will muddle through, work together to find solutions and emerge better off, just as they always have. Garreau argues that the new breed of interconnected people will be collectively wiser than ever before -- he actually makes a persuasive case that cellphones and e-mail are a force for social good. (If you doubt this, read ''Radical Evolution'' and join the discussion of it at my book club at nytimes.com/tierney.) Stepping up the evolutionary ladder sounds so appealing that I'm glad to risk even the gray goo problem, but it wouldn't hurt to have a fallback plan. The best insurance I can imagine against a global plague would be to keep some humans cloistered from the global network, like the Irish monasteries that kept learning alive during the Dark Ages, or the grapevines in California that were taken back to France after its vines were wiped out. We might encourage some of the prophets of doom to practice their philosophy by withdrawing to a remote island and giving up their Internet connections, but the ideal refuge would be Mars. If officials hope to get money for NASA's new program of manned exploration, I suggest they go to Capitol Hill with a two-word sales pitch: gray goo. From checker at panix.com Fri Sep 30 20:53:33 2005 From: checker at panix.com (Premise Checker) Date: Fri, 30 Sep 2005 16:53:33 -0400 (EDT) Subject: [Paleopsych] NS: Ray Kurzweil: Human 2.0 Message-ID: Ray Kurzweil: Human 2.0 http://www.newscientist.com/article.ns?id=mg18725181.600&print=true 5.9.24 IN 2003, Time magazine organised a "Future of Life" conference celebrating the 50th anniversary of Watson and Crick's discovery of the structure of DNA. All the speakers - myself included - were asked what we thought the next 50 years would bring. Most of the predictions were short-sighted. James Watson's own prediction was that in 50 years, we'll have drugs that allow us to eat as much as we want without gaining weight. "Fifty years?," I replied. In my opinion that's far too pessimistic. We've already demonstrated it in mice, and human drugs using the relevant techniques are in development. We can expect them in five to 10 years, not 50. The mistake that Watson and virtually every other presenter made was to use the progress of the past 50 years as a model for the next half-century. I describe this way of looking at the future as the "intuitive linear" view: people intuitively assume that the current rate of progress will continue for future periods. But a serious assessment of the history of technology reveals that technological change is not linear, but exponential. You can examine the data in different ways, on different timescales and for a wide variety of technologies, ranging from electronic to biological. 
You can analyse the implications, ranging from the sum of human knowledge to the size of the economy. However you measure it, the exponential acceleration of progress and growth applies. Understanding exponential progress is key to understanding future trends. Over the long term, exponential growth produces change on a scale dramatically different from linear growth. Consider that in 1990, the human genome project was widely regarded as controversial. In 1989, we sequenced only one-thousandth of the genome. But from 1990 onwards the amount of genetic data sequenced doubled every year - a rate of growth that continues today - and the transcription of the human genome was completed in 2003. We are making exponential progress in every type of information technology. Moreover, virtually all technologies are becoming information technologies. If we combine all of these trends, we can reliably predict that, in the not too distant future, we will reach what is known as The Singularity. This is a time when the pace of technological change will be so rapid and its impact so deep that human life will be irreversibly transformed. We will be able to reprogram our biology, and ultimately transcend it. The result will be an intimate merger between ourselves and the technology we are creating. The evidence for this ubiquitous exponential growth is abundant. In my new book, The Singularity is Near, I have more than 40 graphs from a broad variety of fields, including communications, the internet, brain scanning and biological technologies, that reveal exponential progress. Broadly speaking, my models show that we are doubling the paradigm-shift rate (roughly, the rate of technical innovation) every decade. Throughout the 20th century, the rate of progress gradually picked up speed. By the end of the century the rate was such that the sum total of the century's achievements was equivalent to about 20 years of progress at the 2000 rate. Growth in information technology is particularly rapid: we're doubling its power, as measured by price-performance, bandwidth, capacity and many other measures, every year or so. That's a factor of a thousand in 10 years, a million in 20 years, and a billion in 30 years, although a slow, second level of exponential growth means that a billion-fold improvement takes only about a quarter of a century. The exponential growth of computing goes back over a century and covers five major paradigms: electromechanical computing as used in the 1890 US census, relay-based computing as used to crack Nazi cryptography in the early 1940s, vacuum-tube-based computing as used by CBS to predict the election of Dwight Eisenhower in 1952, discrete-transistor-based computing as used in the first space launches in the early 1960s, and finally computing based on integrated circuits, invented in 1958 and applied to mainstream computing from the late 1960s. Each time it became apparent that one paradigm was about to run out of steam, this realisation resulted in research pressure to create the next paradigm. Today we have over a decade left in the paradigm of shrinking transistors on an integrated circuit, but there has already been enormous progress in creating the sixth major computing paradigm of three-dimensional molecular computing, using carbon nanotubes for example. And electronics is just one example of many. As another, it took us 14 years to sequence the genome of HIV; SARS took only 31 days. 
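[A quick sanity check of the doubling arithmetic above: the short Python sketch below is an editorial illustration, not part of the article. It assumes one clean doubling of price-performance per year, as the passage does, and prints the cumulative improvement factor after 10, 20 and 30 years.]

# Illustrative sketch (not from the article): compound an assumed doubling of
# price-performance once per year and report the cumulative factor.
def improvement_factor(years, doublings_per_year=1.0):
    """Cumulative improvement after `years` at a fixed doubling rate."""
    return 2.0 ** (doublings_per_year * years)

for years in (10, 20, 30):
    print(f"{years:2d} years -> ~{improvement_factor(years):,.0f}x")

# Output: roughly 1,024x, 1,048,576x and 1,073,741,824x -- the "thousand,
# million, billion" that the passage rounds to.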
Accelerating returns The result is that we can reliably predict such measures as price-performance and capacity of a broad variety of information technologies. There are, of course, many things that we cannot dependably anticipate. In fact, our inability to make reliable predictions applies to any specific project. But the overall capabilities of information technology in each field can be projected. And I say this not just with hindsight; I have been making forward-looking predictions of this type for more than 20 years. We see examples in other areas of science of very smooth and reliable outcomes resulting from the interaction of a great many unpredictable events. Consider that predicting the path of a single molecule in a gas is essentially impossible, but predicting the properties of the entire gas - comprised of a great many chaotically interacting molecules - can be done very reliably through the laws of thermodynamics. Analogously, it is not possible to reliably predict the results of a specific project or company, but the overall capabilities of information technology, comprised of many chaotic activities, can nonetheless be dependably anticipated through what I call "the law of accelerating returns". So what does the law of accelerating returns tell us about the future? In terms of the aforementioned paradigm-shift rate, between 2000 and 2014 we'll make 20 years of progress at 2000 rates, equivalent to the entire 20th century. And then we'll do the same again in only seven years. To express this another way, we won't experience 100 years of technological advance in the 21st century; we will witness in the order of 20,000 years of progress when measured by the rate of progress in 2000, or about 1000 times that achieved in the 20th century. Above all, information technologies will grow at an explosive rate. And information technology is the technology that we need to consider. Ultimately everything of value will become an information technology: our biology, our thoughts and thinking processes, manufacturing and many other fields. As one example, nanotechnology-based manufacturing will enable us to apply computerised techniques to automatically assemble complex products at the molecular level. This will mean that by the mid-2020s we will be able to meet our energy needs using very inexpensive nanotechnology-based solar panels that will capture the energy in 0.03 per cent of the sunlight that falls on the Earth, which is all we need to meet our projected energy needs in 2030. A common objection is that there must be limits to exponential growth, as in the example of rabbits in Australia. The answer is that there are, but they're not very limiting. By 2020, $1000 will purchase 10^16 calculations per second (cps) of computing (compared with about 10^9 cps today), which is the level I estimate is required to functionally simulate the human brain. Another few decades on, and we will be able to build more optimal computing systems. For example, one cubic inch of nanotube circuitry would be about 100 million times more powerful than the human brain. The ultimate 1-kilogram computer - about the weight of a laptop today - which I envision late in this century, could provide 10^42 cps, about 10 quadrillion (10^16) times more powerful than all human brains put together today. And that's if we restrict the computer to functioning at a cold temperature. If we find a way to let it get hot, we could improve that by a factor of another 100 million. 
And of course, we'll devote more than 1 kilogram of matter to computing. Ultimately, we'll use a significant portion of the matter and energy in our vicinity as a computing substrate. Our growing mastery of information processes means that the 21st century will be characterised by three great technology revolutions. We are in the early stages of the "G" revolution (genetics, or biotechnology) right now. Biotechnology is providing the means to actually change your genes: not just designer babies but designer baby boomers. One technology that is already here is RNA interference (RNAi), which is used to turn genes off by blocking messenger RNA from expressing specific genes. Each human gene is just one of 23,000 little software programs we have inherited that represent the design of our biology. It is not very often that we use software programs that are not upgraded and modified for several years, let alone thousands of years. Yet these genetic programs evolved tens of thousands of years ago when conditions were very different. For one thing, it was not in the interest of the species for people to live very long. But since viral diseases, cancer and many other diseases depend on gene expression at some crucial point in their life cycle, RNAi promises to be a breakthrough technology. Grow your own New means of adding new genes are also emerging that have overcome the problem of placing genetic information precisely. One successful technique is to add the genetic information in vitro, making it possible to ensure the genetic information is inserted in the proper place. Once verified, the modified cell can be reproduced in vitro and large numbers of modified cells introduced into the patient's bloodstream, where they will travel to and become embedded in the correct tissues. This approach to gene therapy has successfully cured pulmonary hypertension in rats and has been approved for human trials. Another important line of attack is to regrow our own cells, tissues and even whole organs, and introduce them into our bodies. One major benefit of this "therapeutic cloning" technique is that we will be able to create these new tissues and organs from versions of our cells that have also been made younger - the emerging field of rejuvenation medicine. For example, we will be able to create new heart cells from your skin cells and introduce them into your system through the bloodstream. Over time, your heart cells will all be replaced, resulting in a rejuvenated "young" heart with your own DNA. Drug discovery was once a matter of finding substances that produced some beneficial effect without excessive side effects. This process was similar to early humans' tool discovery, which was limited to simply finding rocks and natural implements that could be used for helpful purposes. Today, we are learning the precise biochemical pathways that underlie both disease and ageing processes, and are able to design drugs to carry out precise missions at the molecular level. The scope and scale of these efforts are vast. But perfecting our biology will only get us so far. The reality is that biology will never be able to match what we will be capable of engineering, now that we are gaining a deep understanding of biology's principles of operation. That will bring us to the "N" or nanotechnology revolution, which will achieve maturity in the 2020s. There are already early impressive experiments. 
A biped nanorobot created by Nadrian Seeman and William Sherman of New York University can walk on legs just 10 nanometres long, demonstrating the ability of nanoscale machines to execute precise manoeuvres. MicroCHIPS of Bedford, Massachusetts, has developed a computerised device that is implanted under the skin and delivers precise mixtures of medicines from hundreds of nanoscale wells inside it. There are many other examples. Version 2.0 By the 2020s, nanotechnology will enable us to create almost any physical product we want from inexpensive materials, using information processes. We will be able to go beyond the limits of biology, and replace your current "human body version 1.0" with a dramatically upgraded version 2.0, providing radical life extension. The "killer app" of nanotechnology is "nanobots", blood-cell sized robots that can travel in the bloodstream destroying pathogens, removing debris, correcting errors in DNA and reversing ageing processes. We're already in the early stages of augmenting and replacing each of our organs, even portions of our brains with neural implants, the most recent versions of which allow patients to download new software to their implants from outside their bodies. Each of our organs will ultimately be replaced. For example, nanobots could deliver to our bloodstream an optimal set of all the nutrients, hormones and other substances we need, as well as remove toxins and waste products. The gastrointestinal tract could then be reserved for culinary pleasures rather than the tedious biological function of providing nutrients. After all, we've already in some ways separated the communication and pleasurable aspects of sex from its biological function. The most profound transformation will be "R" for the robotics revolution, which really refers to "strong" AI, or artificial intelligence at the human level (see "Reverse engineering the human brain"). Hundreds of applications of "narrow AI" - machine intelligence that equals or exceeds human intelligence for specific tasks - already permeate our modern infrastructure. Every time you send an email or make a cellphone call, intelligent algorithms route the information. AI programs diagnose electrocardiograms with an accuracy rivalling doctors, evaluate medical images, fly and land aircraft, guide intelligent autonomous weapons, make automated investment decisions for over a trillion dollars of funds, and guide industrial processes. A couple of decades ago these were all research projects. With regard to strong AI, we'll have both the hardware and software to recreate human intelligence by the end of the 2020s. We'll be able to improve these methods and harness the speed, memory capabilities and knowledge-sharing ability of machines. Ultimately, we will merge with our technology. This will begin with nanobots in our bodies and brains. The nanobots will keep us healthy, provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the internet and greatly expand human intelligence. But keep in mind that non-biological intelligence is doubling in capability each year, whereas our biological intelligence is essentially fixed. As we get to the 2030s, the non-biological portion of our intelligence will predominate. By the mid 2040s, the non-biological portion of our intelligence will be billions of times more capable than the biological portion. 
Non-biological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle. This is not a utopian vision: the GNR technologies each have perils to match their promise. The danger of a bioengineered pathological virus is already with us. Self-replication will ultimately be feasible in non-biological nanotechnology-based systems as well, which will introduce its own dangers. This is a topic for another essay, but in short the answer is not relinquishment. Any attempt to proscribe such technologies will not only deprive human society of profound benefits, but will drive these technologies underground, which would make the dangers worse. Some commentators have questioned whether we would still be human after such dramatic changes. These observers may define the concept of human as being based on our limitations, but I prefer to define us as the species that seeks - and succeeds - in going beyond our limitations. Because our ability to increase our horizons is expanding exponentially rather than linearly, we can anticipate a dramatic century of accelerating change ahead. Reverse engineering the human brain The most profound transformation will be in "strong" AI, that is, artificial intelligence at the human level. To recreate the capabilities of the human brain, we need to meet both the hardware and software requirements. Achieving the hardware requirement was controversial five years ago, but is now largely a mainstream view among informed observers. Supercomputers are already at 100 trillion (10^14) calculations per second (cps), and will hit 10^16 cps around the end of this decade, which is the level I estimate is required to functionally simulate the human brain. Several supercomputers with 10^15 cps are already on the drawing board, with two Japanese efforts targeting 10^16 cps around the end of the decade. By 2020, 10^16 cps will be available for around $1000. So now the controversy is focused on the algorithms. To understand the principles of human intelligence we need to reverse-engineer the human brain. Here, progress is far greater than most people realise. The spatial and temporal resolution of brain scanning is progressing at an exponential rate, roughly doubling each year. Scanning tools, such as a new system from the University of Pennsylvania, can now see individual interneuronal connections, and watch them fire in real time. Already, we have mathematical models of a couple of dozen regions of the brain, including the cerebellum, which comprises more than half the neurons in the brain. IBM is creating a highly detailed simulation of about 10,000 cortical neurons, including tens of millions of connections. The first version will simulate electrical activity, and a future version will also simulate chemical activity. By the mid 2020s, it is conservative to conclude that we will have effective models of the whole brain. There are a number of key ways in which the organisation of the brain differs from a conventional computer. The brain's circuits, for example, transmit information as chemical gradients travelling at only a few hundred metres per second, which is millions of times slower than electronic circuits. The brain is massively parallel: there are about 100 trillion interneuronal connections all computing simultaneously. The brain combines analogue and digital phenomena. The brain rewires itself, and it uses emergent properties, with intelligent behaviour emerging from the brain's chaotic and complex activity. 
But as we gain sufficient data to model neurons and regions of neurons in detail, we find that we can express the coding of information in the brain and how this information is transformed in mathematical terms. We are then able to simulate these transformations on conventional parallel computing platforms, even though the underlying hardware architecture is quite different. One benefit of a full understanding of the human brain will be a deep understanding of ourselves, but the key implication is that it will expand the tool kit of techniques we can apply to create artificial intelligence. We will then be able to create non-biological systems that match human intelligence. These superintelligent computers will be able to do things we are not able to do, such as share knowledge and skills at electronic speeds. From checker at panix.com Fri Sep 30 20:54:53 2005 From: checker at panix.com (Premise Checker) Date: Fri, 30 Sep 2005 16:54:53 -0400 (EDT) Subject: [Paleopsych] WP: Over-Ruled Message-ID: Over-Ruled http://www.washingtonpost.com/wp-dyn/content/article/2005/09/23/AR2005092302377_pf.html [I learned this when I was a small kid. But, like several people reported below, I get into endless trouble, though most often no one notices that a rule was broken, or at least didn't say so. I've never been praised for breaking a rule.] When There's No One to Ask, Just Do It By David Brown Sunday, September 25, 2005; B01 The people and agencies responding to Hurricane Rita's ominous approach to Texas and Louisiana appear to be fast learners. Preparations for this latest weather onslaught, while hardly perfect, went better than they did a month ago in New Orleans. People evacuated earlier. There were more shelters awaiting their arrival. Food and water were stockpiled in great quantities; troops and surveillance helicopters were ready to help those who stayed behind; an improved system of post-storm communication was in place. But preparation -- even when it hews closely to the "game plan" -- only gets you so far. In the coming days, people with varying levels of authority all along the Gulf Coast will likely have to make many decisions. Often they'll have to make them quickly, alone, and without experience to guide them. Let's hope they have learned one more thing from Katrina: Sometimes you need to break the rules to avert greater disaster. I got a glimpse of how some people learned this lesson when I interviewed some of the 65 workers who weathered Katrina and the resulting flood at New Orleans's 70-acre Carrollton Water Purification Plant. The day after the storm hit, the plant stopped working for the first time since 1906. Engineers, electricians, pump-operators and laborers scrambled to get it going again. Normally, when any worker at Carrollton throws an important switch, fills a boiler or starts up a pump, he must first get permission from the control room. That's the way they tried fixing it at first, but the plant came on line for just 20 minutes before once again shutting down. "The intercoms were out and cell phones didn't work," John R. Huerkamp, the chief of operations, told me. "We finally got to the point where the gentleman who was in charge of central control had to say: 'Look, if you in the boiler room need to roll a pump, roll it. You don't have to call and ask permission. Just do it.' " The new rule didn't guarantee success: On the second try the next day, the plant operated for only an hour. But it helped make success possible on the third try. 
"This was a whole learning experience," Huerkamp said. It's unfortunate that more people in New Orleans -- and in Washington, too -- didn't catch on so quickly. But the sad truth is that despite its success as a sportswear slogan, "Just do it" isn't a terribly popular idea in real American life. We've become a society of rule-followers and permission-seekers. Despite our can-do self-image, what we really want is to be told what to do. When the going gets tough, the tough get consent forms. To be honest, the forced relocation of a major city's population in less than a week was notgoing to happen without chaos, violence and death, even if it went according to script. But it might have gone better with something added to the script -- a little more insubordination and freelancing. How different might things have been if officials on the ground had somehow commandeered every bus or other large conveyance they could locate to get people out of the lowlands as soon as water levels started rising? Wouldn't it have been better if, before the storm, someone in the city public works department had unilaterally moved water, food, generators, gas cans and portable toilets to places like the Superdome and the Convention Center, where it was likely people would congregate? If an assistant school superintendent had ordered all the school buses moved to high ground? If the crews of some of the innumerable helicopters circling overhead after the flood had decided to drop off pallets of drinking water on the "interstate islands" where people were marooned for days? It's difficult to say what specific actions might have made what degree of difference. But it seems that there was a dearth of big, risky and unambiguous decisions by mid-level responders -- managers or intermediate officials with some resources potentially under their control, who had the greatest opportunity to do the right thing at the right time. Instead, there was an excess of waiting for leadership and coordination. You say letting people throw the switches whenever they think the time is right is a recipe for anarchy? Certainly it can be under normal circumstances. But a hurricane's aftermath creates abnormal circumstances. Anarchy is what happens when people are left without the essentials for life -- and are terrified to boot. They find their own stocks of water and food (and guns and drugs and liquor, too). The unfortunate truth is, when a 100-year hurricane hits a city that is poor and violent under the best of circumstances, if the people in charge don't break the rules, the people who aren't in charge will. It seems at least possible that there would have been less disorder after the storm if more people had put their hunches and reputations on the line before and during it. Of course there were examples of constructive rule-breaking in the Katrina disaster zone. One of the more memorable involved the mayor of Gulfport, Miss., who, as reported in this newspaper,ordered his police chief to hot-wire a privately owned fuel truck and move it onto city property. One of the more incredible was the report in the New York Times about two Navy helicopter pilots who, after delivering food and water to military installations along the Gulf Coast, heard a radio transmission saying helicopters were needed to rescue people in New Orleans. Out of radio range of their commanders and unable to get permission, they nevertheless went to the rescue of about 100 people. When they got back they were reprimanded, according to the article. 
One pilot was grounded and put in charge of overseeing a kennel holding the pets of evacuated service members. There were others. Some search-and-rescue teams agreed to carry out pets -- against the rules -- because they knew it was the only way the animals' owners would leave. But why weren't there more examples of ingenuity and initiative? Aren't Americans historically a people who don't bow to authority, who do things their own way? Isn't that part of the mythology of American restlessness, inventiveness and westward migration? From what I've seen -- in daily life, as well as in my reporting -- two things have poisoned American decisiveness, at least in the public sector. One is the consciousness of legal liability that has permeated our culture in the most astonishing way. The shortest, safest school outing requires signed releases. School nurses can't give children a tablet of ibuprofen without parental permission. Paper coffee cups warn me that coffee is hot. I bought a kayak a couple of years ago that came with a sticker -- "Important Notice! Read Before Use!" -- informing me that kayaks are used on water and that people can drown if they don't wear life jackets or don't know how to swim. This don't-sue-me mindset can pop up anywhere, any time. A small example occurred last winter when I rode a military plane from Banda Aceh to Jakarta while reporting on the tsunami in Indonesia. The plane carried about 60 displaced Indonesians and 15 Westerners, including a security guard from the U.S. embassy who was accompanying several government contractors. We landed at 4 a.m. at the military airport in the pouring rain. Shaking with fever and anxious about how I would find my way to downtown Jakarta at that hour, I asked the embassy guard whether I could get a ride in the van that was waiting for him and the contractors. "I don't know who you are," he said. "Anyway, our insurance doesn't cover people like you in the car." The Hungarian ambassador to Indonesia, also on the plane and clearly a much bigger risk-taker, gave me a lift in his chauffeur-driven automobile. Another reason many Americans in authority hesitate to make risky decisions is the fear of criticism and even public humiliation -- at the hands of the news media, late-night comedians and, now, the nonstop cacophony of the blogosphere. Many members of my profession make a living, pay mortgages and send children to college in part by telling people how they could have done things better. We make a point about conflicts of interest, whether real or merely perceived, and whether or not they would make any difference. We get on the case of people who do too much, and we get on the case of people who do too little. We are obsessed with motive, and in general assume questionable competence or bad faith among public servants. Except in the rare case where action is immediately deemed heroic and subjected to little criticism -- the behavior of fire and law enforcement officials on Sept. 11, 2001, is a notable example -- there are few functions of government that, in their minds at least, reporters, editorial writers and columnists couldn't do better. Not to mention Jon Stewart. While this critic-and-second-guesser role is an important part of journalism, in practice there's too much of it, and it comes at a price. The price is that people have become afraid to do things that fall outside their job description without explicit permission and implied forgiveness for possible bad outcomes.
Five days after the hurricane, a Federal Emergency Management Agency official ordered Mark N. Perlmutter, a 50-year-old orthopedic surgeon from Pennsylvania, to stop treating patients on the tarmac of the New Orleans airport because he had not filled out the proper paperwork. He protested, explaining that the woman he had just diagnosed with diabetic ketoacidosis might die without immediate intravenous fluids and insulin. But he was led away. The official said to him, "We cannot guarantee tort liability protection," Perlmutter told me yesterday. After learning that on-site certification wasn't yet possible, the doctor was allowed to return to the tarmac and get his medical instruments. The woman, who was semi-conscious when he'd first seen her, was dead, Perlmutter said. He then flew to Baton Rouge in a helicopter and got certified, a process he said "took about two minutes." This is an extreme example of rule-following -- which is why it got news coverage. There are other examples. One of them was the behavior of the 769th and 527th engineering battalions of the Louisiana National Guard, which were housed at the Convention Center when that building became an island of deprivation, chaos and lawlessness. The 350 armed soldiers knew enough about what was going on to barricade their part of the building against the mob, and to come and go from a side door so few people would know of their presence. Later, they said no one had told them to restore order in the convention center. That's bad enough (and I know this is the know-it-all reporter talking). What's worse is that they didn't do it without being asked. "The idea of helping with the convention center never came up. We were preparing ourselves for the next mission," said the 769th commander, Maj. Keith Waddell, according to a Washington Post report. This was an engineering battalion, not trained in quelling civil disturbance. Fair enough. Then why issue them rifles, ammunition and helmets? These weren't U.S. troops, whose role in local law enforcement is circumscribed by federal law. This was a local Guard unit. Isn't the common denominator of being part of the state militia -- in whatever function -- that you are expected to keep order at times of popular rebellion? Certainly the prospect of entering a crowded hall containing armed men who might shoot at you in the dark behind the protective screen of hundreds of innocent civilians is terrifying. It is also a situation that very possibly could result in the death of guardsmen. But isn't this a risk that people who join the guard agree to face? The idea "never even came up"? I personally doubt this. But if it's true, it makes the whole thing even more astounding. By now, New Orleans appears to have become an extremely orderly place. At nearly every corner, soldiers stand ready to check ID. Rules are followed punctiliously. Everyone coloring inside the lines -- it's a great system until the wind starts blowing really, really hard. Author's e-mail: browndm at washpost.com David Brown covers science and medicine for The Post. From checker at panix.com Fri Sep 30 20:55:04 2005 From: checker at panix.com (Premise Checker) Date: Fri, 30 Sep 2005 16:55:04 -0400 (EDT) Subject: [Paleopsych] Foreign Policy: Here Today, Gone Tomorrow Message-ID: Here Today, Gone Tomorrow http://www.foreignpolicy.com/story/cms.php?story_id=3158&print=1 [et seq., through 3188. All articles appended.]
September/October 2005 * The Sanctity of Life By Peter Singer * Political Parties By Fernando Henrique Cardoso * The Euro By Christopher Hitchens * Japanese Passivity By Shintaro Ishihara * Monogamy By Jacques Attali * Religious Hierarchy By Harvey Cox * The Chinese Communist Party By Minxin Pei * Auto Emissions By John Browne * The Public Domain By Lawrence Lessig * Doctors' Offices By Craig Mundie * The King of England By Felipe Fernández-Armesto * The War on Drugs By Peter Schwartz * Laissez-Faire Procreation By Lee Kuan Yew * Polio By Julie L. Gerberding * Sovereignty By Richard N. Haass * Anonymity By Esther Dyson Albert Einstein claimed he never thought of the future. "It comes soon enough," he said. FOREIGN POLICY decided not to grant 16 leading thinkers that luxury. Instead, to mark our 35th anniversary, we asked them to speculate on the ideas, values, and institutions the world takes for granted that may disappear in the next 35 years. Their answers range from fields as diverse as morals and religion to geopolitics and technology. We may be happy to see some of these endangered species make an exit, but others will be mourned. All of them will leave a mark. The Sanctity of Life By Peter Singer During the next 35 years, the traditional view of the sanctity of human life will collapse under pressure from scientific, technological, and demographic developments. By 2040, it may be that only a rump of hard-core, know-nothing religious fundamentalists will defend the view that every human life, from conception to death, is sacrosanct. In retrospect, 2005 may be seen as the year in which that position became untenable. American conservatives have for several years been in the awkward position of defending a federal funding ban on creating new embryos for research that prevents U.S. scientists from leading an area of biomedical research that could revolutionize the treatment of many common diseases. When they are honest, conservatives acknowledge that giving up some medical advances is simply the price to be paid for doing the right thing. This year, however, that view became much more uncomfortable. South Korean researchers showed that human stem cells can be cloned by replacing the nucleus of an unfertilized human egg with the nucleus of an ordinary cell. The South Korean breakthrough poses a stark challenge to the conservative position. The possibility of cloning from the nucleus of an ordinary cell undermines the idea that embryos are precious because they have the potential to become human beings. Once it becomes clear that every human cell contains the genetic information to create a new human being, the old arguments for preserving unique human embryos fade away. The year 2005 is also significant, at least in the United States, for ratcheting up the debate about the care of patients in a persistent vegetative state. The long legal battle over the removal of Terri Schiavo's feeding tube led President George W. Bush and the U.S. Congress to intervene, both seeking to keep her alive.
Yet the American public surprised many pundits by refusing to support this intervention, and the case produced a surge in the number of people declaring they did not wish to be kept alive in a situation such as Schiavo's. Technology will drive this debate. As the sophistication of techniques for producing images of soft tissue increases, we will be able to determine with a high degree of certainty that some living, breathing human beings have suffered such severe brain damage that they will never regain consciousness. In these cases, with the hope of recovery gone, families and loved ones will usually understand that even if the human organism is still alive, the person they loved has ceased to exist. Hence, a decision to remove the feeding tube will be less controversial, for it will be a decision to end the life of a human body, but not of a person. As we approach 2040, the Netherlands and Belgium will have had decades of experience with legalized euthanasia, and other jurisdictions will also have permitted either voluntary euthanasia or physician-assisted suicide for varying lengths of time. This experience will puncture exaggerated fears that the legalization of these practices would be a first step toward a new holocaust. By then, an increasing proportion of the population in developed countries will be more than 75 years old and thinking about how their lives will end. The political pressure for allowing terminally or chronically ill patients to choose when to die will be irresistible. When the traditional ethic of the sanctity of human life is proven indefensible at both the beginning and end of life, a new ethic will replace it. It will recognize that the concept of a person is distinct from that of a member of the species Homo sapiens, and that it is personhood, not species membership, that is most significant in determining when it is wrong to end a life. We will understand that even if the life of a human organism begins at conception, the life of a person -- that is, at a minimum, a being with some level of self-awareness -- does not begin so early. And we will respect the right of autonomous, competent people to choose when to live and when to die. Peter Singer is professor at Princeton University and the University of Melbourne. His books include Practical Ethics (New York: Cambridge University Press, 1979) and Rethinking Life and Death: The Collapse of Our Traditional Ethics (New York: St. Martin's Press, 1995).

Political Parties By Fernando Henrique Cardoso

We take it for granted that political parties are vital to modern political life. They have shaped representative democracies since the late 19th century. Yet their prospects are not bright in today's large democracies. In fact, these powerful political machines may soon disappear. The ground is already shifting underneath their feet. Political parties have based their platforms on ideological and class divides that are becoming less important, especially in more advanced societies. Although class consciousness still matters, ethnic, religious, and sexual identities now trump class, and these affiliations cut across traditional political party lines. Today, the labels "left" and "right" have less and less meaning. Citizens have developed multiple interests, diverse senses of belonging, and overlapping identities. Some political parties have managed to adapt. Think of the British Labour Party, under Prime Minister Tony Blair, or Brazil's Workers' Party, whose economic policy has very little to do with its trade union origins.
Others won't be so lucky. Political dislocation exists alongside a growing fatigue with traditional forms of political representation. People no longer trust the political establishment. They want a greater say in public matters and usually prefer to voice their interests directly or through interest groups and nongovernmental organizations. The debate on genetically modified food in Europe, for example, can hardly be understood without reference to organizations allegedly representing consumer interests, such as Greenpeace. And thanks to modern communication, citizens' groups can bypass political parties in shaping public policy. Political parties no longer have a lock on legitimacy. Voting, of course, remains essential. But voting doesn't require political parties, either. Indeed, the more important the issue, the more likely governments in places as different as Switzerland, Bolivia, and California will seek legitimacy directly in referenda rather than through parliaments or legislatures, the traditional stomping grounds of parties. The rejection of the European constitution in France and the Netherlands demonstrates that major political parties -- all of which supported the constitution -- often have little leverage once an issue is posed to the people. In this environment, political parties are at a critical juncture: They must transform themselves or become irrelevant. To survive, they must design flexible agendas not dependent on traditional class and ideological divides. Somehow, they'll have to recapture the public imagination. And they'll have to accept that others deserve a seat at the political table. Otherwise, the party may be over. Fernando Henrique Cardoso was president of Brazil from 1995 to 2003.

The Euro By Christopher Hitchens

"So," said Jörg Haider with a slightly unpleasant smile, "you like the new Esperanto money?" I was interviewing the leader of Austria's Freedom Party in early 2003, at a time when he was also applauding Saddam Hussein and supporting the suicide bombers in Israel and Palestine. His sarcastic comment about the newly introduced euro notes made me want to believe in the new currency even more. On a long reporting trip to Europe, I had been rather affected to find myself using the same money in Paris one evening as I had used to pay a Berlin taxi driver in the morning. I remembered how the Franco-German coal and steel agreement that was the nucleus of the European project was designed to make war within continental Europe materially impossible. On New Year's Day 2002, it suddenly became possible to employ the same currency in Finland as in Greece (which surrendered the world's oldest surviving monetary denomination in the form of the drachma). Why should one listen to any sneering about that, especially from a man not fully reconciled to the outcome of the Second World War? My internationalist prejudice is not something for which I feel like apologizing, even now. I remember how I twisted with embarrassment when Norman Lamont, British Prime Minister John Major's chancellor of the exchequer, returned from Brussels with the grand news that he had won the right to keep the visage of Her Majesty the Queen on any British version of the euro bill. If the Germans could make the remarkable sacrifice of the deutsche mark, their greatest postwar achievement, then why quibble over the insignia of the House of Windsor?
I looked forward to showing my children the old British currency, just as I had kept a sentimental box of the ancient British coinage that had been making holes in our pockets before decimalization. And now I cant quite believe that my children, or their children, will be using the Esperanto money after all. As suddenly as it began, the whole idea of a common currency seems to have receded. The likelihood of new countries adopting the euro has become remote ever since the French and the Dutch repudiated the proposed European constitution earlier this year. But more than that, there is a pronounced nostalgia for the old money in Germany and in other nations that have already adopted the euro. If a referendum is involved, I cannot see the British electorate voting to abandon the pound, with or without the queens head, in any circumstances. The Scandinavian periphery now seems less, rather than more, persuadable. As for the new and aspiring members, such as Poland and Turkey, one winces to think of the disillusionment that will set in now that so many brave promises will be postponed. This economic setback is determined in part by political and bureaucratic failures large and small. Europes passport, to take a tiny example, could have been worth flourishing at a frontier post. But a series of dull compromises reduced it to a tawdry paperback, bound in some off-color maroon: too obviously a document designed by a committee. Then I should like to know at what dire meeting it was decided that the first seven words of the preamble to the European Constitution would read: His Majesty The King of the Belgians Until Albania or Belarus joins, which seems a long way off, Belgium and its monarch come first in the European alphabet. But this is not how things were done in Philadelphia, and the emphasis is not at all designed to produce a more perfect union. I take absolutely no pleasure in saying this. I did not at all care for the alliance of parties, from xenophobic to post-Stalinist, that combined to defeat the constitution and that now yearn for the euro to be undone. But I cant rid myself of the memory of that smirk on Haiders face. If the euro is going to be only one currency among many, then it will have lost its essential point. Esperanto aimed to replace the Babel of competing languages with one universal tongue, and it succeeded only in adding an extra tongue that was a mere hybrid. A euro that is legal tender only in some parts of Europe will not only emphasize the continents failure to eliminate differences: It will itself become one of those differences. Christopher Hitchens is a columnist for Vanity Fair and a visiting professor of liberal studies at the New School University. His most recent book is Thomas Jefferson: Author of America (New York: HarperCollins, 2005). Japanese Passivity By Shintaro Ishihara In todays accelerating world, we are exposed to changes that might have taken two or three hundred years to unfold during the Middle Ages. Time and space have contracted, and nothing now happens in isolation. Japan is having difficulty adjusting to this new world. It clings to a hopelessly idealistic and historically illegitimate constitution handed down by U.S. occupation forces nearly 60 years ago to block Japans reemergence as a military power. Japan now entrusts its survival to the United States, has forsaken independent thinking, and has become spineless. Some people have contended that Japan can prosper as a nation of peaceful merchants. 
That might have been possible as long as the United States was a reliable guardian. Today, with the limited capability of the United States as a superpower apparent, this dependence is extremely risky for Japan. It is ironic that the Japanese economy -- especially in the financial sector -- is susceptible to plunder by the very Americans who were originally supposed to be our patrons. The Japanese used to have the spirit and backbone of the samurai, the same warriors who were applauded by Walt Whitman when they visited the United States in the 1860s. When will we recover our national virtue, described so well by Ruth Benedict in The Chrysanthemum and the Sword? Much will depend on how East Asia evolves, especially militarily, in the next decade. One critical factor will be where China -- with its growing military and stubborn Communist Party -- casts its gaze and whether its ambitions will be pursued with the same kind of hegemonic intentions employed in Tibet. It will also depend on whether China, which has repeatedly asserted claims on Japanese territory, persists in its provocations. I wonder how the United States will interpret its security treaty with Japan if our nation decides to confront China, perhaps even militarily, in the dispute over the Senkaku Islands, a part of Okinawa with potentially valuable seabed resources. There are many other uncertainties. The overheated Chinese economy is on the verge of collapse. What form will the frustration of the Chinese people take, and how will it erupt? Economic collapse in China may trigger a Soviet-style disintegration that will lead to the dissolution of the Communist regime. Nor is China the only concern. North Korea, with a political regime that can only be described as insane, is busily developing a nuclear capability and brandishing it as a bargaining chip. Let us not forget that this is a terrorist nation that has abducted more than 100 Japanese citizens and likely murdered most of them. Pyongyang has warned that it would hit Japan with missiles if Tokyo decides to impose economic sanctions, Japan's sole form of leverage. Leaving aside uncertainty about the accuracy of North Korean missiles, the question of how Japan and the United States would respond remains critical. These regional tensions and uncertainties may finally stimulate Japan to emerge from its futile passivity and become a strong nation willing to accept sacrifices. When Japan again exhibits the backbone that helped it become the first non-white nation to modernize successfully, the balance of power in this region will change dramatically. Japan, not China, is the region's sleeping lion. Shintaro Ishihara is governor of Tokyo.

Monogamy By Jacques Attali

Two hundred years ago, few people foresaw legalized divorce or open homosexuality, let alone gay marriage. Abstract art and jazz were unimaginable. Aesthetics, morals, and family relationships, it seems, are the bane of the futurologist. We constantly speculate about the future balance of power, looming conflicts, and emerging technologies. Yet somehow, we imagine that morals and aesthetics are immutable. So we forget to ask how conceptions of good and evil, acceptable and unacceptable, beauty and ugliness will change. And they will. Monogamy, which is really no more than a useful social convention, will not survive. It has rarely been honored in practice; soon, it will vanish even as an ideal. I do not believe that society will return to polygamy.
Instead, we will move toward a radically new conception of sentimental and love relationships. Nothing forbids a person from being in love with a few people at the same time. Society rejects this possibility today primarily for economic reasons -- to maintain an orderly transmission of property -- and because monogamy protects women against male excesses. But these rationales are dissolving in the face of powerful new trends. The insatiable demand for transparency, fueled by democracy and the free market, is placing the private lives of public men and women under greater scrutiny. The reality of multiple lives and partners will become more apparent, and society's hypocrisy will be revealed. The continued rise of individual freedom will permanently change sexual mores, as it has most other realms. Likewise, jumps in life expectancy will make it nearly impossible to spend one's entire life with one person and to love only that one person. Meanwhile, technological advances will further weaken the links between sexuality, love, and reproduction, which are very different concepts. Widely available birth control has already stripped away an important obstacle to having multiple partners. Just as most societies now accept successive love relationships, soon we will acknowledge the legality and acceptability of simultaneous love. For men and women, it will be possible to have partnerships with various people, who will, in turn, have various partners themselves. At long last, we will recognize that it is human to love different people at the same time. The demise of monogamy will not come without a struggle. All the churches will seek to forbid it, especially for women. For a while, they will hold the line. But individual freedom, once again, will triumph. The revolution will begin in Europe, America will follow, and the rest of the world will eventually come around. The implications will be enormous. Relationships with children will be radically different, financial arrangements will be disrupted, and how and where we live will change. To be sure, it will take decades for the change to be complete, and yet, if we look around, it is already here. Beneath our hypocrisies -- in movies, novels, and music -- the shape of our future is visible. Jacques Attali is a writer, president of PlaNet Finance, an international nonprofit organization, and a contributing editor to FOREIGN POLICY.

Religious Hierarchy By Harvey Cox

It is easy to forget that, for centuries, most people were unaware that they had any choice in religious matters. They were surrounded by people like themselves, and only a few ever met believers from other traditions. No more. A mosque is being built around the corner and, look, the Dalai Lama is on TV again. Thousands of religious and spiritual chat rooms and blogs have popped up. This is the age not only of the cafeteria Catholic, but also of the cafeteria Buddhist, Baptist, and Mormon. More and more people view the world's religious traditions as a buffet from which they can pick and choose. In this environment, religious hierarchy is crumbling fast. The notions of consumer choice and local control have stormed the religious realm, and decentralization of faith is now the order of the day. Religious leaders who once could command, instruct, and expel now must cajole, persuade, and compete. Protestant Christians, of course, have always been suspicious of hierarchy as a matter of principle. In practice, however, they have often let church bureaucrats run their affairs.
Today, local Methodist or Lutheran congregations often ignore the dicta of church leaders, and denominational brand loyalty is a thing of the past. The 77-million-member Anglican Communion recently faced a schism over the ordination of a gay bishop. In response, the Archbishop of Canterbury could only try to encourage a dialogue between the feuding parties; a resolution of the crisis from on high was out of the question. Christians are not the only ones straining against the religious hierarchies of old. In the early 1990s, the entire organized lay wing of Nichiren, the largest Buddhist organization in Japan, effectively seceded, leaving behind a rump priesthood without parishioners. Although a casual observer might assume that hierarchy is alive and well in Islam, the opposite is closer to the truth. Muslims have never developed a clear hierarchy, and they have battled over questions of succession and doctrine ever since the death of the prophet. Even the limited hierarchy that did exist has broken down. The Taliban's leader, Mullah Muhammad Omar, became Afghanistan's spiritual leader -- and even donned the cloak of the prophet -- without the consent of other Islamic religious figures. Osama bin Laden presumes to issue religious rulings without formal training. Indeed, the present crisis in the Islamic world may stem from too many loud and conflicting voices, all claiming religious authority. Even the Catholic Church -- the lodestar of religious hierarchy -- is vulnerable to decentralization. Pope Benedict XVI knows that the church's traditional flowchart is in trouble, and he intends to salvage it. He certainly has a long track record, including his campaign against the Latin American liberation theologians who tried to enlist the resources of the church for radical social change. He was less concerned with their alleged Marxist leanings than with the thousands of lively Catholic base communities they were organizing all over the continent, groups that did not fit into the church's chain of command. Now, American Catholics are also demanding more say, staging vigils in churches they refuse to allow to be closed, withholding contributions, and taking dioceses to court. Voices are bubbling up from the bottom and seeping in from the edges, and hierarchy is showing signs of decay. The guardians of religious hierarchy understand the danger that lurks inside this revolution. Religions without unassailable leaders and with hungry competitors may find themselves marketing as much as ministering. Meeting buyer preferences may be essential in business, but it can eviscerate the integrity of the religious product. Imagine what the Ten Commandments or the Sermon on the Mount might have been if Moses or Christ had poll-tested them. And yet, just such carefully tailored messages may be the key to the spectacular success of the so-called megachurches, which rarely make a move without consulting market research. Grappling with choice contributes to a religious maturity unavailable to someone who simply accepts what is passed down from above, and for that reason it could actually strengthen the capacity of the religious to cope with the challenge of secularism. Of course, the lack of recognized authority could also lead to fragmentation. But even that has an upside. Pentecostalism, for example, has no hierarchy, but its divisions and rivalries have generated an entrepreneurial energy that has made it the fastest-growing Christian movement in the world.
They have proven that sometimes religion without hierarchy can endure, and even thrive. Harvey Cox is professor at Harvard Divinity School and author of Fire from Heaven: The Rise of Pentecostal Spirituality and the Reshaping of Religion in the Twenty-First Century (Reading: Addison-Wesley, 1995). The Chinese Communist Party By Minxin Pei It may appear the Chinese Communist Party has never had it so good. Inside China, the party faces no serious challenges to its authority. Internationally, talk of China collapsing is out, and China rising is in. We are regularly told that globetrotting Chinese diplomats are running circles around their American and European counterparts, cutting deals and burnishing Beijings image around the world. But inexorable forces are arrayed against the long-term survival of the Communist Party in China, and its chances of staying in power for another 35 years are slim. Ultimately, the party may fall victim to its own economic miracle. The partys unwillingness to establish the rule of law and refrain from economic meddling may yet slow the remarkable growth of the last decade. But for the sake of argument, lets assume China can continue to grow. Another 35 years of solid economic growth (even at a much slower 5 percent a year) would mean an annual per capita income of about $7,000. Professionals, private property owners, and hard-working capitalists will number in the hundreds of millions. If history is any guide, it will be next to impossible for an authoritarian regime to retain power in such a modern society, let alone one as large and diverse as Chinas. If economic success does not end one-party rule in China, corruption probably will. Governments free from meaningful restraints on their power invariably grow corrupt and rapacious. That is true in China today. Party discipline has broken down. Selling government appointments for personal profit has become widespread. The cumulative effects of pervasive official corruption can transform a developing autocracy into a predatory regime. The experience of General Suhartos Indonesia suggests that predatory autocracies have trouble turning high rates of economic growth into political stability. There, even 30 years of impressive growth wasnt enough to save the regime. Autocracies that are expanding economically contain the seeds of their own destruction, mainly because they lack the institutional capacity and legitimacy to weather economic shocks. In this postideological era, the partys sole justification for its political monopoly is its capacity to improve the lives of the Chinese people. The party still pays lip service to an amalgam of Marxism-Leninism and Chinese nationalism, but with little credibility. A ruling party without core values lacks mass appeal and the capacity to generate it. Even its own elites are growing increasingly disillusioned, cynical, and fearful about the partys future. It is telling that many senior officials, including one provincial governor, regularly consult fortune tellers. A party capable of reinvention and regeneration might be able to skirt these looming dangers. But the Chinese Communist Party is growing arthritic. By 2040, it will have been in existence for 119 years and in power for 91. Today, the world has no septuagenarian one-party regimesand for good reason. Of course, in democratic societies, political parties undergo major transformations all the time. But one-party regimes have no intrinsic incentive to reengineer themselves and little capacity to correct course. 
Accumulated strains and ailments are left untreated until they precipitate larger crises. The Chinese Communist Party experienced this cycle once before, and the Cultural Revolution nearly destroyed the party. It recovered from that self-inflicted disaster only by thoroughly reinventing itself and adopting a distinctly anticommunist policy of market reforms. Will the party be as lucky next time? If the fortune tellers are being honest, they'll tell China's leaders the future isn't bright. Minxin Pei is senior associate and director of the China Program at the Carnegie Endowment for International Peace.

Auto Emissions By John Browne

Those skeptical of the data on global climate change point out that there is a lot we still don't know. But there are some things we do know: By 2020, there will be another 700 million vehicles on the roads -- many in China. Ensuring that these new vehicles incorporate the latest clean technology will be one of the most critical public policy challenges of our time. The absence of total certainty or consensus on the dangers of climate change must not impede constructive action. Fortunately, scientists and engineers haven't let it. And because of the advances they are making, I am convinced that one of today's most pressing environmental problems will soon disappear. By 2040, harmful vehicle emissions will be a thing of the past. Those who can remember the dark fumes pumped out of cars and trucks know that we've already come a long way. Lead, sulphur, and benzene have been progressively reduced or removed from new vehicles. In the United States, lead emissions have dropped by about 95 percent. If only a third of the cars in 2050 run at 60 miles per gallon rather than 30 miles per gallon, carbon dioxide emissions will decline by 1 million tons a year. But the progress won't stop there. New refinery technology is producing ever cleaner fuels. The quality of lubricants -- which allow engines to operate efficiently -- is improving. And engines themselves, whether hybrids or upgraded internal combustion machines, are becoming cleaner fuel burners. The combination of these trends will have a tremendous impact as the world's capital stock of vehicles turns over during the next 35 years. Vehicles, of course, are only one source of potentially harmful emissions. The static uses of energy -- factories, schools, and homes -- account for the bulk. There, the challenge is to transform both the products that generate energy and the goods produced so that the world's increased energy needs can be met without savaging the environment. It is too early to predict that victory, but work is in progress. And I wouldn't bet against human ingenuity. Lord John Browne is the group chief executive of British Petroleum.

The Public Domain By Lawrence Lessig

Within every culture, there is a public domain -- a lawyer-free zone, unregulated by the rules of copyright. Throughout history, this part of culture has been vital to the spread and development of creative work. It is the part that gets cultivated without the permission of anyone else. This public domain has always lived alongside a private domain -- the part of culture that is owned and regulated, that part whose use requires the permission of someone else. Through the market incentives it creates, the private domain has also produced extraordinary cultural wealth throughout the world. It is essential to how cultures develop. Traditionally, the law has kept these two domains in balance.
The term of copyright was relatively short, and its reach was essentially commercial. But a fundamental change in the scope and nature of copyright law, inspired by a radical change in technology, now threatens this balance. Digital technologies have made it easyindeed, too easyfor creative work in the private domain to spread without permission. Piracy is rampant on the highways of digital technology. In response, code writers (both legislators and technologists) have created an unprecedented array of weapons (both legal and technical) to wage war on the pirates and restore control to the owners of culture. Yet the control these weapons will produce is far greater than anything we have seen in our past. So, for example, the United States has radically increased the reach of copyright regulation. And through the World Intellectual Property Organization, wealthy countries everywhere are pushing to impose even tighter restrictions on the rest of the world. These legal measures will soon be supplemented by extraordinary technologies that will secure to the owners of culture almost perfect control over how their property is used. Any balance between public and private will thus be lost. The private domain will swallow the public domain. And the cultivation of culture and creativity will then be dictated by those who claim to own it. There is no doubt that piracy is an important problemits just not the only problem. Our leaders have lost this sense of balance. They have been seduced by a vision of culture that measures beauty in ticket sales. They are apparently untroubled by a world where cultivating the past requires the permission of the past. They cant imagine that freedom could produce anything worthwhile at all. The danger remains invisible to most, hidden by the zeal of a war on piracy. And that is how the public domain may die a quiet death, extinguished by self-righteous extremism, long before many even recognize it is gone. Lawrence Lessig is professor of law at Stanford University. Doctors' Offices By Craig Mundie Getting sick today is a chore. Finding out whats wrong means scheduling an appointment, driving to the doctors office, filling out forms, waiting, and answering questions while being swabbed and poked. Then you wait for test results, pick up your prescriptions, and schedule more appointments with specialists. The nuisance of seeking care is quickly becoming a crisis around the world, as declining birth rates and aging populations put a crushing burden on national healthcare systems. Soon, governments, insurers, and taxpayers around the world will be forced to confront a complicated and inefficient system that focuses too much on managing disease when it arrives and not enough on preventing people from getting sick. A critical step in reforming the system will be making visits to a doctors office a last resort rather than a first step. This shift will require all kinds of structural, legal, and financial changes, but innovations in computing, communications, biology, nanotechnology, and robotics will ease the way. The Web is already allowing patients quick access to quality health information once dispensed only by white coats. Soon, patients will access customized health plans online. Diagnosing and treating many everyday conditions will be as simple as depositing a drop of blood in a machine and, within moments, having the computer tell you what you have and how to get rid of it. Doctors wont be obsolete, of course. 
In fact, general practitioners will be more important than ever, but they'll spend more time assessing options for preventive action and less time shepherding patients through their offices. Doctors will increasingly rely on highly personalized treatments -- such as new drugs targeted specifically to personal needs, or even nanomachines that attack bad cholesterol or eliminate tumors too small to detect today. Specialists, in turn, will be free to focus on highly difficult procedures and push the frontiers of healthcare. Many of these technologies will reach the developed world first, but the rest of the world will benefit in turn. And it will behoove the rich countries to hasten the spread of their innovations. In an era when new diseases can circle the globe in hours, it's in everyone's interest to stop the next pandemic before it happens. The end result will be a technologically driven shift toward preventive medicine that will help keep soaring health costs in check and make visits to the doctor more rare -- and less painful. Craig Mundie is senior vice president and chief technical officer for advanced strategies and policy at Microsoft.

The King of England By Felipe Fernández-Armesto

In 1948, the embattled Egyptian King Farouk said that soon only five ruling royals would be left: the kings of hearts, clubs, diamonds, and spades, and the English monarch. It now looks as if he was off by one. The monarchy will not, however, drown in a wave of republican sentiment; nor will it be discarded because it fails. The crisis, when it comes, will be provoked by the unwillingness of the royal family to carry on with the job. In theory, royals should symbolize collective national purpose -- if and where such a thing exists -- and embody common values. That was the role for which Queen Elizabeth II's brood seemed perfectly suited when they were young. Courtiers, counsellors, and the media cast them as an ideal of bourgeois gentility. Then history took over. The royals turned out to be all too representative of their times -- more like a sitcom household or a soap-opera dynasty than a model family: dim or daft, undisciplined, self-indulgent, driven by petty enmities, and animated only by infidelities. Their pomp and glitter now look tawdry and overpriced -- a gold tooth in a mouth full of decay. Charles, the prince of Wales, who has done so much for society and the environment, could have harnessed the goodwill of his people. Instead, he has turned his tragedy into farce. The latest of his bumbles was to book a shabby civil wedding, which can be represented as legal only by appealing, ludicrously, to the European Convention on Human Rights. We have thus discovered the world's smallest and richest disadvantaged minority. In short, the royals have done an abominable job in a role they chose for themselves. By any normal criteria of employment, they ought to be sacked. Lamely and risibly, however, they can still do the day job -- which is to stay mum, sign legislation, and entertain top foreigners. The British, on the whole, are willing to let them continue, not out of lingering affection but for want of a viable alternative. Soon, however, the royals themselves will lose the will to go on. Even the prince of Wales, who yearns to be king, no longer likes the country he is called to represent. From his point of view, the British have abandoned all their distinctive traditions -- surrendering them to new, classless, politically correct values. Celebrity has replaced noblesse oblige as the nearest surviving thing to an aristocratic ideal.
At the millennium celebrations, the queen had to link arms with the prime minister and mouth "Auld Lang Syne" like a barmaid. If you're a royal, what is the point of carrying on in such a distressingly unfamiliar world? The next generation -- the duo of Wills and Harry -- has no appetite for the job. Both take after their mother. The shallow, meretricious egocentrism of Diana's life and times represents the only future these postmodern princes can hope to enjoy. Deracination, anomie, and future-shock separate them from the traditions to which they are supposedly heirs. Neither of them is very clever -- indeed, even with every advantage possible, Harry proved incapable of getting close to an average performance in national entrance exams. Yet both princes surely have enough sense to realize that the job of king is now utterly unappealing. After what their parents have suffered from the public and the press -- the obloquy, the derision, the intolerable intrusions into their private lives -- they can only face their fate with dismay. As Charles grows old, the boys will long for the prospect of being pensioned playboys rather than dutiful royals. The problem for the monarchy will be of a kind well known in other kinds of theater: how to get bums on thrones. Felipe Fernández-Armesto is professor of history at Tufts University and a professorial fellow at Queen Mary, University of London. He is author of Ideas That Changed the World (New York: DK Pub., 2003).

The War on Drugs By Peter Schwartz

The war on drugs will soon be over. It won't have been won or lost, and we certainly won't have wiped out illicit drug use. People will still pursue their personal pleasures and uncontrollable addictions. No, the war on drugs will end because drugs as we know them today will be gone. The model drug of the future is already here in the form of crystal methamphetamine, a drug that is sweeping the United States and making inroads abroad. It's cheap and easy to make -- little more than Sudafed doctored up with plant fertilizer. One hundred percent of the profit goes to the manufacturer; no intermediary or army of couriers is required. Made of locally acquired materials in the garage or basement, the drug's production is nearly impossible to stop. Only the stupid and incompetent get caught. Thirty-five years from now, the illicit professionals who remain in the business will be custom drug designers catering to the wealthy. Their concoctions will be fine-tuned to one's own body and neural chemistry. In time, the most destructive side effects will be designed out, perhaps even addiction itself. These custom drug dealers will design the perfect chemical experience for those who can afford it. The combination of cocaine with skiing, sex, or other intense physical activities is common today; likewise for pot and making music. In the future, there will be custom drugs for meals, golf, gardening, and more. Like crystal meth today, some drugs will reach the point of home manufacturing. And they will all be designed to make their use invisible to others -- no red eyes, nervous tics, or lethargy. The shift to custom drugs that are locally produced will have some positive effects. Opium fields in Afghanistan and coca plantations in the mountains of Colombia will wither, creating new economic realities for those countries. The loss of cash crops will sting at first, but farmers and traders producing legal goods that are taxable and transparent will ultimately facilitate the building of healthy societies.
Cocaine couriers wont sweat their way through customs, and human mules will stop smuggling bags of heroin in their guts. Drug lords will not need to launder billions of dollars or pay for private armies, and street corners wont have drug dealers waging gunfights for turf. The prison population in Western countries, and particularly the United States, will shrink. But as the violence of the drug trade dies down and as drugs become safer, drug use will blossom. The boundary between legal performance enhancement (Viagra) and the illegal drugs of pleasure and creativity will blur. The political and social pressure against drug use will remain, but it will increasingly resemble the campaigns against performance- enhancing drugs for athletes. Widespread use will spark debates about fairness and authenticity: Is a drug-using musician better than one who composes and performs naturally? Is it fair for only the wealthy to have the richest sexual or culinary experiences? Just as the legal system is struggling with new realities of intellectual property in a digital age, it will struggle to control innovation in the chemistry of pleasure. We may even wistfully look back at a time when there were smugglers to be chased and coca fields to be burned. The bad guys were brutes, largely foreign or inner-city hoodlums. The new drug sellers will be chemists, most likely caught on tax-evasion charges. Users, too, will be harder to hate. Theyll look a lot like you and me. Peter Schwartz is chairman of the Global Business Network, a Monitor Group Company. Laissez-Faire Procreation By Lee Kuan Yew Demography, not democracy, will be the most critical factor for security and growth in the 21st century. Booming populations are a drag on developing countries, and low fertility rates are sapping growth in developed societies. The poor are making themselves poorer through rising birth rates, and the rich will have less dynamic societies because they are not replacing themselves fast enough. Population growth is outstripping the capacity of governments to deliver basic services in the Middle East and Africa, producing breeding grounds for extremist and terrorist movements. Rich societies will, in turn, see migration from these places as a threatand they will resist. Sex, marriage, and procreation may not be beyond the reach of government influence for much longer. Governments facing population explosions and implosions will soon have no choice but to grapple with matters generally considered private. Efforts to cajole and educate populations into more positive procreation trends have had only limited success. European states, for example, have made Herculean efforts to reverse declining fertility rates, with disappointing results. Singapores fertility rate is a dangerously low 1.25 percent. Pro-natal policies have increased fertility only slightly. Without immigration that often exceeds the natural yearly growth, Singapores economic growth rate would be as sluggish as Japans. When public campaigns have partially succeeded, as in some Scandinavian countries and in France, they have forced society to reconceptualize the roles of marriage and the family, with the father taking on more of the mothers role, a transformation Asian families find difficult to accomplish. Even then, these countries are unlikely to get fertility rates to exceed replacement levels. Barring a dramatic change of course, they will need immigrants to keep their economies vibrant. 
Countries that most welcome migrants have an economic advantage, but open immigration policies also carry risks. New waves of migrants will be ethnically different, less educated, and sometimes unskilled. They will often be among the very religious in otherwise secular societies. Many will move illegally. The greater ethnic diversity they create can cause social tensions and have profound effects on cultural identity and social cohesion. Japan is perhaps the best example of a state that both fears and needs immigration. It has a reproduction rate of less than 1.3 percent and a rapidly aging population, yet it has shown a limited willingness to welcome immigrants. The United States, on the other hand, has traditionally been the most welcoming of immigrants. Although it has near replacement fertility levels, 80 percent of its projected population growth of 120 million in the next 50 years will come from immigration. Will it remain as open politically and culturally as Hispanics change the countrys character and culture? This dilemma is even starker for Europe, where most migrants are Muslims from North Africa and the Middle East. They are not likely to be assimilated into a largely Christian secular society, and their social isolation could impede the struggle against Islamic terror. It will gradually dawn on governments that immigration alone cannot solve their demographic troubles and that much more active government involvement in encouraging or discouraging procreation may be necessary. Those governments most able to think imaginatively about these problems will save their societies and their neighbors much pain and suffering. Lee Kuan Yew was prime minister of Singapore from 1959 to 1990 and is now minister mentor. Polio By Julie L. Gerberding Few causes merit greater celebration than the end of a disease. But despite the dedicated efforts of the last century, the world has only held such a celebration oncewhen smallpox was eradicated in 1977. Current generations who know smallpox only as a fading scar on the upper arm forget the impact that this global killer had over centuries. Its eradication in the United States alone has saved countless lives and at least $17 billion. Today, the world is poised to add another disease to the list of those that will no longer threaten humans: polio. As difficult as smallpox eradication was, polio has presented an even tougher challenge. Some polio infections alert doctors with tell-tale paralysis, but for each of these cases, about 200 people may have only minor flu-like symptoms and can silently transmit the disease for weeks. As a logistical challenge, one observer has written, the difference between smallpox and polio eradication is the difference between extinguishing a candle flame and putting out a forest fire. Yet we have never been closer to ending the disease. In 1988, there were an estimated 350,000 cases of polio worldwide. In 2005, the confirmed caseload has been slashed to just 760 people in 13 countries. Through national and international leadership, local heroism, and economic investments, immunization rates are climbing in most countries. In 2003, 415 million children in 55 countries were immunized during National Immunization Days, using more than 2.2 billion doses of oral polio vaccine. Most national health services have responded quickly to outbreaks. China, for example, stamped out a potential flare-up last year. 
The World Health Organization launched a massive preemptive vaccination campaign in Somalia to prevent an outbreak from spreading into the country from neighboring epidemic areas. The obstacles now are not a lack of vision or inadequate technology; they are civil war and cultural mistrust. Several Nigerian states have, at times, blocked polio immunization campaigns, believing the vaccine to be a Western plot designed to render their women infertile. The August 2003 refusal by the state of Kano resulted in hundreds of children being paralyzed and the virus spreading to neighboring countries. Despite these setbacks, the U.S. Centers for Disease Control and Prevention and its partners around the world believe that polio eradication is within our grasp. Each global infectious disease poses unique challenges, but the strategy is clear: eradication in one region after another; isolation to a limited number of countries; and aggressive campaigns to break the chain of transmission and infection. In the Americas, public health authorities have already eradicated measles and are stopping the transmission of rubella. We are optimistic that these diseases, and others, will soon go from endangered to extinct. These eradications will be triumphs for public health scientists and practitioners. Even more important, they will be a testament to the power of global cooperation against diseases that recognize no boundaries. Julie L. Gerberding is director of the U.S. Centers for Disease Control and Prevention.

Sovereignty By Richard N. Haass

Sovereignty -- the notion that governments are free to do what they want within their own territory -- has provided the organizing principle of international relations for more than 350 years. Thirty-five years from now, sovereignty will no longer be sanctuary. Powerful new forces and insidious threats will converge against it. Nation-states will not disappear, but they will share power with a larger number of powerful non-sovereign actors than ever before, including corporations, nongovernmental organizations, terrorist groups, drug cartels, regional and global institutions, and banks and private equity funds. Sovereignty will fall victim to the powerful and accelerating flow of people, ideas, greenhouse gases, goods, dollars, drugs, viruses, e-mails, and weapons within and across borders. All of this traffic challenges one of the fundamentals of sovereignty: the ability to control what crosses borders. Sovereign states will increasingly measure their vulnerability not to one another but to forces of globalization beyond their control. Impersonal forces aren't the whole story, though. States in the future will sometimes choose to strip sovereignty from their fellow states. Similarly, a government that lacks the capacity or will to provide for the basic needs of its citizens will forfeit its sovereignty. That reflects not just moral scruple but also a hardheaded understanding that neglect -- benign or otherwise -- can generate destabilizing refugee flows and trigger state failure, which creates openings for terrorists. The 1999 NATO intervention in Kosovo, which forced Serbia to give up control of the restive province after years of abusive rule, may well be a prototype for the future. Implicit in all this is the notion that sovereignty is conditional, even contractual, rather than absolute. If a state sponsors terrorism, develops weapons of mass destruction, or conducts genocide, then it forfeits the normal benefits of sovereignty and opens itself up to attack, removal, or occupation.
The diplomatic challenge will be to gain widespread support for principles of state conduct and a procedure for determining the remedy when these principles are violated. States will also willingly choose to shed some of their sovereignty. This trend is well under way, most clearly in the trade realm. Governments agree to accept the rulings of the World Trade Organization because, on balance, they benefit from a rules-based international trading order, even if a particular ruling impinges on their right to protect national industries. Global climate change is also prompting limits on sovereignty. The Kyoto Protocol, which runs through 2012, requires signatories to cap greenhouse gas emissions. One can imagine an even more ambitious accord in which a larger number of governments, including the United States, China, and India, would accept stricter limits based on a recognition that they would be worse off if no country accepted such restraints. All this adds up to a world that is not fully sovereign. But nor is it one of either world government or anarchy. The world 35 years from now will be semi-sovereign. It will reflect the need to adapt legal and political principles to a world in which the most serious challenges to order come from what global forces do to states and what governments do to their citizens rather than from what states do to one another. Richard N. Haass is president of the Council on Foreign Relations and author of The Opportunity: Americas Moment to Alter Historys Course (New York: PublicAffairs, 2005). Anonymity By Esther Dyson A world where everyone knows everything about everyone else has been a common dystopia. The villain in these frightening worlds has often been a shadowy government, thirsting for information and control. And that remains a frightening possibility in many parts of the world. But there are other, less gloomy outcomes. A world without secrets might actually yield a more forgiving culture with stronger, more informed individuals. Citizens of the developed world now give off information about themselves at unprecedented rates. Authorities demand information from us when we fly, pass through tollgates, cross borders, and enter public buildings. As the investigation of the July London bombings has revealed, dozens of cameras may capture a city stroll. The cyber trails that people leave are now well known. As many have discovered to their chagrin, records of e-mails sent and Web sites visited rarely disappearand they often pop up at the most inconvenient moments. Its ironic that the Web once seemed to promise individuals new opportunities to explore the world without showing their face. Instead, it is turning out to be a powerful force against anonymity. Most information about peoples online actions is traceableif someone with resources cares to go to the trouble. But there will be much more to this trend than the familiar fear of governments spying on innocent victims, or even they-asked-for-it dissidents. The bigger questions revolve around the tolerance of societies for diversity and recognition of the human capacity for change. The technologically adept and dedicated may be able to preserve some form of anonymityfor a time. Some people, for example, will create multiple identities online for the various sites they visit, the social networks they enter, and the online merchants they frequent. To be sure, most identities will be traceable by authorities with subpoena power, but not by your neighbors, your colleagues, or even your prospective employer. 
But, in the end, these defenses will break down and our slime trails will become increasingly visible. Those trails will pose a challenge for societies eager to judge instantly. Are we likely to have a tolerant society when whole swaths of once private behavior become visible? Unprecedented transparency may actually force a cultural changea sort of statute of limitations on reputations. Curiosity will continue (were human beings, after all), but a healthier understanding of how people can change may be the ultimate result. This salutary cultural change will not ease the concerns of those who fear anonymitys passing. But there is reason to doubt the breadth of this concern. The popular perception is that people want anonymity; in fact, it appears that most people crave recognition. Many young people want it so much that they join multiple networking sites, rate themselves and friends on various scales, and fill in online questionnaires and surveys. Even as individuals evince more and more concern about privacy and identity theft, they flood onto the Web as themselves, publishing blogs, posting photos, contributing reviews, and revealing all (or so it seems) on dating sites. In effect, people are trading anonymity for a voice. The Web is empowering individuals to engage with others not just as consumers picking from whats on offer, but as active negotiators, defining specifications for others to meet. That effect is particularly visible in the commercial and social realms, but less clear vis-?-vis the governments of the world. As anonymity fades, a critical question will remain: Are we getting as much as were giving up? Esther Dyson is editor of Release 1.0, the technology industry newsletter published by CNET Networks. From checker at panix.com Fri Sep 30 20:55:15 2005 From: checker at panix.com (Premise Checker) Date: Fri, 30 Sep 2005 16:55:15 -0400 (EDT) Subject: [Paleopsych] NYT: (Fiona Apple) Re-emerging After a Strange Silence Message-ID: Re-emerging After a Strange Silence http://www.nytimes.com/2005/09/26/arts/music/26appl.html [Now I don't like this kind of music the least little bit, but I was struck by her mournful countenance in a photograph when she first hit the scene. Born between my two daughters, she has a quintessential Generation X personality. William Strauss and Neil Howe, in _Generations: The History of America's Future, 1584 to 2069 (NY: William Morrow, 1991) and one of the ten books that has most influenced my thinking, say this: "A recessive REACTIVE GENERATION grows up as underprotected and criticized youths during a spiritual awakening; matures into risk-taking, alienated rising adults; mellows into pragmatic midlife leaders during a secular crisis; and maintains respect (but less influence) as reclusive elders" (p. 74). [This generation is made up of those born between 1961 and 1981, while the boomers, an idealist generation, were born between 1943 and 1960. The last reactive generation was the Lost Generation, those born between 1883 and 1900. Previous reactive generations were the Cavalier (born 1615-47). Think Jacob Leisler, Elisha Cooke, Increase Mather, Samuel Willard, Benjamin Church, Nathaniel Bacon, Metacomet, William Kidd) Liberty (born 1724-41) Think Washington, Adams, Henry, Paine, Allen, Arnold, Revere, Boone. No need to give first names here! Gilded (born 1822-42). Think Grant, Cleveland, Twain, Dickinson. Carnegie, Rockefeller, Wild Bill Hickock, Sitting Bull) Lost (born 1883-1900). 
Think Truman, Eisenhower, Hemingway, Fitzgerald, Patton, Capone, Bogart, Mae West. Reactive (as the authors then called it. They didn't like the term Generation X, since it was not really a generation, but the term stuck. Born 1961-81). Think (this was written in 1991, remember) Amy Carter, Samantha Smith, Michael Lewis, Bret Easton Ellis, Mary Lou Retton, Tom Cruise, Michael J. Fox, Mike Tyson. (All this from the foldout page of the book) [The upcoming generation, those born in 1982 and after, will be a "civic" generation. The last one was the G.I. (government issue, which now calls itself the "Greatest Generation"), those born 1901-24. Before that, we must go back to Mr. Jefferson's Republican generation (1742-66). The Strauss-Howe cycle breaks down, since the Civil War was so disruptive that no civic generation emerged. All U.S. Presidents between Kennedy and Bush I were G.I. There was never a President from the following Adaptive generation (Silent, born 1925-42), though Dukakis and Mondale, if I recall correctly, were of the right age. Clinton and Bush II are both from the Idealist generation (Boomers, 1943-60). [What makes these cycles more than just a coincidence is the authors' idea that America is unique in changing rapidly and in having enough parental choice in upbringing to affect how parents raise their children as they react to their own upbringing. I'd have to restudy the book, but I recall that the authors did see some generational change in Europe, though not nearly as great as in the United States. They didn't see these cycles anywhere else. [What would be exciting would be to know whether the rest of the world is now so affected by America that it, too, is going through the same (or maybe just similar) generational cycles of the same (or similar) duration. Or maybe there is now enough change and parental choice outside the Occident for there to be generational cycles but of different kinds and different lengths depending on the country or region. [Convergence here to the American model would be a change of the deepest sort. [Now to Fiona.] By LOLA OGUNNAIKE Fiona Apple, the soul-baring singer who hasn't released an album since 1999, wishes she had a more compelling explanation for her absence. "The truth is that I haven't been doing anything that interesting," she said, shrugging one afternoon late last week. "I got off the road last time and I just felt like not writing and not doing anything for a long time." Ms. Apple will finally be back next week with the release of "Extraordinary Machine," the third album in her decade-long career. And judging from the 500 fans who flocked to the Virgin Megastore in Union Square in Manhattan on Tuesday to hear her sing, her return is none too soon. Ms. Apple attained cause-célèbre status earlier this year when fans pressured her record company, Sony, to release the album, an early, unfinished version of which had been leaked on the Internet. She called her decision to step back into the limelight a "really big experiment," given her past public struggles with popular success. Will the touring, television appearances and photo shoots cause her to "freak out again," she wondered? Or will she manage to find some pleasure in it all? While promoting her first album her attitude was, "Please like me, please understand me," Ms. Apple, 28, said chuckling. "The second time was: 'Please don't misunderstand me again. Please understand me this time.'
And this time it's really about me taking something that's been so stressful in the past and making it joyful. I don't want to be suffering all the time."

Suffering - Ms. Apple has made a cottage industry of it. She even addresses her penchant for pain on the title track of "Extraordinary Machine": "I seem to you to seek a new disaster every day." But, she writes, "I mean to prove I mean to move in my own way, and say, / I've been getting along for long before you came into play."

In 1996 she made her debut with "Tidal," which sold three million copies and won a Grammy. Her second effort, "When the Pawn ..." - the unabridged title is 90 words long - was hailed by critics but proved a commercial disappointment. It did not help that Ms. Apple had a reputation for being difficult, a tortured soul with attitude to spare. In one of her more infamous tantrums, she berated audience members at the 1997 MTV Video Music Awards ceremony for worshiping celebrities. A Manhattan concert in 2000 was cut short when the singer, upset over sound difficulties, began sobbing uncontrollably onstage.

Ms. Apple, who admits to being emotional ("it runs in my family"), said she was cast as a troubled loose cannon by the media because controversy makes great headlines. "I was the right girl for the part," she conceded. "I cried a lot. I said a lot of stuff. There were lots of great rumors about me. Everything I did was put in bold print and italics."

It is difficult to believe that this tiny sliver of a woman once caused such a big commotion. Dressed in a floor-length peasant skirt, T-shirt and faded navy-blue hoodie, a genial Ms. Apple spoke in sprawling, uninterrupted sentences as she sat in a restaurant at her Midtown hotel. Thoughtful and introspective, she was all too willing to turn her haunting blue-silver eyes inward.

During her sabbatical, she said, she would often sit in her backyard in Venice, Calif., thinking and playing with pine cones. "I was making little pine-cone people with razor blades," Ms. Apple said, raking her fingers through her wavy brown hair. "That's all I did."

Her inertia did not sit well with some in her immediate circle. They accused Ms. Apple of being lazy, crazy and unproductive, she said. "It really hurt a couple of close relationships of mine," said Ms. Apple, who split with her boyfriend Paul Thomas Anderson, the film director, three years ago. "It infuriated me because they couldn't believe that when I'm sitting and thinking, that's how I work."

Several years ago she decided that she was ready to begin recording again and called on Jon Brion, who produced "When the Pawn ..." Their collaboration, smooth before, was shaky this time. "Jon would play me stuff and I wouldn't be able to tell what I liked and what I didn't like," Ms. Apple said. After emerging from a deep funk, she eventually decided to rerecord her songs with the producer Mike Elizondo, who has worked with Dr. Dre.

According to Ms. Apple, things were going well until executives at Sony began asking her to submit individual songs for their approval; only then would they determine how much more recording money she would receive. Sony had already sunk nearly $800,000 into recording the original version of "Extraordinary Machine." "They basically wanted me to audition my songs," Ms. Apple said, visibly offended. Lois Najarian, a representative for Sony, denied this and blamed Ms. Apple's perception of events on miscommunication. "That was surely not the case," Ms. Najarian said.
Unhappy with what she termed an "unlivable" arrangement, Ms. Apple threatened to abandon the project. When the Brion-produced version of "Extraordinary Machine" showed up on the Internet earlier this year, Ms. Apple, upset that her unfinished work was available, thought Sony would scrap the album. "Who is going to give me money to make songs that are already out there?" she recalled thinking at the time.

Little did Ms. Apple know that a group of fans was pleading with Sony to release her album, which they thought had been shelved. (Both Sony and Ms. Apple say it was not.) On the Web site www.freefiona.com they railed against the "corporate giant" standing between them and their beloved. "Please give us Fiona and we'll give you money back," read one poem posted on the site. Hundreds of foam apples were sent to the company, and in January a dedicated band of protesters, led by the Free Fiona founder Dave Muscato, stood outside the Madison Avenue offices of Sony BMG chanting, "We want Fiona."

She is quick to credit her Free Fiona fans with her comeback. "It's good to know that if you organize you can make change, because that's certainly not what I was doing," Ms. Apple said. "I was walking away."

From shovland at mindspring.com  Thu Sep 29 03:39:22 2005
From: shovland at mindspring.com (shovland at mindspring.com)
Date: Wed, 28 Sep 2005 20:39:22 -0700 (GMT-07:00)
Subject: [Paleopsych] Liberal Revenge Gazette 9/28/05
Message-ID: <9211414.1127965162746.JavaMail.root@mswamui-cedar.atl.sa.earthlink.net>

A non-text attachment was scrubbed...
Name: LiberalGazette092805.jpg
Type: image/pjpeg
Size: 173175 bytes
Desc: not available
URL:

From shovland at mindspring.com  Fri Sep 30 02:34:53 2005
From: shovland at mindspring.com (shovland at mindspring.com)
Date: Thu, 29 Sep 2005 19:34:53 -0700 (GMT-07:00)
Subject: [Paleopsych] Born Again, Again
Message-ID: <1794986.1128047693787.JavaMail.root@mswamui-bichon.atl.sa.earthlink.net>

A non-text attachment was scrubbed...
Name: BornAgainAgain.jpg
Type: image/pjpeg
Size: 143258 bytes
Desc: not available
URL: